path | concatenated_notebook |
---|---|
p4/project_notebook.ipynb | ###Markdown
Implementing a Route Planner

In this project you will use A\* search to implement a "Google-maps" style route planning algorithm.
###Code
# Run this cell first!
from helpers import Map, load_map, show_map
from student_code import shortest_path
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Map Basics
###Code
map_10 = load_map('map-10.pickle')
show_map(map_10)
###Output
_____no_output_____
###Markdown
The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. On the graph above, the edge between two nodes (intersections) represents a literal straight road, not just an abstract connection of two cities.

These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`.

**Intersections**

The `intersections` are represented as a dictionary. In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
###Code
map_10.intersections
###Output
_____no_output_____
###Markdown
**Roads**

The `roads` property is a list where, if `i` is an intersection, `roads[i]` contains a list of the intersections that intersection `i` connects to.
###Code
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map('map-40.pickle')
show_map(map_40)
###Output
_____no_output_____
###Markdown
Advanced Visualizations

The map above shows a network of roads which spans 40 different intersections (labeled 0 through 39). The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.

* `start` - The "start" node for the search algorithm.
* `goal` - The "goal" node.
* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
###Code
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
###Output
_____no_output_____
###Markdown
Writing your algorithm

You should open the file `student_code.py` in another tab and work on your algorithm there. Do that by selecting `File > Open` and then selecting the appropriate file.

The algorithm you write will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start and goal as above, your algorithm should produce the path `[5, 16, 37, 12, 34]`:

```bash
> shortest_path(map_40, 5, 34)
[5, 16, 37, 12, 34]
```
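To make the shape of the task concrete, here is one minimal way a `shortest_path` could be put together: an A\* search over the `Map`'s `intersections` and `roads`, using the straight-line distance between intersections as an admissible heuristic. This is only a sketch of the idea, not the graded `student_code.py` solution.

```python
import math
import heapq

def shortest_path(M, start, goal):
    """A* over M.intersections / M.roads; returns a list of intersection ids."""
    def heuristic(a, b):
        # straight-line distance; admissible because every road is a straight segment
        (x1, y1), (x2, y2) = M.intersections[a], M.intersections[b]
        return math.hypot(x2 - x1, y2 - y1)

    frontier = [(heuristic(start, goal), start)]   # min-heap ordered by f = g + h
    g_cost = {start: 0.0}                          # cheapest known cost from start
    came_from = {start: None}

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []                              # walk back through came_from
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for neighbor in M.roads[current]:
            new_cost = g_cost[current] + heuristic(current, neighbor)  # edge length
            if neighbor not in g_cost or new_cost < g_cost[neighbor]:
                g_cost[neighbor] = new_cost
                came_from[neighbor] = current
                heapq.heappush(frontier, (new_cost + heuristic(neighbor, goal), neighbor))
    return None                                    # no route between start and goal
```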
###Code
path = shortest_path(map_40, 5, 34)
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path)
###Output
_____no_output_____
###Markdown
Testing your Code

If the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:

**Submission Checklist**

1. Does my code pass all tests?
2. Does my code implement `A*` search and not some other search algorithm?
3. Do I use an **admissible heuristic** to direct search efforts towards the goal?
4. Do I use data structures which avoid unnecessarily slow lookups?

When you can answer "yes" to all of these questions, submit by pressing the Submit button in the lower right!
###Code
from test import test
test(shortest_path)
###Output
_____no_output_____ |
tests/Test_main.ipynb | ###Markdown
Tests on Main Function
###Code
from VIonLDA import *
import numpy as np
from scipy.special import digamma, polygamma
import time
###Output
_____no_output_____
###Markdown
Simulation data
###Code
np.random.seed(123)
docs, alpha, BETA=simulation_data()
###Output
_____no_output_____
###Markdown
Vectorization version Variational EM on LDA
###Code
M_step_Vectorization?
%%time
a1,B1=M_step_Vectorization(docs=docs,k=10,tol=1e-3,tol_estep=1e-3,max_iter=100,initial_alpha_shape=100,initial_alpha_scale=0.01)
###Output
_____no_output_____
###Markdown
Get the mmse
###Code
mmse(alpha=alpha,BETA=BETA,alpha_est=a1,BETA_est=B1)
###Output
_____no_output_____ |
examples/13-lego-3d.ipynb | ###Markdown
Lego 3D

This Jupyter notebook demonstrates making a simple 3D game with Jupylet. Run the notebook and click the game canvas once it shows up to bring it into focus, and then use the arrows and keys W, A, S, D, SHIFT, ALT and CAPSLOCK to control the camera or the moon.

* Moon textures are by [NASA](https://svs.gsfc.nasa.gov/4720).
* Nebula sky was generated with [spacescape](https://github.com/petrocket/spacescape) by Alex Peterson.
* Alien texture is from [TextureHaven](https://texturehaven.com/textures/).
* Eye texture is from [Flickr](https://www.flickr.com/photos/filterforge/33680912465/in/photostream/), stated as CC BY 2.0.
###Code
import logging
import random
import struct
import time
import glm
import sys
import os
sys.path.insert(0, os.path.abspath('./..'))
from jupylet.label import Label
from jupylet.app import App
from jupylet.state import State
from jupylet.loader import load_blender_gltf
logger = logging.getLogger()
app = App(768, 512)
scene = load_blender_gltf('./scenes/lego/lego.gltf')
scene.shadows = True
camera = scene.cameras['Camera']
brick = scene.meshes['brick.green']
state = State(
capslock = False,
shift = False,
alt = False,
up = False,
down = False,
right = False,
left = False,
key_w = False,
key_s = False,
key_a = False,
key_d = False,
lv = glm.vec3(0),
av = glm.vec3(0),
)
@app.event
def key_event(key, action, modifiers):
logger.info('Enter key_event(key=%r, action=%r, modifiers=%r).', key, action, modifiers)
keys = app.window.keys
value = action == keys.ACTION_PRESS
if key == keys.CAPS_LOCK and value:
state.capslock = not state.capslock
state.alt = modifiers.alt
state.shift = modifiers.shift
if key == keys.SPACE:
state.lv *= 0.
state.av *= 0.
if key == keys.UP:
state.up = value
if key == keys.DOWN:
state.down = value
if key == keys.LEFT:
state.left = value
if key == keys.RIGHT:
state.right = value
if key == keys.W:
state.key_w = value
if key == keys.S:
state.key_s = value
if key == keys.A:
state.key_a = value
if key == keys.D:
state.key_d = value
obj = brick if state.capslock else camera
linear_acceleration = 1 / 2
angular_acceleration = 1 / 24
@app.run_me_every(1/48)
def move_object(ct, dt):
global obj
obj = brick if state.capslock else camera
sign = -1 if obj is camera else 1
if state.right and state.shift:
state.av.z += angular_acceleration * sign
if state.right and not state.shift:
state.av.y -= angular_acceleration
if state.left and state.shift:
state.av.z -= angular_acceleration * sign
if state.left and not state.shift:
state.av.y += angular_acceleration
if state.up:
state.av.x -= angular_acceleration
if state.down:
state.av.x += angular_acceleration
if state.key_w and state.alt:
state.lv.y += linear_acceleration
if state.key_w and not state.alt:
state.lv.z += linear_acceleration * sign
if state.key_s and state.alt:
state.lv.y -= linear_acceleration
if state.key_s and not state.alt:
state.lv.z -= linear_acceleration * sign
if state.key_a:
state.lv.x += linear_acceleration * sign
if state.key_d:
state.lv.x -= linear_acceleration * sign
state.lv = glm.clamp(state.lv, -64, 64)
state.av = glm.clamp(state.av, -64, 64)
obj.move_local(dt * state.lv)
obj.rotate_local(dt * state.av.x, (1, 0, 0))
obj.rotate_local(dt * state.av.y, (0, 1, 0))
obj.rotate_local(dt * state.av.z, (0, 0, 1))
state.lv *= 0.67 ** dt
state.av *= 0.67 ** dt
label0 = Label('Hello World!', color='white', font_size=12, x=10, y=74)
label1 = Label('Hello World!', color='white', font_size=12, x=10, y=52)
label2 = Label('Hello World!', color='white', font_size=12, x=10, y=30)
label3 = Label('Hello World!', color='white', font_size=12, x=10, y=8)
hello_world = Label('hello, world 3D!', color='cyan', font_size=24, x=575, y=10)
@app.event
def render(ct, dt):
app.window.clear()
scene.draw()
label0.text = 'time to draw - %.2f ms' % (1000 * app._time2draw_rm)
label1.text = 'up - %r' % obj.up
label2.text = 'front - %r' % obj.front
label3.text = 'position - %r' % obj.position
label0.draw()
label1.draw()
label2.draw()
label3.draw()
hello_world.draw()
#app.get_logging_widget()
app.run()
###Output
_____no_output_____
###Markdown
Lego 3D

This Jupyter notebook demonstrates making a simple 3D game with Jupylet. Run the notebook and click the game canvas once it shows up to bring it into focus, and then use the arrows and keys W, A, S, D, SHIFT, ALT and CAPSLOCK to control the camera or the moon.

* Moon textures are by [NASA](https://svs.gsfc.nasa.gov/4720).
* Nebula sky was generated with [spacescape](https://github.com/petrocket/spacescape) by Alex Peterson.
* Alien texture is from [TextureHaven](https://texturehaven.com/textures/).
* Eye texture is from [Flickr](https://www.flickr.com/photos/filterforge/33680912465/in/photostream/), stated as CC BY 2.0.
###Code
import random
import struct
import time
import sys
import os
sys.path.insert(0, os.path.abspath('./..'))
import glm
import pyglet.window.key as key
from jupylet.label import Label
from jupylet.app import App, State
from jupylet.model import load_blender_gltf
app = App(768, 512)
scene = load_blender_gltf('./scenes/lego/lego.gltf')
camera = scene.cameras['Camera']
brick = scene.meshes['brick.green']
state = State(
capslock = False,
shift = False,
alt = False,
up = False,
down = False,
right = False,
left = False,
key_w = False,
key_s = False,
key_a = False,
key_d = False,
lv = glm.vec3(0),
av = glm.vec3(0),
)
@app.event
def on_key_press(symbol, modifiers):
on_key(symbol, modifiers, True)
@app.event
def on_key_release(symbol, modifiers):
on_key(symbol, modifiers, False)
def on_key(symbol, modifiers, value):
if symbol == key.CAPSLOCK and value:
state.capslock = not state.capslock
if symbol == key.LALT:
state.alt = value
if symbol == key.LSHIFT:
state.shift = value
if symbol == key.UP:
state.up = value
if symbol == key.DOWN:
state.down = value
if symbol == key.LEFT:
state.left = value
if symbol == key.RIGHT:
state.right = value
if symbol == key.W:
state.key_w = value
if symbol == key.S:
state.key_s = value
if symbol == key.A:
state.key_a = value
if symbol == key.D:
state.key_d = value
obj = brick if state.capslock else camera
linear_acceleration = 1 / 2
angular_acceleration = 1 / 24
@app.run_me_again_and_again(1/48)
def move_object(dt):
global obj
obj = brick if state.capslock else camera
sign = -1 if obj is camera else 1
if state.right and state.shift:
state.av.z += angular_acceleration * sign
if state.right and not state.shift:
state.av.y -= angular_acceleration
if state.left and state.shift:
state.av.z -= angular_acceleration * sign
if state.left and not state.shift:
state.av.y += angular_acceleration
if state.up:
state.av.x -= angular_acceleration
if state.down:
state.av.x += angular_acceleration
if state.key_w and state.alt:
state.lv.y += linear_acceleration
if state.key_w and not state.alt:
state.lv.z += linear_acceleration * sign
if state.key_s and state.alt:
state.lv.y -= linear_acceleration
if state.key_s and not state.alt:
state.lv.z -= linear_acceleration * sign
if state.key_a:
state.lv.x += linear_acceleration * sign
if state.key_d:
state.lv.x -= linear_acceleration * sign
state.lv = glm.clamp(state.lv, -10, 10)
state.av = glm.clamp(state.av, -10, 10)
obj.move_local(dt * state.lv)
obj.rotate_local(dt * state.av.x, (1, 0, 0))
obj.rotate_local(dt * state.av.y, (0, 1, 0))
obj.rotate_local(dt * state.av.z, (0, 0, 1))
state.lv *= 0.67 ** dt
state.av *= 0.67 ** dt
label0 = Label('Hello World!', color='white', font_size=12, x=10, y=74)
label1 = Label('Hello World!', color='white', font_size=12, x=10, y=52)
label2 = Label('Hello World!', color='white', font_size=12, x=10, y=30)
label3 = Label('Hello World!', color='white', font_size=12, x=10, y=8)
hello_world = Label('Hello World 3D!', color='cyan', font_size=16, x=600, y=10)
dtl = [0]
@app.event
def on_draw():
app.window.clear()
app.set3d()
t0 = time.time()
scene.draw()
dtl.append(0.98 * dtl[-1] + 0.02 * (time.time() - t0))
dtl[:] = dtl[-256:]
app.set2d()
label0.text = 'time to draw - %.2f ms' % (1000 * dtl[-1])
label1.text = 'up - %r' % obj.up
label2.text = 'front - %r' % obj.front
label3.text = 'position - %r' % obj.position
label0.draw()
label1.draw()
label2.draw()
label3.draw()
hello_world.draw()
app.run()
scene.lights['Light.Sun'].intensity = 0.5
scene.lights['Light.Spot'].intensity = 60
scene.lights['Light.Point'].intensity = 80
scene.shadows = True
scene.lights['Light.Point'].shadows = True
scene.lights['Light.Spot'].shadows = True
scene.lights['Light.Sun'].shadows = True
brick.hide = False
###Output
_____no_output_____ |
.ipynb_checkpoints/4_dataset_model_testing-checkpoint.ipynb | ###Markdown
Algorithms Exploration

Here we'll be testing a few different supervised learning algorithms against the dataset we developed so far.
###Code
#Importing the dataset
import pandas as pd
startups = pd.read_csv('data/startups_3.csv', index_col=0)
startups[:3]
X = startups.drop('acquired', 1)
X_numeric = X.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at)'))
X_categorical = X.filter(regex=('(Category_|state_).*'))
X_state = X.filter(regex=('(state_).*'))
X_category = X.filter(regex=('(Category_).*'))
y = startups['acquired']
#startups_not_operating = pd.read_csv('data/startups_not_operating_3.csv', index_col=0)
#X = startups_not_operating.drop('acquired', 1)
#y = startups_not_operating['acquired']
from sklearn.decomposition import PCA
pca = PCA(n_components=6)
pca.fit(X_numeric)
X_pca = pca.transform(X_numeric)
X_pca = pd.DataFrame(X_pca)
X_pca[:3]
from imblearn.under_sampling import RandomUnderSampler
rus = RandomUnderSampler(random_state=42, return_indices=True)
X_undersampled, y_undersampled, indices = rus.fit_sample(X, y)
X_software = X[X['Category_Software'] == 1]
y_software = y.loc[X_software.index]
y_software.shape
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn import grid_search
def run_classifier(parameters, classifier, stratify=True, new_X=None, new_y=None):
X_train, X_test, y_train, y_test = train_test_split(new_X if new_X is not None else X, new_y if new_y is not None else y, test_size=0.2, random_state=42, stratify = y if stratify else None)
clf = grid_search.GridSearchCV(classifier, parameters, n_jobs=4, scoring='roc_auc')
clf.fit(X=X_train, y=y_train)
model = clf.best_estimator_
print (clf.best_score_, clf.best_params_)
print roc_auc_score(y_test, model.predict(X_test))
print pd.crosstab(y_test, model.predict(X_test), rownames=['True'], colnames=['Predicted'], margins=True)
return model
from sklearn.tree import DecisionTreeClassifier
dt_clf = run_classifier({'max_depth':range(5,20)}, DecisionTreeClassifier())
from sklearn.ensemble import RandomForestClassifier
rf_clf = run_classifier({'max_depth':range(5,10), 'n_estimators':[50], 'class_weight':['balanced']}, RandomForestClassifier(random_state=0))
#Subsampled
from sklearn.ensemble import RandomForestClassifier
rf_clf = run_classifier({'max_depth':range(6,10), 'n_estimators':[50], 'class_weight':['balanced_subsample']}, RandomForestClassifier(random_state=0))
from sklearn.svm import SVC
#parameters = {'kernel':['linear', 'rbf', 'poly'], 'C':[1, 10, 100, 1000], 'class_weight':['balanced']}
parameters = {'kernel':['rbf'], 'C':[100], 'class_weight':['balanced']}
svc_clf = run_classifier(parameters, SVC(random_state=0))
from sklearn.svm import SVC
X_train, X_test, y_train, y_test = train_test_split(X_pca, y, test_size=0.2, random_state=42, stratify=y)
#clf = grid_search.GridSearchCV(classifier, parameters, n_jobs=4, scoring='roc_auc')
clf = SVC(C=1, kernel='rbf', class_weight={0:1, 1:8})
clf.fit(X=X_train, y=y_train)
#model = clf.best_estimator_
#print (clf.best_score_, clf.best_params_)
print roc_auc_score(y_train, clf.predict(X_train))
print roc_auc_score(y_test, clf.predict(X_test))
print pd.crosstab(y_test, clf.predict(X_test), rownames=['True'], colnames=['Predicted'], margins=True)
import warnings
warnings.filterwarnings('ignore')
from sklearn.neighbors import KNeighborsClassifier
parameters = {'n_neighbors':[3, 5]}
knn_clf = run_classifier(parameters, KNeighborsClassifier(), stratify=False, new_X=X_undersampled, new_y=y_undersampled)
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.cross_validation import cross_val_score
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
clf = BernoulliNB()
print cross_val_score(clf, X_train, y_train, scoring='roc_auc', cv=5)
clf.fit(X_train, y_train)
print roc_auc_score(y_train, clf.predict(X_train))
print roc_auc_score(y_test, clf.predict(X_test))
print pd.crosstab(y_test, clf.predict(X_test), rownames=['True'], colnames=['Predicted'], margins=True)
from sklearn.linear_model import SGDClassifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
clf = SGDClassifier(random_state=0, class_weight='balanced', loss='log')
print cross_val_score(clf, X_train, y_train, scoring='roc_auc', cv=10)
clf.fit(X_train, y_train)
print roc_auc_score(y_train, clf.predict(X_train))
print roc_auc_score(y_test, clf.predict(X_test))
print pd.crosstab(y_test, clf.predict(X_test), rownames=['True'], colnames=['Predicted'], margins=True)
from sklearn.linear_model import SGDClassifier
parameters={'l1_ratio':[0.10, 0.15, 0.20], 'alpha':[0.001, 0.0001, 0.00001, 0.000001], 'class_weight':['balanced'], 'random_state':[0], 'loss':['hinge', 'log'], 'penalty':['l2', 'l1', 'elasticnet']}
sgd_clf = run_classifier(parameters, SGDClassifier(), stratify=True, new_X=X_numeric)
#SGD for Category Software
from sklearn.linear_model import SGDClassifier
parameters={'l1_ratio':[0.10, 0.15, 0.20], 'alpha':[0.001, 0.0001, 0.00001, 0.000001], 'class_weight':['balanced'], 'random_state':[0], 'loss':['hinge', 'log'], 'penalty':['l2', 'l1', 'elasticnet']}
sgd_clf = run_classifier(parameters, SGDClassifier(), stratify=False, new_X=X_software, new_y=y_software)
from sklearn.linear_model import LogisticRegression
parameters={}
lr_clf = run_classifier(parameters, LogisticRegression(), stratify=True)
# Next: ignore "ipo" and "closed"; work only with "operating" and "acquired"
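# A sketch of that next step (hypothetical: it assumes the raw startups table keeps a
# Crunchbase-style 'status' column; adjust the column name/values to the real dataset):
# startups_subset = startups[startups['status'].isin(['operating', 'acquired'])]
# X_subset = startups_subset.drop('acquired', 1)
# y_subset = startups_subset['acquired']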
###Output
_____no_output_____ |
macro/Aiyagari.ipynb | ###Markdown
[Open in Binder](https://mybinder.org/v2/gh/NumEconCopenhagen/NumEconNotebooks/master?urlpath=lab/tree/macro/Aiyagari.ipynb)

Setup
###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
import numecon.macro.Aiyagari as Aiyagari
###Output
_____no_output_____
###Markdown
Aiyagari Setup

Make a dictionary called **par** with your parameter choices.
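The cell below leaves `par` empty, which simply uses the model's built-in defaults. If you do want to override anything, the dictionary would look roughly like the sketch here; the key names are hypothetical placeholders, so check `Aiyagari.AiyagariModel` for the parameter names it actually accepts.

```python
# Hypothetical parameter names -- verify against AiyagariModel before using
# par = {'beta': 0.96,   # discount factor
#        'sigma': 2.0}   # risk aversion
```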
###Code
par = {}
###Output
_____no_output_____
###Markdown
Create an **Aiyagari model** with the chosen parameters.
###Code
model = Aiyagari.AiyagariModel(name='baseline',**par)
###Output
_____no_output_____
###Markdown
Find stationary equilibrium
###Code
model.find_stationary_equilibrium()
# a. figure of R
fig = plt.figure(figsize=(6,6),dpi=100)
ax = fig.add_subplot(1,1,1)
ts = np.arange(model.ss_simT)
ax.plot(ts,model.R_ss*np.ones(model.ss_simT),'-',color='black')
ax.plot(ts,model.R_func(model.ss_sim_k),'--',color='firebrick')
ax.set_ylim([1.00,1.06])
ax.set_title('Convergence to the stationary equilibrium ($R_t$)')
# b. figure of k
fig = plt.figure(figsize=(6,6),dpi=100)
ax = fig.add_subplot(1,1,1)
ts = np.arange(model.ss_simT)
ax.plot(ts,model.k_ss*np.ones(model.ss_simT),'-',color='black')
ax.plot(ts,model.ss_sim_k,'--',color='firebrick')
ax.set_ylim([0,10])
ax.set_title('Convergence to the stationary equilibrium ($k_t$)');
###Output
_____no_output_____
###Markdown
Consumption functions
###Code
fig = plt.figure(figsize=(6,4),dpi=100)
ax = fig.add_subplot(1,1,1)
for z in range(model.Nz):
ax.plot(model.grid_m,model.c_inf[z,:],label=f'$z_t = {model.grid_z[z]:.2}$')
ax.set_xlabel('$m_t$')
ax.set_ylabel('$c_t$')
ax.legend(loc='lower right')
fig.savefig('figs/Aiyagari_consumption_functions.pdf')
fig.savefig('figs/Aiyagari_consumption_functions.png')
###Output
_____no_output_____
###Markdown
Stationary distribution
###Code
fig = plt.figure(figsize=(6,4),dpi=100)
ax = fig.add_subplot(1,1,1)
for z in range(model.Nz):
I = model.ss_sim_z == z
ax.hist(model.ss_sim_a[I],label=f'$z_t = {model.grid_z[z]:.2}$',alpha=0.80,bins=100,density=True)
ax.set_xlabel('$a_t$')
ax.set_ylabel('frequency')
ax.legend(loc='lower right')
fig.savefig('figs/Aiyagari_stationary_distribution.pdf')
fig.savefig('figs/Aiyagari_stationary_distribution.png')
###Output
_____no_output_____
###Markdown
Find transition path
###Code
# a. initial values
I = np.random.choice(model.ss_simN,size=model.transN)
model.trans_sim_a0 = 0.95*model.ss_sim_a[I]
model.trans_sim_z0 = model.ss_sim_z[I]
# b. find transition path
R_ini = model.R_ss
mu = 0.00 # convergence rate
model.find_transition_path(mu)
fig = plt.figure(figsize=(6,4),dpi=100)
ax = fig.add_subplot(1,1,1)
ts = np.arange(model.transT)
ax.plot(ts,model.R_ss*np.ones(model.transT),'-',color='black',label='steady state')
ax.plot(ts,model.R_func(model.sim_k),'-',lw=2,color='firebrick',label='implied $R(k_t)$')
ax.plot(ts,model.sim_R,'o',markersize=3,markerfacecolor='None',markeredgecolor='navy',label='imposed $R_t$')
ax.set_xlim([0,200])
ax.set_ylim([model.R_ss-0.01,model.R_ss+0.01])
ax.legend(loc='upper right')
fig.savefig('figs/Aiyagari_transition_path.pdf')
fig.savefig('figs/Aiyagari_transition_path.png')
###Output
_____no_output_____ |
Deep_Learning/PyTorch/Neural_nets/two_layer_net_module.ipynb | ###Markdown
PyTorch: Custom nn Modules
--------------------------

A fully-connected ReLU network with one hidden layer, trained to predict y from x by minimizing squared Euclidean distance.

This implementation defines the model as a custom Module subclass. Whenever you want a model more complex than a simple sequence of existing Modules you will need to define your model this way.
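For comparison (this snippet is not part of the original tutorial), the same two-layer architecture could also be written as a plain sequence of built-in Modules; the custom subclass below becomes necessary once the forward pass needs logic (branching, weight sharing, multiple inputs) that a simple chain cannot express.

```python
import torch

# The same fully-connected ReLU network, expressed with nn.Sequential instead of a subclass.
sequential_model = torch.nn.Sequential(
    torch.nn.Linear(1000, 100),   # D_in -> H
    torch.nn.ReLU(),
    torch.nn.Linear(100, 10),     # H -> D_out
)
```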
###Code
import torch
class TwoLayerNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
"""
In the constructor we instantiate two nn.Linear modules and assign them as
member variables.
"""
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
"""
In the forward function we accept a Tensor of input data and we must return
a Tensor of output data. We can use Modules defined in the constructor as
well as arbitrary operators on Tensors.
"""
h_relu = self.linear1(x).clamp(min=0)
y_pred = self.linear2(h_relu)
return y_pred
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)
# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x)
# Compute and print loss
loss = criterion(y_pred, y)
print(t, loss.item())
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____ |
notebooks/Phenotype_Enrichment.ipynb | ###Markdown
Perform enrichment test using phenotype ontology

Given a file of genes:

```
$ head data/rp-genes.tsv
NCBIGene:6295	SAG
NCBIGene:1258	CNGB1
NCBIGene:3614	IMPDH1
NCBIGene:26121	PRPF31
```

The example file here is derived from a Monarch query for all retinitis pigmentosa genes. We want to test each class in HPO to see if it is enriched for genes in this set.

First we need to parse the gene IDs in the file.
###Code
## Parse ids from file
file = open("data/rp-genes.tsv", "r")
gene_ids = [row.split("\t")[0] for row in file]
## show first 10 IDs:
gene_ids[:10]
## Create an ontology factory in order to fetch HPO
from ontobio.ontol_factory import OntologyFactory
ofactory = OntologyFactory()
ont = ofactory.create("hp") ## Load HP. Note the first time this runs Jupyter will show '*' - be patient
## Create an association factory to get gene-phenotype associations
from ontobio.assoc_factory import AssociationSetFactory
afactory = AssociationSetFactory()
## Load Associations from Monarch. Note the first time this runs Jupyter will show '*' - be patient
aset = afactory.create(ontology=ont, subject_category='gene', object_category='phenotype', taxon='NCBITaxon:9606')
## Run enrichment tests using all classes in ontology
enr = aset.enrichment_test(subjects=gene_ids, threshold=0.00005, labels=True)
## Show first 20 results
for r in enr[:20]:
print("{:8.3g} {} {:40s}".format(r['p'],r['c'],str(r['n'])))
###Output
1.24e-123 HP:0030466 Abnormal full-field electroretinogram
4.25e-111 HP:0000512 Abnormal electroretinogram
6.94e-104 HP:0000546 Retinal degeneration
1.28e-103 HP:0030453 Abnormal visual electrophysiology
3.03e-102 HP:0008323 Abnormal light- and dark-adapted electroretinogram
7.84e-97 HP:0000608 Macular degeneration
1.08e-95 HP:0001103 Abnormality of the macula
1.66e-93 HP:0000556 Retinal dystrophy
3.85e-90 HP:0001147 Retinal exudate
5.53e-90 HP:0000662 Nyctalopia
4.51e-89 HP:0007675 Progressive night blindness
1.37e-88 HP:0008020 Progressive cone degeneration
1.55e-85 HP:0007703 Abnormality of retinal pigmentation
2.43e-84 HP:0030506 Yellow/white lesions of the retina
9.5e-84 HP:0200065 Chorioretinal degeneration
4.22e-83 HP:0001730 Progressive hearing impairment
2.91e-81 HP:0030469 Abnormal dark-adapted electroretinogram
2.91e-81 HP:0030470 Abnormal dark-adapted bright flash electroretinogram
2.91e-81 HP:0030478 Abnormal amplitude of dark-adapted bright flash electroretinogram
2.91e-81 HP:0007984 Electronegative electroretinogram
###Markdown
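Conceptually, `enrichment_test` performs a per-term over-representation test: for each HPO class it asks whether the query genes are annotated to that class more often than expected by chance, given the full gene-phenotype association set. The exact statistic and any multiple-testing correction are handled inside ontobio and may differ from this, but the basic idea can be sketched with a hypergeometric tail probability:

```python
from scipy.stats import hypergeom

def overrepresentation_p(k_query_annotated, query_size, k_total_annotated, population_size):
    # P(X >= k_query_annotated) when drawing query_size genes from a population
    # in which k_total_annotated genes carry the annotation
    return hypergeom.sf(k_query_annotated - 1, population_size, k_total_annotated, query_size)

# e.g. 40 of 50 query genes hit a term that annotates 200 of 20000 genes overall
overrepresentation_p(40, 50, 200, 20000)
```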
Given that the initial gene set is for retinitis pigmentosa genes, it's not surprising that the enriched phenotype terms are related to retinal degeneration.

Viewing Results

We can use different visualization options to see the enriched terms. First we will show a simple tree view.
###Code
## Get all enriched class Ids
terms = [r['c'] for r in enr]
## Create a minimal slim of HPO consisting of enriched terms,
## with non-informative intermediate nodes removed
from ontobio.slimmer import get_minimal_subgraph
g = get_minimal_subgraph(ont.get_graph(), terms)
## Render as ascii-tree
from ontobio.graph_io import GraphRenderer
w = GraphRenderer.create('tree')
w.write(g, query_ids=terms)
###Output
. HP:0000118 ! Phenotypic abnormality
% HP:0000478 ! Abnormality of the eye *
% HP:0012372 ! Abnormal eye morphology *
% HP:0012374 ! Abnormality of the globe *
% HP:0100887 ! Abnormality of globe size *
% HP:0000568 ! Microphthalmia *
% HP:0008056 ! Aplasia/Hypoplasia affecting the eye *
% HP:0000568 ! Microphthalmia *
% HP:0008057 ! Aplasia/Hypoplasia affecting the fundus *
% HP:0008061 ! Aplasia/Hypoplasia of the retina *
% HP:0007770 ! Hypoplasia of the retina *
% HP:0008047 ! Abnormality of the vasculature of the eye *
% HP:0008046 ! Abnormality of the retinal vasculature *
% HP:0007843 ! Attenuation of retinal blood vessels *
% HP:0004329 ! Abnormality of the posterior segment of the globe *
% HP:0001098 ! Abnormality of the fundus *
% HP:0000479 ! Abnormality of the retina *
% HP:0000532 ! Chorioretinal abnormality *
% HP:0200065 ! Chorioretinal degeneration *
% HP:0000533 ! Chorioretinal atrophy *
% HP:0011958 ! Retinal perforation *
% HP:0011530 ! Retinal hole *
% HP:0000546 ! Retinal degeneration *
% HP:0000547 ! Tapetoretinal degeneration *
% HP:0007893 ! Progressive retinal degeneration *
% HP:0007769 ! Peripheral retinal degeneration *
% HP:0000655 ! Vitreoretinal degeneration *
% HP:0001105 ! Retinal atrophy *
% HP:0008020 ! Progressive cone degeneration *
% HP:0008061 ! Aplasia/Hypoplasia of the retina *
% HP:0007770 ! Hypoplasia of the retina *
% HP:0007703 ! Abnormality of retinal pigmentation *
% HP:0007894 ! Hypopigmentation of the fundus *
% HP:0000580 ! Pigmentary retinopathy *
% HP:0000510 ! Rod-cone dystrophy *
% HP:0001146 ! Pigmentary retinal degeneration *
% HP:0007737 ! Bone spicule pigmentation of the retina *
% HP:0000488 ! Retinopathy *
% HP:0001103 ! Abnormality of the macula *
% HP:0007754 ! Macular dystrophy *
% HP:0000608 ! Macular degeneration *
% HP:0007868 ! Age-related macular degeneration *
% HP:0030498 ! Macular thickening *
% HP:0040049 ! Macular edema *
% HP:0011505 ! Cystoid macular edema *
% HP:0030506 ! Yellow/white lesions of the retina *
% HP:0001147 ! Retinal exudate *
% HP:0007898 ! Exudative retinopathy *
% HP:0011532 ! Subretinal exudate *
% HP:0012777 ! Retinal neoplasm *
% HP:0009919 ! Retinoblastoma *
% HP:0000556 ! Retinal dystrophy *
% HP:0000548 ! Cone/cone-rod dystrophy *
% HP:0000510 ! Rod-cone dystrophy *
% HP:0007754 ! Macular dystrophy *
% HP:0008046 ! Abnormality of the retinal vasculature *
% HP:0007843 ! Attenuation of retinal blood vessels *
% HP:0000587 ! Abnormality of the optic nerve *
% HP:0012795 ! Abnormality of the optic disc *
% HP:0000648 ! Optic atrophy *
% HP:0000543 ! Optic disc pallor *
% HP:0008057 ! Aplasia/Hypoplasia affecting the fundus *
% HP:0008061 ! Aplasia/Hypoplasia of the retina *
% HP:0007770 ! Hypoplasia of the retina *
% HP:0000610 ! Abnormality of the choroid *
% HP:0000532 ! Chorioretinal abnormality *
% HP:0200065 ! Chorioretinal degeneration *
% HP:0000533 ! Chorioretinal atrophy *
% HP:0000553 ! Abnormality of the uvea *
% HP:0000610 ! Abnormality of the choroid *
% HP:0000532 ! Chorioretinal abnormality *
% HP:0200065 ! Chorioretinal degeneration *
% HP:0000533 ! Chorioretinal atrophy *
% HP:0004328 ! Abnormality of the anterior segment of the globe *
% HP:0000517 ! Abnormality of the lens *
% HP:0000518 ! Cataract *
% HP:0000481 ! Abnormality of the cornea *
% HP:0011486 ! Abnormality of corneal thickness *
% HP:0100689 ! Decreased corneal thickness *
% HP:0000563 ! Keratoconus *
% HP:0100691 ! Abnormality of the curvature of the cornea *
% HP:0100692 ! Increased corneal curvature *
% HP:0000563 ! Keratoconus *
% HP:0100012 ! Neoplasm of the eye *
% HP:0012777 ! Retinal neoplasm *
% HP:0009919 ! Retinoblastoma *
% HP:0012373 ! Abnormal eye physiology *
% HP:0030637 ! Cone dysfunction syndrome *
% HP:0000551 ! Abnormality of color vision *
% HP:0000496 ! Abnormality of eye movement *
% HP:0012547 ! Abnormal involuntary eye movements *
% HP:0000639 ! Nystagmus *
% HP:0000597 ! Ophthalmoparesis *
% HP:0000602 ! Ophthalmoplegia *
% HP:0000504 ! Abnormality of vision *
% HP:0000505 ! Visual impairment *
% HP:0000618 ! Blindness *
% HP:0007875 ! Congenital blindness *
% HP:0000662 ! Nyctalopia *
% HP:0007675 ! Progressive night blindness *
% HP:0000613 ! Photophobia *
% HP:0001123 ! Visual field defect *
% HP:0001133 ! Constriction of peripheral visual field *
% HP:0000551 ! Abnormality of color vision *
% HP:0007663 ! Reduced visual acuity *
% HP:0000572 ! Visual loss *
% HP:0000529 ! Progressive visual loss *
% HP:0000501 ! Glaucoma *
% HP:0030453 ! Abnormal visual electrophysiology *
% HP:0000512 ! Abnormal electroretinogram *
% HP:0030466 ! Abnormal full-field electroretinogram *
% HP:0008323 ! Abnormal light- and dark-adapted electroretinogram *
% HP:0000654 ! Decreased light- and dark-adapted electroretinogram amplitude *
% HP:0030469 ! Abnormal dark-adapted electroretinogram *
% HP:0030470 ! Abnormal dark-adapted bright flash electroretinogram *
% HP:0030478 ! Abnormal amplitude of dark-adapted bright flash electroretinogram *
% HP:0007984 ! Electronegative electroretinogram *
% HP:0000550 ! Undetectable electroretinogram *
% HP:0040064 ! Abnormality of limbs *
% HP:0040068 ! Abnormality of limb bone *
% HP:0002813 ! Abnormality of limb bone morphology *
% HP:0045060 ! Aplasia/hypoplasia involving bones of the extremities *
% HP:0006496 ! Aplasia/hypoplasia involving bones of the upper limbs *
% HP:0009824 ! Upper limb undergrowth *
% HP:0011297 ! Abnormality of digit *
% HP:0010442 ! Polydactyly *
% HP:0009815 ! Aplasia/hypoplasia of the extremities *
% HP:0009826 ! Limb undergrowth *
% HP:0009824 ! Upper limb undergrowth *
% HP:0045060 ! Aplasia/hypoplasia involving bones of the extremities *
% HP:0006496 ! Aplasia/hypoplasia involving bones of the upper limbs *
% HP:0009824 ! Upper limb undergrowth *
% HP:0002817 ! Abnormality of the upper limb *
% HP:0006496 ! Aplasia/hypoplasia involving bones of the upper limbs *
% HP:0009824 ! Upper limb undergrowth *
% HP:0000598 ! Abnormality of the ear *
% HP:0000370 ! Abnormality of the middle ear *
% HP:0011452 ! Functional abnormality of the middle ear *
% HP:0000405 ! Conductive hearing impairment *
% HP:0000364 ! Hearing abnormality *
% HP:0000365 ! Hearing impairment *
% HP:0012714 ! Severe hearing impairment *
% HP:0008625 ! Severe sensorineural hearing impairment *
% HP:0000405 ! Conductive hearing impairment *
% HP:0000407 ! Sensorineural hearing impairment *
% HP:0008625 ! Severe sensorineural hearing impairment *
% HP:0000408 ! Progressive sensorineural hearing impairment *
% HP:0008527 ! Congenital sensorineural hearing impairment *
% HP:0001730 ! Progressive hearing impairment *
% HP:0000408 ! Progressive sensorineural hearing impairment *
% HP:0000359 ! Abnormality of the inner ear *
% HP:0011389 ! Functional abnormality of the inner ear *
% HP:0000407 ! Sensorineural hearing impairment *
% HP:0008625 ! Severe sensorineural hearing impairment *
% HP:0000408 ! Progressive sensorineural hearing impairment *
% HP:0008527 ! Congenital sensorineural hearing impairment *
% HP:0002664 ! Neoplasm *
% HP:0011793 ! Neoplasm by anatomical site *
% HP:0100012 ! Neoplasm of the eye *
% HP:0012777 ! Retinal neoplasm *
% HP:0009919 ! Retinoblastoma *
% HP:0011792 ! Neoplasm by histology *
% HP:0030060 ! Nervous tissue neoplasm *
% HP:0030061 ! Neuroectodermal neoplasm *
% HP:0030063 ! Neuroepithelial neoplasm *
% HP:0009919 ! Retinoblastoma *
% HP:0002898 ! Embryonal neoplasm *
% HP:0009919 ! Retinoblastoma *
% HP:0000119 ! Abnormality of the genitourinary system *
% HP:0010935 ! Abnormality of the upper urinary tract *
% HP:0000077 ! Abnormality of the kidney *
% HP:0012210 ! Abnormal renal morphology *
% HP:0100957 ! Abnormality of the renal medulla *
% HP:0000090 ! Nephronophthisis *
% HP:0000078 ! Abnormality of the genital system *
% HP:0000080 ! Abnormality of reproductive system physiology *
% HP:0000135 ! Hypogonadism *
% HP:0012243 ! Abnormal genital system morphology *
% HP:0000812 ! Abnormal internal genitalia *
% HP:0000008 ! Abnormality of female internal genitalia *
% HP:0000140 ! Abnormality of the menstrual cycle *
% HP:0100608 ! Metrorrhagia *
% HP:0010461 ! Abnormality of the male genitalia *
% HP:0000032 ! Abnormality of male external genitalia *
% HP:0000036 ! Abnormality of the penis *
% HP:0008736 ! Hypoplasia of penis *
% HP:0000035 ! Abnormality of the testis *
% HP:0010460 ! Abnormality of the female genitalia *
% HP:0000008 ! Abnormality of female internal genitalia *
% HP:0000140 ! Abnormality of the menstrual cycle *
% HP:0100608 ! Metrorrhagia *
% HP:0000811 ! Abnormal external genitalia *
% HP:0000032 ! Abnormality of male external genitalia *
% HP:0000036 ! Abnormality of the penis *
% HP:0008736 ! Hypoplasia of penis *
% HP:0000035 ! Abnormality of the testis *
% HP:0003241 ! External genital hypoplasia *
% HP:0000050 ! Hypoplastic male external genitalia *
% HP:0008736 ! Hypoplasia of penis *
% HP:0001507 ! Growth abnormality *
% HP:0000002 ! Abnormality of body height *
% HP:0004322 ! Short stature *
% HP:0001510 ! Growth delay *
% HP:0004322 ! Short stature *
% HP:0004323 ! Abnormality of body weight *
% HP:0004324 ! Increased body weight *
% HP:0001513 ! Obesity *
% HP:0000818 ! Abnormality of the endocrine system *
% HP:0000819 ! Diabetes mellitus *
% HP:0005978 ! Type II diabetes mellitus *
% HP:0008373 ! Puberty and gonadal disorders *
% HP:0000135 ! Hypogonadism *
% HP:0000858 ! Menstrual irregularities *
% HP:0000140 ! Abnormality of the menstrual cycle *
% HP:0100608 ! Metrorrhagia *
% HP:0003117 ! Abnormality of circulating hormone level *
% HP:0040214 ! Abnormal insulin level *
% HP:0040215 ! Abnormal circulating insulin level *
% HP:0000842 ! Hyperinsulinemia *
% HP:0002597 ! Abnormality of the vasculature *
% HP:0008047 ! Abnormality of the vasculature of the eye *
% HP:0008046 ! Abnormality of the retinal vasculature *
% HP:0007843 ! Attenuation of retinal blood vessels *
% HP:0001574 ! Abnormality of the integument *
% HP:0000951 ! Abnormality of the skin *
% HP:0011121 ! Abnormality of skin morphology *
% HP:0001000 ! Abnormality of skin pigmentation *
% HP:0001070 ! Mottled pigmentation *
% HP:0011354 ! Generalized abnormality of skin *
% HP:0000987 ! Atypical scarring of skin *
% HP:0001939 ! Abnormality of metabolism/homeostasis *
% HP:0011032 ! Abnormality of fluid regulation *
% HP:0000969 ! Edema *
% HP:0040049 ! Macular edema *
% HP:0011505 ! Cystoid macular edema *
% HP:0003119 ! Abnormality of lipid metabolism *
% HP:0003107 ! Abnormality of cholesterol metabolism *
% HP:0010979 ! Abnormality of the level of lipoprotein cholesterol *
% HP:0010981 ! Hypolipoproteinemia *
% HP:0003563 ! Hypobetalipoproteinemia *
% HP:0008181 ! Abetalipoproteinemia *
% HP:0011013 ! Abnormality of carbohydrate metabolism/homeostasis *
% HP:0011014 ! Abnormal glucose homeostasis *
% HP:0000819 ! Diabetes mellitus *
% HP:0005978 ! Type II diabetes mellitus *
% HP:0000842 ! Hyperinsulinemia *
% HP:0000707 ! Abnormality of the nervous system *
% HP:0012639 ! Abnormality of nervous system morphology *
% HP:0000759 ! Abnormal peripheral nervous system morphology *
% HP:0009830 ! Peripheral neuropathy *
% HP:0012757 ! Abnormal neuron morphology *
% HP:0002450 ! Abnormal motor neuron morphology *
% HP:0007373 ! Motor neuron atrophy *
% HP:0007354 ! Amyotrophic lateral sclerosis *
% HP:0002011 ! Morphological abnormality of the central nervous system *
% HP:0011282 ! Abnormality of hindbrain morphology *
% HP:0011283 ! Abnormality of the metencephalon *
% HP:0001317 ! Abnormality of the cerebellum *
% HP:0001251 ! Ataxia *
% HP:0007367 ! Atrophy/Degeneration affecting the central nervous system *
% HP:0007373 ! Motor neuron atrophy *
% HP:0007354 ! Amyotrophic lateral sclerosis *
% HP:0002180 ! Neurodegeneration *
% HP:0100705 ! Abnormality of the glial cells *
% HP:0002171 ! Gliosis *
% HP:0012638 ! Abnormality of nervous system physiology *
% HP:0001250 ! Seizures *
% HP:0012759 ! Neurodevelopmental abnormality *
% HP:0001249 ! Intellectual disability *
% HP:0012758 ! Neurodevelopmental delay *
% HP:0001263 ! Global developmental delay *
% HP:0011442 ! Abnormality of central motor function *
% HP:0011443 ! Abnormality of coordination *
% HP:0001251 ! Ataxia *
% HP:0002493 ! Upper motor neuron dysfunction *
% HP:0001347 ! Hyperreflexia *
% HP:0000708 ! Behavioral abnormality *
% HP:0000613 ! Photophobia *
% HP:0000924 ! Abnormality of the skeletal system *
% HP:0011842 ! Abnormality of skeletal morphology *
% HP:0011844 ! Abnormal appendicular skeleton morphology *
% HP:0002813 ! Abnormality of limb bone morphology *
% HP:0045060 ! Aplasia/hypoplasia involving bones of the extremities *
% HP:0006496 ! Aplasia/hypoplasia involving bones of the upper limbs *
% HP:0009824 ! Upper limb undergrowth *
% HP:0011297 ! Abnormality of digit *
% HP:0010442 ! Polydactyly *
% HP:0009115 ! Aplasia/hypoplasia involving the skeleton *
% HP:0009815 ! Aplasia/hypoplasia of the extremities *
% HP:0009826 ! Limb undergrowth *
% HP:0009824 ! Upper limb undergrowth *
% HP:0045060 ! Aplasia/hypoplasia involving bones of the extremities *
% HP:0006496 ! Aplasia/hypoplasia involving bones of the upper limbs *
% HP:0009824 ! Upper limb undergrowth *
% HP:0040068 ! Abnormality of limb bone *
% HP:0002813 ! Abnormality of limb bone morphology *
% HP:0045060 ! Aplasia/hypoplasia involving bones of the extremities *
% HP:0006496 ! Aplasia/hypoplasia involving bones of the upper limbs *
% HP:0009824 ! Upper limb undergrowth *
% HP:0011297 ! Abnormality of digit *
% HP:0010442 ! Polydactyly *
% HP:0003549 ! Abnormality of connective tissue *
% HP:0100699 ! Scarring *
% HP:0000987 ! Atypical scarring of skin *
% HP:0001324 ! Muscle weakness *
% HP:0001871 ! Abnormality of blood and blood-forming tissues *
% HP:0005561 ! Abnormality of bone marrow cell morphology *
% HP:0012130 ! Abnormality of cells of the erythroid lineage *
% HP:0001877 ! Abnormality of erythrocytes *
% HP:0004447 ! Poikilocytosis *
% HP:0001927 ! Acanthocytosis *
% HP:0000152 ! Abnormality of head or neck *
% HP:0000234 ! Abnormality of the head *
% HP:0000271 ! Abnormality of the face *
% HP:0000366 ! Abnormality of the nose *
% HP:0000422 ! Abnormality of the nasal bridge *
% HP:0000431 ! Wide nasal bridge *
% HP:0005288 ! Abnormality of the nares *
% HP:0000463 ! Anteverted nares *
% HP:0010938 ! Abnormality of the external nose *
% HP:0000429 ! Abnormality of the nasal alae *
% HP:0000463 ! Anteverted nares *
% HP:0005105 ! Abnormal nasal morphology *
% HP:0000463 ! Anteverted nares *
% HP:0000315 ! Abnormality of the orbital region *
% HP:0100887 ! Abnormality of globe size *
% HP:0000568 ! Microphthalmia *
###Markdown
Visualization

Now we will show enriched terms in a graph using graphviz.
###Code
terms = [r['c'] for r in enr[:30]]
g = get_minimal_subgraph(ont.get_graph(), terms)
w = GraphRenderer.create('png')
w.outfile = "output/enr.png"
w.write(g, query_ids=terms)
###Output
_____no_output_____ |
tusimple-benchmark/example/velocity_demo.ipynb | ###Markdown
This is the demo code for running the velocity estimation evaluation.
###Code
import os
from evaluate.velocity import VeloEval
import copy
import numpy as np
###Output
_____no_output_____
###Markdown
We first create a file list containing all the ground truth annotation files. Then we load them.
###Code
dataset_path = '/mnt/truenas/scratch/chenyangli/benchmark/v2/clips/'
folder_path = os.listdir(dataset_path)
annotations = [os.path.join(dataset_path, x, 'annotation.json') for x in folder_path]
gt = VeloEval.load_annotation(annotations)
###Output
Finished loading 246 annotations.
###Markdown
In the following section, you should load your predictions in the same way we loaded the ground truth above. Since this is just a demonstration, we create a fake prediction by adding some random noise to the ground truth.
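If your own predictions live on disk, one option is to store them one file per clip with the same structure as the ground-truth annotations (a list per clip of objects carrying `"velocity"` `[vx, vy]` and `"position"` `[x, y]`) and read them back with plain JSON. The `prediction.json` filename below is only an assumption about your layout.

```python
import json
import os

# Hypothetical layout: one prediction.json per clip, next to its annotation.json
pred_files = [os.path.join(dataset_path, x, 'prediction.json') for x in folder_path]
pred = [json.load(open(fn)) for fn in pred_files]
```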
###Code
pred = copy.deepcopy(gt)
for idx in range(len(pred)):
for j in range(len(pred[idx])):
pred[idx][j]["velocity"][0] += np.random.normal(0, 0.5)
pred[idx][j]["velocity"][1] += np.random.normal(0, 0.5)
pred[idx][j]["position"][0] += np.random.normal(0, 0.5)
pred[idx][j]["position"][1] += np.random.normal(0, 0.5)
VeloEval.accuracy(pred, gt)
###Output
Velocity Estimation error (Near): 0.56655
Velocity Estimation error (Medium): 0.47895
Velocity Estimation error (Far): 0.47028
Velocity Estimation error total: 0.505260
Position Estimation error (Near): 0.48858
Position Estimation error (Medium): 0.53651
Position Estimation error (Far): 0.57087
Position Estimation error total: 0.53199
|
4. Create_Recommender_Engine - Manila Grey.ipynb | ###Markdown
Content-Based Recommendation Engine
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
import keyring
import time
from sklearn.preprocessing import MinMaxScaler
%matplotlib inline
client_credentials_manager = SpotifyClientCredentials(client_id=keyring.get_password('spotify', 'cid'),
client_secret=keyring.get_password('spotify', 'secret') )
sp = spotipy.Spotify(client_credentials_manager = client_credentials_manager)
###Output
_____no_output_____
###Markdown
1. Read the recommendation pool
###Code
#read data
chart_tracks_df = pd.read_csv("data/df_charts_tracks_artists.csv")
chart_tracks_df = chart_tracks_df.drop_duplicates(subset='track_id')
chart_tracks_df.head(3)
# #read data
# chart_tracks_df = pd.read_csv("data/spotify_daily_charts_tracks_predicted_genres.csv")
# tracks_data_df = pd.read_csv("data/df_charts_tracks_artists.csv")
# #drop duplicates
# tracks_data_df = tracks_data_df.drop_duplicates(subset='track_id')
# tracks_data_df = tracks_data_df[['track_id', 'loudness', 'tempo']]
# #normalize loudness and tempo
# # scaler = MinMaxScaler()
# # chart_tracks_df['loudness'] = scaler.fit_transform(chart_tracks_df[['loudness']])
# # chart_tracks_df['tempo'] = scaler.fit_transform(chart_tracks_df[['tempo']])
# chart_tracks_df = pd.merge(chart_tracks_df.drop(['loudness', 'tempo'], 1), tracks_data_df, on='track_id')
# chart_tracks_df.head(3)
chart_tracks_df.shape
###Output
_____no_output_____
###Markdown
2. Input Seed Track
###Code
def get_track_data(t_id):
track_data = sp.track(t_id)
track_features = sp.audio_features(t_id)
#get only main(first) artist
td_list = [t_id,\
track_data['name'],\
track_data['artists'][0]['id'],\
track_data['artists'][0]['name'],\
track_data['album']['uri'].split(":")[2],\
track_data['duration_ms'],\
track_data['album']['release_date'],\
track_data['popularity']]
data = pd.DataFrame([td_list], columns = ['track_id','track_name','artist_id','artist_name','album_id','duration','release_date','popularity'])
relevant_cols = ['danceability', 'energy', 'key', 'loudness', 'mode',\
'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo']
tf_data = pd.DataFrame(track_features)
tf_data = tf_data[relevant_cols]
data = pd.concat([data, tf_data], axis=1)
return data
feature_cols = ['danceability', 'energy', 'loudness', 'speechiness', 'acousticness', 'instrumentalness',\
'liveness', 'valence', 'tempo']
seed_track_id = input("Enter track id: ")
seed_track_data = get_track_data(seed_track_id)
# seed_track_data['loudness'] = scaler.fit_transform(seed_track_data[['loudness']])
# seed_track_data['tempo'] = scaler.fit_transform(seed_track_data[['tempo']])
seed_track_data.head()
# feature_cols = ['danceability', 'energy', 'loudness', 'speechiness', 'acousticness', 'instrumentalness',\
# 'liveness', 'valence', 'tempo']
# genre_lookup = dict(tracks_df.groupby('genre_id').head(1)[['genre_id','genre']].values)
# import pickle
# with open('genre-knn.pkl', 'rb') as file:
# knn_optimal = pickle.load(file)
# seed_track_data['predicted_genre_id'] = seed_track_data.apply(lambda x: knn_optimal.predict(x[feature_cols].values.reshape(1,-1))[0]\
# , axis=1)
# seed_track_data['predicted_genre'] = seed_track_data['predicted_genre_id'].apply(lambda x: genre_lookup[x])
# seed_track_data['predicted_genre_prob'] = seed_track_data.apply(lambda x: np.max(knn_optimal.predict_proba(x[feature_cols].values.reshape(1,-1)))\
# , axis=1)
# seed_track_data['all_genre_prob'] = seed_track_data.apply(lambda x: knn_optimal.predict_proba(x[feature_cols].values.reshape(1,-1))[0]\
# , axis=1)
###Output
_____no_output_____
###Markdown
3. Explore Similarity Measures
###Code
from sklearn.metrics.pairwise import euclidean_distances, manhattan_distances, cosine_similarity
###Output
_____no_output_____
###Markdown
Euclidean
###Code
# reshape to (1, n_features) so the distance is computed between the full feature vectors
chart_tracks_df['euclidean_dist'] = chart_tracks_df.apply(lambda x: euclidean_distances(x[feature_cols].values.reshape(1, -1),\
                                                          seed_track_data[feature_cols].values.reshape(1, -1))\
                                                          .flatten()[0], axis=1)
#get top 10 nearest to seed_track_data
recommendation_df = chart_tracks_df[chart_tracks_df['track_id']!=seed_track_id].sort_values('euclidean_dist')[:30]
recommendation_df[['track_id','track_name','artist_name','euclidean_dist']+feature_cols].head(10)
###Output
_____no_output_____
###Markdown
Manhattan
###Code
# reshape to (1, n_features) so the distance is computed between the full feature vectors
chart_tracks_df['manhattan_dist'] = chart_tracks_df.apply(lambda x: manhattan_distances(x[feature_cols].values.reshape(1, -1),\
                                                          seed_track_data[feature_cols].values.reshape(1, -1))\
                                                          .flatten()[0], axis=1)
#get top 10 nearest to seed_track_data
recommendation_df = chart_tracks_df[chart_tracks_df['track_id']!=seed_track_id].sort_values('manhattan_dist')[:30]
recommendation_df[['track_id','track_name','artist_name','manhattan_dist']+feature_cols].head(10)
###Output
_____no_output_____
###Markdown
Cosine
###Code
chart_tracks_df['cosine_dist'] = chart_tracks_df.apply(lambda x: 1-cosine_similarity(x[feature_cols].values.reshape(1, -1),\
seed_track_data[feature_cols].values.reshape(1, -1))\
.flatten()[0], axis=1)
#get top 10 nearest to seed_track_data
# recommendation_df = chart_tracks_df[chart_tracks_df['track_id']!=seed_track_data['track_id']].sort_values('cosine_dist')[:30]
recommendation_df = chart_tracks_df[chart_tracks_df['track_id']!=seed_track_id].sort_values('cosine_dist')[:10]
recommendation_df[['track_id','track_name','artist_name','cosine_dist']+feature_cols]
###Output
_____no_output_____
###Markdown
View histograms of the 3 similarity measures
###Code
chart_tracks_df[['euclidean_dist','manhattan_dist','cosine_dist']].hist()
# recommendation_df['artist_name'].to_csv('artists_like_manila_grey.csv')
###Output
_____no_output_____ |
01x_lazy.ipynb | ###Markdown
Lazy execution

Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help you understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong.

Prelude

As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!

Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:

- process data that doesn't fit into memory by breaking it into blocks and specifying task chains
- parallelize execution of tasks across cores and even nodes of a cluster
- move computation to the data rather than the other way around, to minimize communication overhead

All of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.

The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, and they can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.

We include a few examples at the end of the notebook showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested reader.

Dask is a graph execution engine

Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,

```python
delayed_inc = delayed(inc)
```
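A quick illustration of that function-wrapper form (the `double` helper here is ours, not part of the original notebook):

```python
def double(x):
    return 2 * x

lazy_double = delayed(double)   # the plain double() is left untouched
lazy_double(5).compute()        # 10 -- nothing runs until .compute()
```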
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x`, `y`, `total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.

We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
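You can also inspect a delayed object directly. The attribute names below reflect current dask versions, so treat them as an assumption rather than a stable API:

```python
# Peek inside the lazy object
total.key           # the name of the task whose result .compute() will return
dict(total.dask)    # the underlying task graph: task name -> task specification
```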
###Markdown
But so far, no functions have actually been executed. This demonstrates the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph-execution part of Dask.

To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**

By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.

With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of a [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) workflow.

![this](images/grid_search_schedule.gif)

Exercise

We will apply `delayed` to a real data processing task, albeit a simple one. Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
Wall time: 889 ms
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...

total = ...

# execute
%time total.compute()
```
###Code
%%time
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
total = delayed(sum)([na, nb, nc])
print(total.compute())
###Output
3000000
Wall time: 451 ms
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
%%time
files = [delayed(pd.read_csv)(f) for f in filenames]
lengths = [delayed(len)(f) for f in files]
total = delayed(sum)(lengths)
print(total.compute())
%%time
total = sum([delayed(len)(delayed(pd.read_csv)(f)) for f in filenames])
print(total.compute())
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
%%timeit
result = 0
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
339 µs ± 19.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
%%timeit
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
398 µs ± 7.17 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
Version using delayed
###Code
from dask.distributed import Client
client = Client(n_workers=4)
%%time
result = 0
with open('README.md', 'r') as f:
for line in f:
result += delayed(len)(splitter.findall(line))
result.compute()
# Deep graph, not efficient
result.visualize()
client.close()
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
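Before the threading example in the next cell, here is a hedged sketch of the "mapping over multiple inputs and gathering the results" idea using only the standard library; the URLs are illustrative placeholders, not part of the original tutorial.

```python
# a minimal sketch: map a blocking download over several inputs with a thread pool
# (the URLs are illustrative placeholders)
from concurrent.futures import ThreadPoolExecutor
import urllib.request

urls = ['http://www.google.com', 'http://www.example.com']

def fetch(url):
    with urllib.request.urlopen(url) as u:
        return len(u.read())              # just the payload size, to keep it small

with ThreadPoolExecutor(max_workers=2) as pool:
    sizes = list(pool.map(fetch, urls))   # blocks until every download has finished
sizes
```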
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
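As a small runnable sketch of that second form (the `slow_double` function below is invented for illustration):

```python
# wrapping an existing function with delayed(), leaving the original unchanged
import time
from dask import delayed

def slow_double(x):
    time.sleep(0.1)                   # stand-in for real work
    return 2 * x

lazy_double = delayed(slow_double)    # slow_double itself is not modified
d = lazy_double(21)                   # builds a task; nothing has run yet
d.compute()                           # only now does slow_double run -> 42
```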
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
import pandas as pd
import os
filenames = [os.path.join('data', f'accounts.{i}.csv') for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 752 ms, sys: 65.1 ms, total: 817 ms
Wall time: 879 ms
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
la = delayed(len)(a)
lb = delayed(len)(b)
lc = delayed(len)(c)
total = delayed(sum)([la, lb, lc])
# execute
%time total.compute()
###Output
CPU times: user 828 ms, sys: 84.4 ms, total: 912 ms
Wall time: 406 ms
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
files = [delayed(pd.read_csv)(fn) for fn in filenames]
lengths = [delayed(len)(df) for df in files]
total = delayed(sum)(lengths)
%time total.compute()
total.visualize()
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
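Before the word-count example below, here is a hedged sketch of which operations stay lazy on a delayed object; the wrapped dictionary is invented purely for illustration.

```python
# operations on a delayed object build more graph instead of running anything
from dask import delayed

d = delayed({"a": 1, "b": 2})         # wrap a plain dictionary (illustrative data)
x = d["a"] + 10                       # item selection and arithmetic stay lazy
keys = d.keys()                       # method calls are recorded, not executed
x.compute()                           # -> 11
sorted(keys.compute())                # -> ['a', 'b']
# iteration (`for k in d:`) and truth-testing (`if d:`) are among the unsupported
# operations and raise TypeError, because they cannot be deferred
```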
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? (One common pattern is sketched below.) Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
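Before Example 3, here is a sketch of that common pattern: put the exception itself on the queue so the main thread can see and re-raise it. The `get_webdata_safe` name is made up for illustration.

```python
# a sketch: surface worker-thread exceptions by passing them through the queue
import queue
import threading
import urllib.request

def get_webdata_safe(url, q):
    try:
        with urllib.request.urlopen(url) as u:
            q.put(u.read())
    except Exception as e:            # otherwise the error dies silently with the thread
        q.put(e)

q = queue.Queue()
t = threading.Thread(target=get_webdata_safe, args=('http://www.google.com', q))
t.start()
item = q.get()
if isinstance(item, Exception):
    raise item                        # re-raise in the main thread, where we can see it
```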
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
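The next cells inspect the graph behind `total`. As a hedged sketch of the raw dictionary format itself (the task names, helper functions and the threaded scheduler's `get` here are illustrative, not taken from this tutorial):

```python
# a hand-built graph: keys are task names, values are (function, *args) tuples
from dask.threaded import get

def my_inc(i):
    return i + 1

def my_add(a, b):
    return a + b

dsk = {'x': 1,
       'y': (my_inc, 'x'),            # y = my_inc(x)
       'z': (my_add, 'y', 10)}        # z = my_add(y, 10)

get(dsk, 'z')                         # walks the graph and returns 12
```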
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
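Before visualizing in the next cell, a quick interactive peek (a sketch; the exact key string is generated per run):

```python
# each delayed object carries a unique key and the graph needed to compute it
print(total)                          # something like Delayed('add-...'), key varies per run
print(total.key)                      # the unique task identifier for this node
print(len(dict(total.dask)))          # number of tasks: here two inc calls plus one add
```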
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
%load solutions/Foundations-03.py
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in the previous notebook.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function creates a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrates the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow. Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
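A hedged aside before the exercise: `compute()` also accepts a `scheduler=` keyword, so the same graph can be handed to different single-machine execution engines. The scheduler names below are the standard ones in current Dask; check your installed version.

```python
# the same graph handed to three different single-machine schedulers (a sketch)
total.compute(scheduler="synchronous")   # single thread, easiest to debug
total.compute(scheduler="threads")       # thread pool
total.compute(scheduler="processes")     # process pool, sidesteps the GIL
```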
###Code
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
%load solutions/Foundations-03.py
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
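Before the threading cell that follows, the subprocess snippet above made concrete as a runnable sketch; the command passed to the child interpreter is an illustrative choice.

```python
# run a child process in the background and collect its output later
import subprocess
import sys

p = subprocess.Popen(
    [sys.executable, "-c", "print('hello from a background process')"],
    stdout=subprocess.PIPE,
)
print(p.returncode)                   # None: the process may still be running
out = p.communicate()[0]              # blocks until the child finishes
print(p.returncode, out)              # 0 b'hello from a background process\n' (newline may vary by platform)
```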
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in the previous notebook.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
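A hedged aside before the exercise: assuming the `dask.distributed` package is installed (a `Client` is created elsewhere in this document), the very same graph can be routed through a local cluster instead of the default scheduler.

```python
# route the same graph through a local "distributed" cluster (a sketch)
from dask.distributed import Client

client = Client(n_workers=4)          # local scheduler plus 4 worker processes
total.compute()                       # while a Client is active, compute() uses it
client.close()
```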
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
###Output
_____no_output_____
###Markdown
You can check the solutions below:
###Code
%load solutions/01x_verbose_concise.py
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
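Before the line-by-line version in the next cell, here is a hedged sketch of a fixed-size-chunk middle ground; the chunk size is an arbitrary illustrative choice, and note the caveat in the comments.

```python
# read the file in roughly 1 MB chunks instead of line by line
# caveat: a word straddling a chunk boundary is counted twice -- fine for a
# sketch, not for an exact count
result = 0
with open('README.md', 'r') as f:
    while True:
        chunk = f.read(1_000_000)
        if not chunk:
            break
        result += len(splitter.findall(chunk))
result
```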
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
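(Returning to the debugging question above: one possible approach, sketched here rather than taken from the notebook itself, is to hand any exception back through the queue so the main thread can re-raise it with a full traceback. It reuses `sources.lazy_url` from the earlier cell.)
```python
import threading
import queue
import urllib.request

def get_webdata_safe(url, q):
    try:
        u = urllib.request.urlopen(url)
        q.put(u.read())
    except Exception as exc:   # hand the error back instead of losing it in the worker thread
        q.put(exc)

q2 = queue.Queue()
t2 = threading.Thread(target=get_webdata_safe, args=(sources.lazy_url, q2))
t2.start()
result = q2.get()
if isinstance(result, Exception):
    raise result               # the traceback now appears in the calling cell
len(result)                    # otherwise, number of bytes downloaded
```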
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
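As a taste of that flexibility (a minimal sketch, assuming the `dask.threaded.get` entry point is available in your dask version), a graph can be written as a plain dict of values and task tuples and handed straight to a scheduler:
```python
import operator
from dask.threaded import get    # one of dask's scheduler entry points

# a hand-written graph: plain values plus (function, arg, arg) task tuples
dsk = {
    'a': 1,
    'b': 2,
    'c': (operator.add, 'a', 'b'),   # c = a + b
    'd': (operator.mul, 'c', 10),    # d = c * 10
}
get(dsk, 'd')                        # -> 30
```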
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
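One more aside before the exercise: the execution engine for a graph such as `total` can be chosen with the `scheduler=` keyword of `compute`. The names below are the standard dask scheduler options; treat the snippet as a sketch, not a benchmark.
```python
# same graph, different execution engines
total.compute(scheduler='synchronous')   # single-threaded, easiest to debug
total.compute(scheduler='threads')       # local thread pool (the default for delayed)
total.compute(scheduler='processes')     # local process pool; pays serialization costs
```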
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
    yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow. Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 2.45 s, sys: 235 ms, total: 2.68 s
Wall time: 1.18 s
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
%%time
# your verbose code here
a = delayed(pd.read_csv)(filenames[0])
b = delayed(pd.read_csv)(filenames[1])
c = delayed(pd.read_csv)(filenames[2])
na = delayed(len)(a)
nb = delayed(len)(b)
nc = delayed(len)(c)
total = delayed(sum)([na, nb, nc])
res = total.compute()
print(res)
###Output
3000000
CPU times: user 1.53 s, sys: 1.24 s, total: 2.77 s
Wall time: 676 ms
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
%%time
# your concise code here
lengths = []
for fn in filenames:
    df = delayed(pd.read_csv)(fn)        # lazy read; nothing is loaded yet
    lengths.append(delayed(len)(df))     # lazy length of each lazy dataframe
total = delayed(sum)(lengths)
res = total.compute()
print(res)
# %load solutions/Foundations-03.py
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
3000000
CPU times: user 2.07 s, sys: 945 ms, total: 3.01 s
Wall time: 661 ms
3000000
CPU times: user 2.15 s, sys: 1.2 s, total: 3.35 s
Wall time: 667 ms
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
    yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine
###Code
from dask.distributed import Client
# client = Client(n_workers=4)
client = Client('tcp://mlrun-zdask-67e3665c-f.default-tenant:8786', timeout=600)
###Output
/User/.conda/envs/py376/lib/python3.7/site-packages/distributed/client.py:1129: VersionMismatchWarning: Mismatched versions found
+---------+---------------+---------------+---------------+
| Package | client | scheduler | workers |
+---------+---------------+---------------+---------------+
| blosc | None | 1.10.6 | 1.10.6 |
| lz4 | None | 3.1.3 | 3.1.3 |
| numpy | 1.21.4 | 1.19.5 | 1.19.5 |
| python | 3.7.6.final.0 | 3.7.9.final.0 | 3.7.9.final.0 |
+---------+---------------+---------------+---------------+
warnings.warn(version_module.VersionMismatchWarning(msg[0]["warning"]))
###Markdown
Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in the previous notebook.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
cwd = os.getcwd()
print(cwd)
filenames = [os.path.join(cwd, 'data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 603 ms, sys: 0 ns, total: 603 ms
Wall time: 730 ms
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
###Output
3000000
CPU times: user 4.38 ms, sys: 330 µs, total: 4.71 ms
Wall time: 548 ms
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
3000000
CPU times: user 4.18 ms, sys: 472 µs, total: 4.65 ms
Wall time: 558 ms
3000000
CPU times: user 3.76 ms, sys: 0 ns, total: 3.76 ms
Wall time: 545 ms
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
# an ordinary nested dict, for comparison with the task-graph dict above
student_score = {'Mathew': {'Math': 28, 'Science': 18, 'Economics': 15},
                 'Ritika': {'Math': 19, 'Science': 20, 'Economics': 19},
                 'John':   {'Math': 11, 'Science': 22, 'Economics': 17}}
dict(student_score)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 919 ms, sys: 173 ms, total: 1.09 s
Wall time: 1.12 s
###Markdown
Your task is to recreate this graph using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
na = delayed(len)(a)
nb = delayed(len)(b)
nc = delayed(len)(c)
total = delayed(sum)([na, nb, nc])
# execute
%time total.compute()
###Output
CPU times: user 3 µs, sys: 1e+03 ns, total: 4 µs
Wall time: 7.87 µs
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
    yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
# total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
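# --- Added sketch (not part of the original notebook) ---
# With a reasonably recent version of Dask, the same graph can be handed to
# different schedulers via the `scheduler=` keyword of compute():
total.compute(scheduler='threads')      # thread pool on this machine
total.compute(scheduler='synchronous')  # single-threaded, handy for debugging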
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow. Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
import pandas as pd
import os
filenames = [os.path.join('data', f"accounts.{i}.csv") for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
Wall time: 729 ms
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...
total = ...

# execute
%time total.compute()
```
###Code
%%time
# your verbose code here
a = delayed(pd.read_csv)(filenames[0])
b = delayed(pd.read_csv)(filenames[1])
c = delayed(pd.read_csv)(filenames[2])
na = delayed(len)(a)
nb = delayed(len)(b)
nc = delayed(len)(c)
total = delayed(sum)([na, nb, nc])
print(total.compute())
###Output
3000000
Wall time: 463 ms
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
%%time
# your concise code here
dfs = (delayed(pd.read_csv)(filenames[i]) for i in range(3))
total = delayed(sum)(delayed(len)(d) for d in dfs)
print(total.compute())
# %load solutions/Foundations-03.py
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
3000000
Wall time: 451 ms
3000000
Wall time: 465 ms
###Markdown
**Notes** Delayed objects support various operations:```python x2 = x + 1``` If `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression. Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**; typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
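Before moving on to the word-count code below, here is a tiny sketch (added, not part of the original notebook) of the delayed arithmetic described in the notes above:

```python
from dask import delayed

x = delayed(lambda: 10)()   # a delayed result, standing in for something like `total`
x2 = x + 1                  # arithmetic on a Delayed object just extends the graph
x3 = x2 * 2                 # so do method calls, indexing and attribute access
x3.compute()                # -> 22; only now is anything actually executed
```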
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale: if the file is very large, the file contents and the generated list of words might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime.For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
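The queue-based threading version used by this notebook follows in the next code cell. For comparison, here is a sketch (added, not part of the original notebook) of the "mapping over multiple inputs and gathering the results" style mentioned above, using the standard-library `concurrent.futures`; the URLs are just placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch(url):
    # download one page in a worker thread and return its size in bytes
    with urllib.request.urlopen(url) as u:
        return len(u.read())

urls = ['http://www.example.com', 'http://www.python.org']  # placeholder inputs

with ThreadPoolExecutor(max_workers=2) as pool:
    sizes = list(pool.map(fetch, urls))  # blocks until all downloads have finished

sizes
```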
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
def power(x, p):
return x ** p
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total` above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, and it can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers and the values are the functions and inputs for each calculation. `delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help you understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible! Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows: - process data that doesn't fit into memory by breaking it into blocks and specifying task chains - parallelize execution of tasks across cores and even nodes of a cluster - move computation to the data rather than the other way around, to minimize communication overhead All of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively. The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want; they can skip ahead to the iterator, array and dataframe sections, but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here. We include a few examples at the end of the notebook showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in a previous notebook.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function creates a delayed object (`x`, `y`, `total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation. We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrates the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph-execution part of Dask. To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?** By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up the graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch. With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of a [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one. Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...
total = ...

# execute
%time total.compute()
```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes** Delayed objects support various operations:```python x2 = x + 1``` If `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression. Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**; typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file? The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale: if the file is very large, the file contents and the generated list of words might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for the user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime). For example, we can launch processes and get their output as follows: ```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results; more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch the result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closures"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total` above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, and it can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers and the values are the functions and inputs for each calculation. `delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function creates a delayed object (`x`, `y`, `total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation. We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale: if the file is very large, the file contents and the generated list of words might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime.For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total` above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, and it can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers and the values are the functions and inputs for each calculation. `delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# incx, incy and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`incx, incy, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large faction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime.For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total` above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, and it can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers and the values are the functions and inputs for each calculation. `delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# incx, incy and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`incx, incy, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow. Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 759 ms, sys: 184 ms, total: 943 ms
Wall time: 952 ms
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.. ```pythondelayed_read_csv = delayed(pd.read_csv)a = delayed_read_csv(filenames[0])...total = ... execute%time total.compute() ```
###Code
%%time
# your verbose code here
# normal, sequential code
a = delayed(pd.read_csv)(filenames[0])
b = delayed(pd.read_csv)(filenames[1])
c = delayed(pd.read_csv)(filenames[2])
na = delayed(len)(a)
nb = delayed(len)(b)
nc = delayed(len)(c)
total = delayed(sum)([na, nb, nc])
print(total.compute())
###Output
3000000
CPU times: user 877 ms, sys: 245 ms, total: 1.12 s
Wall time: 538 ms
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
delayed(pd.read_csv)
%%time
# your concise code here
total = 0
for filename in filenames:
df = delayed(pd.read_csv)(filename)
total = total + delayed(len)(df)
print(total.compute())
%%time
# clearer, more concise version
dfs = [delayed(pd.read_csv)(f) for f in filenames]
lens = [delayed(len)(df) for df in dfs]
total = delayed(sum)(lens)
print(total.compute())
%load solutions/Foundations-03.py
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large faction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime.For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total` above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, and it can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers and the values are the functions and inputs for each calculation. `delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# incx, incy and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`incx, incy, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow. Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 2.25 s, sys: 266 ms, total: 2.51 s
Wall time: 1.22 s
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...

total = ...

# execute
%time total.compute()
```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
%load solutions/Foundations-03.py
###Output
3000000
CPU times: user 1.15 s, sys: 1.12 s, total: 2.27 s
Wall time: 574 ms
3000000
CPU times: user 1.54 s, sys: 1.46 s, total: 3 s
Wall time: 631 ms
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
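###Markdown
The same idea can be taken a step further by reading fixed-size text blocks rather than lines, which is closer to how Dask partitions large files. The following is a rough sketch (not from the original tutorial, and it ignores words that straddle a block boundary):
###Code
import re

def count_words_in_blocks(path, blocksize=2**16):
    # stream the file in fixed-size blocks, keeping only a running count
    pattern = re.compile(r'\w+')
    count = 0
    with open(path, 'r') as f:
        while True:
            block = f.read(blocksize)
            if not block:
                break
            count += len(pattern.findall(block))
    return count

count_words_in_blocks('README.md')
###Output
_____no_output_____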
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
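###Markdown
The `concurrent.futures` module offers a higher-level take on the same background-execution pattern. A minimal sketch (the `slow_square` function is made up for illustration):
###Code
from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(x):
    time.sleep(0.5)          # stand-in for network or disk latency
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(slow_square, i) for i in range(4)]   # submit returns immediately
    results = [f.result() for f in futures]                     # result() blocks until done

results
###Output
_____no_output_____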
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield y
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
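###Markdown
Tying these patterns back to Dask (a hedged aside): `dask.delayed` belongs to the same family of deferred-execution tools as `lambda`, `functools.partial` and generators, with the extra twist that the deferred call is recorded in a task graph that a scheduler can run in parallel.
###Code
from dask import delayed

# like functools.partial(add, x, y), but the call becomes a node in a task graph
z = delayed(add)(x, y)
z.compute()   # evaluated only on request
###Output
_____no_output_____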
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
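###Markdown
For the curious, a hand-written graph in the same dictionary format (a minimal sketch following the custom-graphs documentation linked above): keys name results, and a tuple of `(callable, *arguments)` describes a task. It can be evaluated with one of the low-level schedulers, such as the simple synchronous `dask.get`.
###Code
import dask

dsk = {
    'a': 1,
    'b': 2,
    'c': (sum, ['a', 'b']),   # sum([a, b]) once the keys are substituted
}
dask.get(dsk, 'c')
###Output
_____no_output_____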
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in the previous notebook.
###Code
from dask.distributed import Client
client = Client(n_workers=4)
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...

total = ...

# execute
%time total.compute()
```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
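###Markdown
A hedged aside, not part of the original exercise: the delayed objects built above behave like placeholders, so ordinary operations on them simply extend the task graph. The **Notes** below spell out exactly which operations are supported.
###Code
# `total` and `csvs` come from the solution above; nothing runs until .compute()
total_plus_one = total + 1        # arithmetic on a Delayed gives another Delayed
first_head = csvs[0].head()       # method calls are recorded lazily too
total_plus_one.compute()
###Output
_____no_output_____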
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
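###Markdown
The `subprocess` pattern sketched above, made concrete. This is a hedged example that launches the current Python interpreter so it runs on any platform:
###Code
import subprocess
import sys

p = subprocess.Popen([sys.executable, '-c', "print('hello from a background process')"],
                     stdout=subprocess.PIPE)
print(p.returncode)           # None - the exit status has not been collected yet
out = p.communicate()[0]      # blocks until the child process finishes
print(p.returncode, out)
###Output
_____no_output_____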
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in the previous notebook.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
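As a small illustrative sketch (using a made-up helper `square`, not defined elsewhere in this notebook), the function form keeps the original callable untouched:

```python
from dask import delayed

def square(x):               # ordinary function, still callable directly
    return x * x

square(4)                    # 16, computed right away
lazy_square = delayed(square)
lazy_square(4).compute()     # 16, but only when .compute() is requested
```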
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...

total = ...

# execute
%time total.compute()
```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
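###Markdown
A hedged aside building on the solution above: `dask.compute` can evaluate several delayed objects in one call, so the three per-file lengths can be materialised together without summing them.
###Code
import dask

# `lens` is the list of delayed lengths built in the concise solution above
dask.compute(*lens)   # a tuple with one length per CSV file
###Output
_____no_output_____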
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
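###Markdown
One practical wrinkle, anticipating the question below (a hedged sketch): if the worker thread raises before putting anything on the queue, a bare `q.get()` blocks forever, so polling with a timeout is a common defensive pattern.
###Code
import queue

# the queue was already drained by the q.get() above, so this demonstrates the
# timeout path rather than returning data
try:
    data = q.get(timeout=5)
except queue.Empty:
    data = None
    print("no result after 5 seconds - inspect the worker thread for errors")
###Output
_____no_output_____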
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
Wall time: 1.74 s
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...

total = ...

# execute
%time total.compute()
```
###Code
%%time
# your verbose code here
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
na = delayed(len)(a)
nb = delayed(len)(b)
nc = delayed(len)(c)
total = delayed(sum)([na, nb, nc])
print(total)
###Output
Delayed('sum-01764ecf-ac57-4ae8-8189-c436ad1c1a7e')
Wall time: 18.9 ms
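###Markdown
As a quick sanity check (a hedged aside, not part of the original exercise), visualizing the new graph should show three independent read_csv -> len branches feeding a single sum node:
###Code
total.visualize()
###Output
_____no_output_____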
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
%%time
# your concise code here
delayed_read_csv = delayed(pd.read_csv)
files = [delayed_read_csv(filename) for filename in filenames]
lens = [delayed(len)(file) for file in files]
total = delayed(sum)(lens)
total.compute()
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
3000000
Wall time: 786 ms
3000000
Wall time: 874 ms
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield y
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadsAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the Numpy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily - to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g., ```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# incx, incy and total are all delayed objects.
# They contain a prescription of how to execute
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`incx, incy, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrated the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
%time total.compute()
###Output
CPU times: user 445 µs, sys: 4.37 ms, total: 4.82 ms
Wall time: 2.56 ms
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 535 ms, sys: 93.2 ms, total: 628 ms
Wall time: 627 ms
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...

total = ...

# execute
%time total.compute()
```
###Code
%%time
# your verbose code here
import pandas as pd
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
total = delayed(sum)([na, nb, nc])
%time total.compute()
###Output
CPU times: user 633 ms, sys: 394 ms, total: 1.03 s
Wall time: 427 ms
CPU times: user 636 ms, sys: 399 ms, total: 1.04 s
Wall time: 433 ms
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
%%time
# your concise code here
lens = []
for i in range(3):
df = delayed_read_csv(filenames[i])
ndf = delayed_len(df)
lens.append(ndf)
total = delayed(sum)(lens)
%time total.compute()
%load solutions/Foundations-03.py
###Output
_____no_output_____
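###Markdown
A hedged aside on the loop-built version: because `lens` is just a list of delayed objects, `dask.compute` can evaluate them all in one pass, which is handy when the per-file lengths are wanted as well as the total.
###Code
import dask

lengths = dask.compute(*lens)   # tuple of individual file lengths
sum(lengths)                    # same value as total.compute()
###Output
_____no_output_____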
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=('http://www.google.com', q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()[:50]
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield y
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
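For instance, wrapping an ordinary, undecorated function on the fly might look like the following small sketch (`double` is just a throwaway helper introduced here for illustration):

```python
def double(i):
    return 2 * i

lazy_double = delayed(double)   # the original `double` function is left unchanged
task = lazy_double(10)          # builds a task; nothing has been executed yet
task.compute()                  # runs the task now and returns 20
```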
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrates the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
_____no_output_____
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...
total = ...

# execute
%time total.compute()
```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
_____no_output_____
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
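# A sketch of the same idea using concurrent.futures (the helper below is defined only
# for this illustration): submit() hands the work to a thread pool and returns a Future
# immediately, and .result() blocks like q.get() above. Unlike a bare Thread, the Future
# also re-raises any exception raised inside the worker, which makes debugging easier.
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    return urllib.request.urlopen(url).read()

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(fetch, sources.lazy_url)   # starts running in the background
    payload = future.result()                       # waits for the download to finish
len(payload)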
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____
###Markdown
Lazy execution Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong. Prelude As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:- process data that doesn't fit into memory by breaking it into blocks and specifying task chains- parallelize execution of tasks across cores and even nodes of a cluster- move computation to the data rather than the other way around, to minimize communication overheadAll of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested. Dask is a graph execution engine Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in the previous notebook.
###Code
from dask import delayed
@delayed
def inc(x):
return x + 1
@delayed
def add(x, y):
return x + y
###Output
_____no_output_____
###Markdown
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,```python delayed_inc = delayed(inc)```
###Code
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to carry out the computation
###Output
_____no_output_____
###Markdown
Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
###Code
total.visualize()
###Output
_____no_output_____
###Markdown
But so far, no functions have actually been executed. This demonstrates the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph execution part of Dask.To run the "graph" in the visualization, and actually get a result, do:
###Code
# execute all tasks
total.compute()
###Output
_____no_output_____
###Markdown
**Why should you care about this?**By building a specification of the calculation we want to carry out before executing anything, we can pass the specification to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation with care for minimizing the amount of data held in memory, while parallelizing over the tasks that make up a graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving onto a new branch.With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) work flow.![this](images/grid_search_schedule.gif) Exercise We will apply `delayed` to a real data processing task, albeit a simple one.Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
###Code
s3options = {'anon': True,
"client_kwargs":{
'endpoint_url': 'http://minio.minio:9000'
}}
import pandas as pd
import os
filenames = [os.path.join('s3://', 'sci-data', 'dask-tutorial-data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0], storage_options=s3options)
b = pd.read_csv(filenames[1], storage_options=s3options)
c = pd.read_csv(filenames[2], storage_options=s3options)
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
###Output
3000000
CPU times: user 1.28 s, sys: 634 ms, total: 1.92 s
Wall time: 3.88 s
###Markdown
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.

```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...
total = ...

# execute
%time total.compute()
```
###Code
# your verbose code here
###Output
_____no_output_____
###Markdown
Next, repeat this using loops, rather than writing out all the variables.
###Code
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0], storage_options=s3options)
b = delayed_read_csv(filenames[1], storage_options=s3options)
c = delayed_read_csv(filenames[2], storage_options=s3options)
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn, storage_options=s3options) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
###Output
3000000
CPU times: user 1.06 s, sys: 357 ms, total: 1.42 s
Wall time: 1.42 s
3000000
CPU times: user 1.03 s, sys: 194 ms, total: 1.22 s
Wall time: 1.34 s
###Markdown
**Notes**Delayed objects support various operations:```python x2 = x + 1``` if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.Operations which are *not* supported include mutation, setter methods, iteration (for) and bool (predicate). Appendix: Further detail and examples The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial. Example 1: simple word count This directory contains a file called `README.md`. How would you count the number of words in that file?The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
###Code
import re
splitter = re.compile(r'\w+')
with open('README.md', 'r') as f:
data = f.read()
result = len(splitter.findall(data))
result
###Output
_____no_output_____
###Markdown
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily-large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
###Code
result = 0
with open('README.md', 'r') as f:
for line in f:
result += len(splitter.findall(line))
result
###Output
_____no_output_____
###Markdown
Example 2: background execution There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).For example, we can launch processes and get their output as follows:```python import subprocess p = subprocess.Popen(command, stdout=subprocess.PIPE) p.returncode``` The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete). Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
###Code
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request
def get_webdata(url, q):
u = urllib.request.urlopen(url)
# raise ValueError
q.put(u.read())
q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
###Output
_____no_output_____
###Markdown
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error? Example 3: delayed execution There are many ways in Python to specify the computation you want to execute, but only run it *later*.
###Code
def add(x, y):
return x + y
# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)
# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()
# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()
# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
res = x
yield res
res += y
yield res
g = gen()
# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
###Output
_____no_output_____
###Markdown
Dask graphs Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
###Code
total.dask
dict(total.dask)
###Output
_____no_output_____ |
code/Elipsoide_Problema_Direto_Clark_Comparacao.ipynb | ###Markdown
Elipsoide Problema Direto_Clark_Comparacao (ellipsoid forward problem, comparison with Clark) Things to import
###Code
import numpy as np
from scipy import linalg
from matplotlib import pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import my functions from an external file
###Code
import Elipsoide_Clark_FAT_Unificado as me
###Output
_____no_output_____
###Markdown
Input
###Code
Xp = np.array([-200., -100., 0., 100., 200.])
Yp = np.zeros_like(Xp)
Zp = np.zeros_like(Xp)
# xc: x position, yc: y position and h: depth (true source values)
xc = 0.
yc = 0.
zc = 300.
# Ellipsoid orientation angles
azimuth = 50.
delta = 45.
gamma = -45.
# Ellipsoid axes
a = 250.
b = 150.
c = 100.
# Set the inclination and declination of the regional field
inten, inc, dec = 60000., -65., -80.0 #nT, degrees, degrees
################################################################################################################################
################################################################################################################################
model = []
# Create an ellipsoid model (Prolate)
model.append(me.Ellipsoid(Xp, Yp, Zp, xc, yc, zc, a, b, c, azimuth, delta, gamma,
{'remanence': np.array([0, 90., 0.]),
'k1': np.array([0.001, 0., 90.]),
'k2': np.array([0.001, 0., 180.]),
'k3': np.array([0.001, 90., 0.])}
))
model[0].mcon
###Output
_____no_output_____
###Markdown
Calculations
###Code
# Calculate the anomaly for a given regional field (Prolate)
JRD_cart = me.jrd_cartesiano (inten,inc,dec,model)
Bx = me.bx_c (Xp,Yp,Zp,inten,inc,dec,model)
By = me.by_c (Xp,Yp,Zp,inten,inc,dec,model)
Bz = me.bz_c (Xp,Yp,Zp,inten,inc,dec,model)
Tf = me.tf_c (Xp,Yp,Zp,inten,inc,dec,model)
JRD_cart
JRD_cart[0][2]+90.
Bz_Clark = np.array([-11.4, -32.2, -66.7, -72.0, -29.3])
Bt_Clark = np.array([5.9, 23.6, 57.1, 67.5, 28.3])
###Output
_____no_output_____
###Markdown
Result of my function
###Code
plt.figure(figsize=(8,8))
plt.plot()
plt.plot(Xp, Bz, '-ko', label="Own implementation")
plt.plot(Xp, Bz_Clark, '--ks', label="Clark implementation")
plt.xlabel('Distance (m)')
plt.ylabel('Magnetic field (nT)')
plt.title('Bz')
plt.grid(True)
plt.legend()
#plt.savefig('Bz_Emerson.jpg', dpi=200)
plt.show()
plt.figure(figsize=(8,8))
plt.plot()
plt.plot(Xp, Tf, '-ko', label='Own implementation')
plt.plot(Xp, Bt_Clark, '--ks', label="Clark implementation")
plt.xlabel('Distance (m)')
plt.ylabel('Magnetic field (nT)')
plt.title('Approximate total-field anomaly')
plt.grid(True)
plt.legend()
#plt.savefig('Anomalia_Emerson.jpg', dpi=200)
plt.show()
###Output
_____no_output_____ |
Simulation/HMC_sim.ipynb | ###Markdown
1. Univariate normalModel specification:$$\mu\sim N(0,1)$$$$X|\mu\sim N(\mu, 1)$$We simulate data $X$ from $N(1, 1)$ and compare the posterior samples from HMC and the theoretical posterior distribution.
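Since the normal prior and normal likelihood are conjugate, the exact posterior used for the comparison has a closed form. With $n$ observations, prior $\mu\sim N(\mu_0,\tau^2)$ and known observation variance $\sigma^2$,$$\mu \mid X \sim N\left(\frac{\mu_0/\tau^2 + n\bar{X}/\sigma^2}{1/\tau^2 + n/\sigma^2},\ \frac{1}{1/\tau^2 + n/\sigma^2}\right),$$which for $\mu_0=0$ and $\tau^2=\sigma^2=1$ reduces to $N\big(n\bar{X}/(n+1),\,1/(n+1)\big)$. The code below plugs the sample variance in for $\sigma^2$ when computing `mean_pos` and `sig2_pos`, and $U(\mu)=\mu^2/2+\sum_i(X_i-\mu)^2/2$ is the corresponding negative log posterior (up to an additive constant), with `gradU` its derivative.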
###Code
# Imports and sampler settings (assumed here: this excerpt does not define them, and
# kde_stack, used at the end of the bivariate example, is assumed to be a plotting
# helper defined elsewhere)
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import multivariate_normal

niter = 10000    # number of HMC iterations (assumed value)
nburnin = 1000   # number of burn-in draws to discard (assumed value)
nsample = 5000   # number of draws from the exact posterior (assumed value)

np.random.seed(2019)
sample_mean = 1
sample_sig2 = 1
X = np.random.normal(sample_mean, sample_sig2, size = 10000)
U = lambda mu: mu**2/2 + np.sum((X-mu)**2/2)
gradU = lambda mu: mu + np.sum(mu-X)
# theoretical distribution
sig2_pos = 1/(1/1 + len(X) / np.cov(X))
mean_pos = (0 + X.mean()*len(X)/np.cov(X))/(1/1 + len(X) / np.cov(X))
dist = multivariate_normal(mean_pos, (sig2_pos))
sim = np.random.normal(mean_pos, np.sqrt(sig2_pos), nsample)
def leapfrog(gradU, p, r, eps = 0.01, L = 100, M_i = np.array([[1,0],[0,1]])):
"""
Using leapfrog to discretize
Args:
gradU: gradient of potential energy (posterior)
p: position (parameters)
r: momentum (auxiliary)
eps: stepsize
L: # of steps
M_i: inversion of preconditioned mass matrix (omitted since assumed to be identity)
"""
r = r - eps/2 * gradU(p)
for i in range(L-1):
p = p + eps * r
r = r - eps * gradU(p)
p = p + eps * r
r = r - eps/2 * gradU(p)
return p, r
def log_r(U, p0, r0, p, r, M_i = np.array([[1,0],[0,1]])):
"""log of acceptance ratio"""
# log acceptance ratio = H(p0, r0) - H(p, r), with H = U + kinetic energy
return (U(p0) + 1/2*r0.dot(r0)) - (U(p) + 1/2*r.dot(r))
eps = 0.0005
L = 50
# M_i = np.array([[1]]) # since we assume the mass matrix M to be the identity (or 1), it can be omitted
samples = np.zeros(niter+1)
p = np.array([0.0])
samples[0] = p
np.random.seed(2019)
for k in range(niter):
r0 = np.random.normal(0,1,1)
p, r = leapfrog(gradU, p, r0, eps, L)
# M-H
p0 = samples[k]
a = np.exp(log_r(U, p0, r0, p, r))
u = np.random.rand()
if u < a:
samples[k+1] = p
else:
samples[k+1] = p0
print("%.2f %%" % np.round((k+1)/niter*100,2), end = "\r")
plt.figure(figsize=(10,6))
sns.kdeplot(samples[nburnin+1:], label = 'Samples with HMC')
sns.kdeplot(sim, label = 'Samples from true posterior')
plt.title("HMC (univariate normal)")
plt.savefig('HMC_1d.png');
###Output
_____no_output_____
###Markdown
2. Bivariate normalModel specification:$$\mu\sim N(\mathbf 0,\mathbf I_{2\times2})$$$$X|\mu\sim N(\mu, \begin{bmatrix}1&0.75\\0.75&1\end{bmatrix})$$We simulate data $X$ from $N(\begin{bmatrix}1\\-1\end{bmatrix}, \begin{bmatrix}1&0.75\\0.75&1\end{bmatrix})$ and compare the posterior samples from HMC and the theoretical posterior distribution.
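As in the univariate case the model is conjugate, so the exact posterior sampled for comparison is available in closed form: with $n$ observations, prior $\mu\sim N(\mu_0,\Sigma_0)$ and known covariance $\Sigma$,$$\Sigma_{\text{pos}} = \left(\Sigma_0^{-1} + n\Sigma^{-1}\right)^{-1}, \qquad \mu_{\text{pos}} = \Sigma_{\text{pos}}\left(\Sigma_0^{-1}\mu_0 + n\Sigma^{-1}\bar{x}\right),$$which is exactly what `Sig_pos` and `mean_pos` compute below with $\mu_0=\mathbf{0}$, $\Sigma_0=\mathbf{I}$, and the sample covariance standing in for $\Sigma$.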
###Code
mean_or = np.array([1,-1])
sig_or = np.array([[1,0.75],[0.75,1]])
sig_or_i = la.inv(sig_or)
np.random.seed(2019)
data = multivariate_normal(mean_or, sig_or).rvs(100)
Sig_pos = la.inv(len(data)*la.inv(np.cov(data.T)) + np.eye(2))
mean_pos = (la.inv(len(data)*la.inv(np.cov(data.T)) + np.eye(2)) @
(len(data)*la.inv(np.cov(data.T))@np.mean(data,0) + np.eye(2)@np.zeros(2)))
sim = multivariate_normal(mean_pos, Sig_pos).rvs(nsample)
U = lambda mu: np.sum(np.diag((data - mu)@sig_or_i@(data - mu).T/2)) + 1/2 * mu.T @ mu
gradU = lambda mu: -sig_or_i.dot((data-mu).T).sum(1) + mu
eps = 0.01
L = 100
np.random.seed(2019)
orbit = np.zeros((niter+1, 2))
p = np.array([0,0.0])
orbit[0] = p
for k in range(niter):
r0 = np.random.normal(0,1,2)
p, r = leapfrog(gradU, p, r0, 0.01, L)
# accept-reject
p0 = orbit[k]
a = np.exp(log_r(U, p0, r0, p, r))
u = np.random.rand()
if u < a:
orbit[k+1] = p
else:
orbit[k+1] = p0
print("%.2f %%" % np.round((k+1)/niter*100,2), end = "\r")
kde_stack(orbit[nburnin+1:,0], orbit[nburnin+1:,1], sim[:,0], sim[:,1])
###Output
_____no_output_____ |
notebooks/features/classification/Classification - Before and After SynapseML.ipynb | ###Markdown
Classification - Before and After SynapseML 1. IntroductionIn this tutorial, we perform the same classification task in twodifferent ways: once using plain **`pyspark`** and once using the**`synapseml`** library. The two methods yield the same performance,but one of the two libraries is drastically simpler to use and iterateon (can you guess which one?).The task is simple: Predict whether a user's review of a book sold onAmazon is good (rating > 3) or bad based on the text of the review. Weaccomplish this by training LogisticRegression learners with differenthyperparameters and choosing the best model.
###Code
import os
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
###Output
_____no_output_____
###Markdown
2. Read the dataWe download and read in the data. We show a sample below:
###Code
rawData = spark.read.parquet("wasbs://[email protected]/BookReviewsFromAmazon10K.parquet")
rawData.show(5)
###Output
_____no_output_____
###Markdown
3. Extract more features and process dataReal data however is more complex than the above dataset. It is commonfor a dataset to have features of multiple types: text, numeric,categorical. To illustrate how difficult it is to work with thesedatasets, we add two numerical features to the dataset: the **wordcount** of the review and the **mean word length**.
###Code
from pyspark.sql.functions import udf
from pyspark.sql.types import *
def wordCount(s):
return len(s.split())
def wordLength(s):
import numpy as np
ss = [len(w) for w in s.split()]
return round(float(np.mean(ss)), 2)
wordLengthUDF = udf(wordLength, DoubleType())
wordCountUDF = udf(wordCount, IntegerType())
from synapse.ml.stages import UDFTransformer
wordLength = "wordLength"
wordCount = "wordCount"
wordLengthTransformer = UDFTransformer(inputCol="text", outputCol=wordLength, udf=wordLengthUDF)
wordCountTransformer = UDFTransformer(inputCol="text", outputCol=wordCount, udf=wordCountUDF)
from pyspark.ml import Pipeline
data = Pipeline(stages=[wordLengthTransformer, wordCountTransformer]) \
.fit(rawData).transform(rawData) \
.withColumn("label", rawData["rating"] > 3).drop("rating")
data.show(5)
###Output
_____no_output_____
###Markdown
4a. Classify using pysparkTo choose the best LogisticRegression classifier using the `pyspark` library, you need to *explicitly* perform the following steps:1. Process the features: * Tokenize the text column * Hash the tokenized column into a vector using hashing * Merge the numeric features with the vector in the step above2. Process the label column: cast it into the proper type.3. Train multiple LogisticRegression algorithms on the `train` dataset with different hyperparameters4. Compute the area under the ROC curve for each of the trained models and select the model with the highest metric as computed on the `test` dataset5. Evaluate the best model on the `validation` setAs you can see below, there is a lot of work involved and a lot of steps where something can go wrong!
###Code
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.feature import VectorAssembler
# Featurize text column
tokenizer = Tokenizer(inputCol="text", outputCol="tokenizedText")
numFeatures = 10000
hashingScheme = HashingTF(inputCol="tokenizedText",
outputCol="TextFeatures",
numFeatures=numFeatures)
tokenizedData = tokenizer.transform(data)
featurizedData = hashingScheme.transform(tokenizedData)
# Merge text and numeric features in one feature column
featureColumnsArray = ["TextFeatures", "wordCount", "wordLength"]
assembler = VectorAssembler(
inputCols = featureColumnsArray,
outputCol="features")
assembledData = assembler.transform(featurizedData)
# Select only columns of interest
# Convert rating column from boolean to int
processedData = assembledData \
.select("label", "features") \
.withColumn("label", assembledData.label.cast(IntegerType()))
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.classification import LogisticRegression
# Prepare data for learning
train, test, validation = processedData.randomSplit([0.60, 0.20, 0.20], seed=123)
# Train the models on the 'train' data
lrHyperParams = [0.05, 0.1, 0.2, 0.4]
logisticRegressions = [LogisticRegression(regParam = hyperParam)
for hyperParam in lrHyperParams]
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction",
metricName="areaUnderROC")
metrics = []
models = []
# Select the best model
for learner in logisticRegressions:
model = learner.fit(train)
models.append(model)
scoredData = model.transform(test)
metrics.append(evaluator.evaluate(scoredData))
bestMetric = max(metrics)
bestModel = models[metrics.index(bestMetric)]
# Get AUC on the validation dataset
scoredVal = bestModel.transform(validation)
print(evaluator.evaluate(scoredVal))
###Output
_____no_output_____
###Markdown
4b. Classify using synapsemlLife is a lot simpler when using `synapseml`!1. The **`TrainClassifier`** Estimator featurizes the data internally, as long as the columns selected in the `train`, `test`, `validation` dataset represent the features2. The **`FindBestModel`** Estimator finds the best model from a pool of trained models by finding the model which performs best on the `test` dataset given the specified metric3. The **`ComputeModelStatistics`** Transformer computes the different metrics on a scored dataset (in our case, the `validation` dataset) at the same time
###Code
from synapse.ml.train import TrainClassifier, ComputeModelStatistics
from synapse.ml.automl import FindBestModel
# Prepare data for learning
train, test, validation = data.randomSplit([0.60, 0.20, 0.20], seed=123)
# Train the models on the 'train' data
lrHyperParams = [0.05, 0.1, 0.2, 0.4]
logisticRegressions = [LogisticRegression(regParam = hyperParam)
for hyperParam in lrHyperParams]
lrmodels = [TrainClassifier(model=lrm, labelCol="label", numFeatures=10000).fit(train)
for lrm in logisticRegressions]
# Select the best model
bestModel = FindBestModel(evaluationMetric="AUC", models=lrmodels).fit(test)
# Get AUC on the validation dataset
predictions = bestModel.transform(validation)
metrics = ComputeModelStatistics().transform(predictions)
print("Best model's AUC on validation set = "
+ "{0:.2f}%".format(metrics.first()["AUC"] * 100))
###Output
_____no_output_____
###Markdown
Classification - Before and After SynapseML 1. IntroductionIn this tutorial, we perform the same classification task in twodifferent ways: once using plain **`pyspark`** and once using the**`synapseml`** library. The two methods yield the same performance,but one of the two libraries is drastically simpler to use and iterateon (can you guess which one?).The task is simple: Predict whether a user's review of a book sold onAmazon is good (rating > 3) or bad based on the text of the review. Weaccomplish this by training LogisticRegression learners with differenthyperparameters and choosing the best model.
###Code
import os
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
###Output
_____no_output_____
###Markdown
2. Read the dataWe download and read in the data. We show a sample below:
###Code
rawData = spark.read.parquet(
"wasbs://[email protected]/BookReviewsFromAmazon10K.parquet"
)
rawData.show(5)
###Output
_____no_output_____
###Markdown
3. Extract more features and process dataReal data however is more complex than the above dataset. It is commonfor a dataset to have features of multiple types: text, numeric,categorical. To illustrate how difficult it is to work with thesedatasets, we add two numerical features to the dataset: the **wordcount** of the review and the **mean word length**.
###Code
from pyspark.sql.functions import udf
from pyspark.sql.types import *
def wordCount(s):
return len(s.split())
def wordLength(s):
import numpy as np
ss = [len(w) for w in s.split()]
return round(float(np.mean(ss)), 2)
wordLengthUDF = udf(wordLength, DoubleType())
wordCountUDF = udf(wordCount, IntegerType())
from synapse.ml.stages import UDFTransformer
wordLength = "wordLength"
wordCount = "wordCount"
wordLengthTransformer = UDFTransformer(
inputCol="text", outputCol=wordLength, udf=wordLengthUDF
)
wordCountTransformer = UDFTransformer(
inputCol="text", outputCol=wordCount, udf=wordCountUDF
)
from pyspark.ml import Pipeline
data = (
Pipeline(stages=[wordLengthTransformer, wordCountTransformer])
.fit(rawData)
.transform(rawData)
.withColumn("label", rawData["rating"] > 3)
.drop("rating")
)
data.show(5)
###Output
_____no_output_____
###Markdown
4a. Classify using pysparkTo choose the best LogisticRegression classifier using the `pyspark` library, you need to *explicitly* perform the following steps:1. Process the features: * Tokenize the text column * Hash the tokenized column into a vector using hashing * Merge the numeric features with the vector in the step above2. Process the label column: cast it into the proper type.3. Train multiple LogisticRegression algorithms on the `train` dataset with different hyperparameters4. Compute the area under the ROC curve for each of the trained models and select the model with the highest metric as computed on the `test` dataset5. Evaluate the best model on the `validation` setAs you can see below, there is a lot of work involved and a lot of steps where something can go wrong!
###Code
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.feature import VectorAssembler
# Featurize text column
tokenizer = Tokenizer(inputCol="text", outputCol="tokenizedText")
numFeatures = 10000
hashingScheme = HashingTF(
inputCol="tokenizedText", outputCol="TextFeatures", numFeatures=numFeatures
)
tokenizedData = tokenizer.transform(data)
featurizedData = hashingScheme.transform(tokenizedData)
# Merge text and numeric features in one feature column
featureColumnsArray = ["TextFeatures", "wordCount", "wordLength"]
assembler = VectorAssembler(inputCols=featureColumnsArray, outputCol="features")
assembledData = assembler.transform(featurizedData)
# Select only columns of interest
# Convert rating column from boolean to int
processedData = assembledData.select("label", "features").withColumn(
"label", assembledData.label.cast(IntegerType())
)
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.classification import LogisticRegression
# Prepare data for learning
train, test, validation = processedData.randomSplit([0.60, 0.20, 0.20], seed=123)
# Train the models on the 'train' data
lrHyperParams = [0.05, 0.1, 0.2, 0.4]
logisticRegressions = [
LogisticRegression(regParam=hyperParam) for hyperParam in lrHyperParams
]
evaluator = BinaryClassificationEvaluator(
rawPredictionCol="rawPrediction", metricName="areaUnderROC"
)
metrics = []
models = []
# Select the best model
for learner in logisticRegressions:
model = learner.fit(train)
models.append(model)
scoredData = model.transform(test)
metrics.append(evaluator.evaluate(scoredData))
bestMetric = max(metrics)
bestModel = models[metrics.index(bestMetric)]
# Get AUC on the validation dataset
scoredVal = bestModel.transform(validation)
print(evaluator.evaluate(scoredVal))
###Output
_____no_output_____
###Markdown
4b. Classify using synapsemlLife is a lot simpler when using `synapseml`!1. The **`TrainClassifier`** Estimator featurizes the data internally, as long as the columns selected in the `train`, `test`, `validation` dataset represent the features2. The **`FindBestModel`** Estimator finds the best model from a pool of trained models by finding the model which performs best on the `test` dataset given the specified metric3. The **`ComputeModelStatistics`** Transformer computes the different metrics on a scored dataset (in our case, the `validation` dataset) at the same time
###Code
from synapse.ml.train import TrainClassifier, ComputeModelStatistics
from synapse.ml.automl import FindBestModel
# Prepare data for learning
train, test, validation = data.randomSplit([0.60, 0.20, 0.20], seed=123)
# Train the models on the 'train' data
lrHyperParams = [0.05, 0.1, 0.2, 0.4]
logisticRegressions = [
LogisticRegression(regParam=hyperParam) for hyperParam in lrHyperParams
]
lrmodels = [
TrainClassifier(model=lrm, labelCol="label", numFeatures=10000).fit(train)
for lrm in logisticRegressions
]
# Select the best model
bestModel = FindBestModel(evaluationMetric="AUC", models=lrmodels).fit(test)
# Get AUC on the validation dataset
predictions = bestModel.transform(validation)
metrics = ComputeModelStatistics().transform(predictions)
print(
"Best model's AUC on validation set = "
+ "{0:.2f}%".format(metrics.first()["AUC"] * 100)
)
###Output
_____no_output_____ |
end_to_end/2-lineage-train-assess-bias-tune-registry-e2e.ipynb | ###Markdown
Part 2: Train, Check Bias, Tune, Record Lineage, and Register a Model [Overview](./0-AutoClaimFraudDetection.ipynb)* [Notebook 0 : Overview, Architecture and Data Exploration](./0-AutoClaimFraudDetection.ipynb)* [Notebook 1: Data Prep, Process, Store Features](./1-data-prep-e2e.ipynb)* **[Notebook 2: Train, Check Bias, Tune, Record Lineage, and Register a Model](./2-lineage-train-assess-bias-tune-registry-e2e.ipynb)** * **[Architecture](train)** * **[Train a model using XGBoost](aud-train-model)** * **[Model lineage with artifacts and associations](model-lineage)** * **[Evaluate the model for bias with Clarify](check-bias)** * **[Deposit Model and Lineage in SageMaker Model Registry](model-registry)*** [Notebook 3: Mitigate Bias, Train New Model, Store in Registry](./3-mitigate-bias-train-model2-registry-e2e.ipynb)* [Notebook 4: Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)* [Notebook 5 : Create and Run an End-to-End Pipeline to Deploy the Model](./5-pipeline-e2e.ipynb) In this section we will show how you can assess pre-training and post-training bias with SageMaker Clarify, Train the Model using XGBoost on SageMaker, and then finally deposit it in the Model Registry, along with the Lineage of Artifacts that were created along the way: data, code and model metadata.In this second model, you will fix the gender imbalance in the dataset using SMOTE and train another model using XGBoost. This model will also be saved to our registry and eventually approved for deployment. Architecture for the ML Lifecycle Stage: Train, Check Bias, Tune, Record Lineage, Register Model[overview](overview)----![train-assess-tune-register](./images/e2e-2-pipeline-v3b.png) Install required and/or update libraries
###Code
!python -m pip install -Uq pip
!python -m pip install -q awswrangler==2.2.0 imbalanced-learn==0.7.0 sagemaker==2.23.1 boto3==1.16.48
###Output
_____no_output_____
###Markdown
To apply the update to the current kernel, run the following code to refresh the kernel.
###Code
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Load stored variablesRun the cell below to load any previously created variables. You should see a print-out of the existing variables. If you don't see anything, you may need to create them again, or it may be your first time running this notebook.
###Code
%store -r
%store
###Output
_____no_output_____
###Markdown
**Important: You must have run the previous sequential notebooks to retrieve variables using the StoreMagic command.** Import libraries
###Code
import json
import time
import boto3
import sagemaker
import numpy as np
import pandas as pd
import awswrangler as wr
from sagemaker.xgboost.estimator import XGBoost
from model_package_src.inference_specification import InferenceSpecification
###Output
_____no_output_____
###Markdown
Set region, boto3 and SageMaker SDK variables
###Code
#You can change this to a region of your choice
import sagemaker
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client('s3', region_name=region)
sagemaker_boto_client = boto_session.client('sagemaker')
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session,
sagemaker_client=sagemaker_boto_client)
sagemaker_role = sagemaker.get_execution_role()
account_id = boto3.client('sts').get_caller_identity()["Account"]
# variables used for parameterizing the notebook run
estimator_output_path = f's3://{bucket}/{prefix}/training_jobs'
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
bias_report_1_output_path = f's3://{bucket}/{prefix}/clarify-output/bias_1'
xgb_model_name = 'xgb-insurance-claims-fraud-model'
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
predictor_instance_count = 1
predictor_instance_type = "ml.c5.xlarge"
batch_transform_instance_count = 1
batch_transform_instance_type = "ml.c5.xlarge"
claify_instance_count = 1
clairfy_instance_type = 'ml.c5.xlarge'
###Output
_____no_output_____
###Markdown
Train a model using XGBoost[overview](overview)----Once the training and test datasets have been persisted in S3, you can start training a model by defining which SageMaker Estimator you'd like to use. For this guide, you will use the [XGBoost Open Source Framework](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html) to train your model. This estimator is accessed via the SageMaker SDK, but mirrors the open source version of the [XGBoost Python package](https://xgboost.readthedocs.io/en/latest/python/index.html). Any functionality provided by the XGBoost Python package can be implemented in your training script. Set the hyperparametersThese are the parameters which will be sent to our training script in order to train the model. Although they are all defined as "hyperparameters" here, they can encompass XGBoost's [Learning Task Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmllearning-task-parameters), [Tree Booster Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmlparameters-for-tree-booster), or any other parameters you'd like to configure for XGBoost.
###Code
hyperparameters = {
"max_depth": "3",
"eta": "0.2",
"objective": "binary:logistic",
"num_round": "100",
}
%store hyperparameters
###Output
_____no_output_____
###Markdown
Create and fit the estimatorIf you want to explore the breadth of functionality offered by the SageMaker XGBoost Framework, you can read about all the configuration parameters by referencing the inheriting classes. The XGBoost class inherits from the Framework class and Framework inherits from the EstimatorBase class:* [XGBoost Estimator documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.htmlsagemaker.xgboost.estimator.XGBoost)* [Framework documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.Framework)* [EstimatorBase documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.EstimatorBase)
###Code
xgb_estimator = XGBoost(
entry_point = "xgboost_starter_script.py",
output_path = estimator_output_path,
code_location = estimator_output_path,
hyperparameters = hyperparameters,
role = sagemaker_role,
instance_count = train_instance_count,
instance_type = train_instance_type,
framework_version = "1.0-1")
if 'training_job_1_name' not in locals():
xgb_estimator.fit(inputs = {'train': train_data_uri})
training_job_1_name = xgb_estimator.latest_training_job.job_name
%store training_job_1_name
else:
print(f'Using previous training job: {training_job_1_name}')
###Output
_____no_output_____
###Markdown
Model lineage with artifacts and associations[Overview](aud-overview)----Amazon SageMaker ML Lineage Tracking creates and stores information about the steps of a machine learning (ML) workflow from data preparation to model deployment. With the tracking information you can reproduce the workflow steps, track model and dataset lineage, and establish model governance and audit standards. With SageMaker Lineage Tracking data scientists and model builders can do the following:* Keep a running history of model discovery experiments.* Establish model governance by tracking model lineage artifacts for auditing and compliance verification.* Clone and rerun workflows to experiment with what-if scenarios while developing models.* Share a workflow that colleagues can reproduce and enhance (for example, while collaborating on solving a business problem).* Clone and rerun workflows with additional debugging or logging routines, or new input variations for troubleshooting issues in production models. Register artifacts Although the `xgb_estimator` object retains much of the data we need to learn about how the model was trained, it is, in fact, an ephemeral object which SageMaker does not persist and cannot be re-instantiated at a later time. Although we lose some of its conveniences once it is gone, we can still get back all the data we need by accessing the training jobs it once created.
###Code
training_job_1_info = sagemaker_boto_client.describe_training_job(TrainingJobName=training_job_1_name)
###Output
_____no_output_____
###Markdown
Code artifact
###Code
# return any existing artifact which matches our training job's code arn
# ====>
# extract the training code uri and check if it's an existing artifact
code_s3_uri = training_job_1_info['HyperParameters']['sagemaker_submit_directory']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=code_s3_uri,
sagemaker_session=sagemaker_session))
# use existing artifact if it's already been created, otherwise create a new artifact
if matching_artifacts:
code_artifact = matching_artifacts[0]
print(f'Using existing artifact: {code_artifact.artifact_arn}')
else:
code_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainingScript',
source_uri=code_s3_uri,
artifact_type='Code',
sagemaker_session=sagemaker_session)
print(f'Create artifact {code_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_1_info['InputDataConfig'][0]['DataSource']['S3DataSource']['S3Uri']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=training_data_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f'Using existing artifact: {training_data_artifact.artifact_arn}')
else:
training_data_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainingData',
source_uri=training_data_s3_uri,
artifact_type='Dataset',
sagemaker_session=sagemaker_session)
print(f'Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Model artifact
###Code
trained_model_s3_uri = training_job_1_info['ModelArtifacts']['S3ModelArtifacts']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=trained_model_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f'Using existing artifact: {model_artifact.artifact_arn}')
else:
model_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainedModel',
source_uri=trained_model_s3_uri,
artifact_type='Model',
sagemaker_session=sagemaker_session)
print(f'Create artifact {model_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(TrialComponentName=training_job_1_name+'-aws-training-job')
trial_component_arn = trial_component['TrialComponentArn']
###Output
_____no_output_____
###Markdown
Input artifacts
###Code
input_artifacts = [code_artifact, training_data_artifact]
for a in input_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type='ContributedTo',
sagemaker_session=sagemaker_session)
print(f"Association with {a.artifact_type}: SUCCEESFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Output artifacts
###Code
output_artifacts = [model_artifact]
for a in output_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type='Produced',
sagemaker_session=sagemaker_session)
print(f"Association with {a.artifact_type}: SUCCESSFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Evaluate model for bias with Clarify[overview](aud-overview)----Amazon SageMaker Clarify helps improve your machine learning (ML) models by detecting potential bias and helping explain the predictions that models make. It helps you identify various types of bias in pretraining data and in posttraining that can emerge during model training or when the model is in production. SageMaker Clarify helps explain how these models make predictions using a feature attribution approach. It also monitors inferences models make in production for bias or feature attribution drift. The fairness and explainability functionality provided by SageMaker Clarify provides components that help AWS customers build less biased and more understandable machine learning models. It also provides tools to help you generate model governance reports which you can use to inform risk and compliance teams, and external regulators. You can reference the [SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-fairness-and-explainability.html) for more information about SageMaker Clarify. Create model from estimator
###Code
model_1_name = f'{prefix}-xgboost-pre-smote'
%store model_1_name
model_matches = sagemaker_boto_client.list_models(NameContains=model_1_name)['Models']
if not model_matches:
model_1 = sagemaker_session.create_model_from_job(
name=model_1_name,
training_job_name=training_job_1_info['TrainingJobName'],
role=sagemaker_role,
image_uri=training_job_1_info['AlgorithmSpecification']['TrainingImage'])
else:
print(f"Model {model_1_name} already exists.")
###Output
_____no_output_____
###Markdown
Check for data set bias and model biasWith SageMaker, we can check for pre-training and post-training bias. Pre-training metrics show pre-existing bias in the data, while post-training metrics show bias in the predictions from the model. Using the SageMaker SDK, we can specify which groups we want to check bias across and which metrics we'd like to show. To run the full Clarify job, you must un-comment the code in the cell below. Running the job will take ~15 minutes. If you wish to save time, you can view the results in the cell after next, which loads a pre-generated output if no bias job was run.
###Code
train_cols = wr.s3.read_csv(training_data_s3_uri).columns.to_list()
clarify_processor = sagemaker.clarify.SageMakerClarifyProcessor(
role=sagemaker_role,
instance_count=1,
instance_type='ml.c4.xlarge',
sagemaker_session=sagemaker_session)
bias_data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_uri,
s3_output_path=bias_report_1_output_path,
label='fraud',
headers=train_cols,
dataset_type='text/csv')
model_config = sagemaker.clarify.ModelConfig(
model_name=model_1_name,
instance_type=train_instance_type,
instance_count=1,
accept_type='text/csv')
predictions_config = sagemaker.clarify.ModelPredictedLabelConfig(probability_threshold=0.5)
bias_config = sagemaker.clarify.BiasConfig(
label_values_or_threshold=[0],
facet_name='customer_gender_female',
facet_values_or_threshold=[1])
# un-comment the code below to run the whole job
# if 'clarify_bias_job_1_name' not in locals():
# clarify_processor.run_bias(
# data_config=bias_data_config,
# bias_config=bias_config,
# model_config=model_config,
# model_predicted_label_config=predictions_config,
# pre_training_methods='all',
# post_training_methods='all')
# clarify_bias_job_1_name = clarify_processor.latest_job.name
# %store clarify_bias_job_1_name
# else:
#     print(f'Clarify job {clarify_bias_job_1_name} has already run successfully.')
###Output
_____no_output_____
###Markdown
Results will be stored in `/opt/ml/processing/output/report.pdf`Training a model to over 90 percent classification accuracy may be easily achievable on an imbalanced classification problem. Thus, expectations about classification accuracy that are in reality contingent on a balanced class distribution will lead to wrong, misleading assumptions and conclusions: they mislead the data scientist and viewers into believing that a model performs extremely well when, in fact, it does not. View results of Clarify job (shortcut)Running Clarify on your dataset or model can take ~15 minutes. If you don't have time to run the job, you can view the pre-generated results included with this demo. Otherwise, you can run the job by un-commenting the code in the cell above.
###Code
if 'clarify_bias_job_1_name' in locals():
    s3_client.download_file(Bucket=bucket, Key=f'{prefix}/clarify-output/bias_1/analysis.json', Filename='clarify_output/bias_1/analysis.json')
    print(f'Downloaded analysis from previous Clarify job: {clarify_bias_job_1_name}')
else:
print(f'Loading pre-generated analysis file...')
with open('clarify_output/bias_1/analysis.json', 'r') as f:
bias_analysis = json.load(f)
results = bias_analysis['pre_training_bias_metrics']['facets']['customer_gender_female'][0]['metrics'][1]
print(json.dumps(results, indent=4))
###Output
_____no_output_____
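###Markdown
To make the accuracy caveat above concrete, here is a minimal sketch (with hypothetical numbers, not taken from this dataset) showing that a model which always predicts the majority class can reach high accuracy while catching no fraud at all.
###Code
# Illustration only: a hypothetical 95/5 class split, not the actual claims data
import numpy as np

y_true = np.array([0] * 95 + [1] * 5)      # 5% positive (fraud) class
y_pred = np.zeros_like(y_true)             # trivial "model" that always predicts non-fraud
accuracy = (y_true == y_pred).mean()
fraud_recall = y_pred[y_true == 1].mean()  # fraction of fraud cases caught
print(f"accuracy: {accuracy:.2%}, fraud recall: {fraud_recall:.2%}")
###Output
_____no_output_____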
###Markdown
In this example dataset, the data is biased against females, with only 38.9% of the data samples coming from female customers. We will address this in the next notebook, where we show how we mitigate this class imbalance bias. Although we are only addressing Class Imbalance as an exemplar of bias statistics, you can also take many other facets of bias into consideration. For more detail, see [Fairness Measures for Machine Learning in Finance](https://pages.awscloud.com/rs/112-TZM-766/images/Fairness.Measures.for.Machine.Learning.in.Finance.pdf). For a more detailed example, look at [this](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.ipynb) GitHub example. For more detailed results, let's look at the generated report, which can be found at `s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf`
###Code
#uncomment to copy report and view
#!aws s3 cp s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf ./clarify_output
###Output
_____no_output_____
###Markdown
Deposit Model and Lineage in SageMaker Model Registry[overview](aud-overview)----Once a useful model has been trained and its artifacts properly associated, the next step is to save the model in a registry for future reference and possible deployment. Create Model Package GroupA Model Package Group holds multiple versions or iterations of a model. Though it is not required to create one for every model in the registry, Model Package Groups help organize various models which all have the same purpose and provide automatic versioning.
###Code
if 'mpg_name' not in locals():
mpg_name = prefix
%store mpg_name
print(f'Model Package Group name: {mpg_name}')
mpg_input_dict = {
'ModelPackageGroupName': mpg_name,
'ModelPackageGroupDescription': 'Insurance claim fraud detection'
}
matching_mpg = sagemaker_boto_client.list_model_package_groups(NameContains=mpg_name)['ModelPackageGroupSummaryList']
if matching_mpg:
print(f'Using existing Model Package Group: {mpg_name}')
else:
mpg_response = sagemaker_boto_client.create_model_package_group(**mpg_input_dict)
print(f'Create Model Package Group {mpg_name}: SUCCESSFUL')
%store mpg_name
###Output
_____no_output_____
###Markdown
Create Model Package for trained model Create and upload a metrics report
###Code
model_metrics_report = {'classification_metrics': {}}
for metric in training_job_1_info['FinalMetricDataList']:
stat = {metric['MetricName']: {'value': metric['Value']}}
model_metrics_report['classification_metrics'].update(stat)
with open('training_metrics.json', 'w') as f:
json.dump(model_metrics_report, f)
metrics_s3_key = f"{prefix}/training_jobs/{training_job_1_info['TrainingJobName']}/training_metrics.json"
s3_client.upload_file(Filename='training_metrics.json', Bucket=bucket, Key=metrics_s3_key)
###Output
_____no_output_____
###Markdown
Define the inference spec
###Code
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_1_info['AlgorithmSpecification']['TrainingImage'],
supports_gpu=False,
supported_content_types=['text/csv'],
supported_mime_types=['text/csv'])
mp_inference_spec['InferenceSpecification']['Containers'][0]['ModelDataUrl'] = training_job_1_info['ModelArtifacts']['S3ModelArtifacts']
###Output
_____no_output_____
###Markdown
Define model metricsMetrics other than model quality and bias can be defined. See the Boto3 documentation for [creating a model package](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.htmlSageMaker.Client.create_model_package).
###Code
model_metrics = {
'ModelQuality': {
'Statistics': {
'ContentType': 'application/json',
'S3Uri': f's3://{bucket}/{prefix}/{metrics_s3_key}'
}
},
'Bias': {
'Report': {
'ContentType': 'application/json',
'S3Uri': f'{bias_report_1_output_path}/analysis.json'
}
}
}
mp_input_dict = {
'ModelPackageGroupName': mpg_name,
'ModelPackageDescription': 'XGBoost classifier to detect insurance fraud.',
'ModelApprovalStatus': 'PendingManualApproval',
'ModelMetrics': model_metrics
}
mp_input_dict.update(mp_inference_spec)
mp1_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
###Output
_____no_output_____
###Markdown
Wait until model package is completed
###Code
mp_info = sagemaker_boto_client.describe_model_package(ModelPackageName=mp1_response['ModelPackageArn'])
mp_status = mp_info['ModelPackageStatus']
while mp_status not in ['Completed', 'Failed']:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(ModelPackageName=mp1_response['ModelPackageArn'])
mp_status = mp_info['ModelPackageStatus']
print(f'model package status: {mp_status}')
print(f'model package status: {mp_status}')
###Output
_____no_output_____
###Markdown
View model package in registry
###Code
sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)['ModelPackageSummaryList']
###Output
_____no_output_____
###Markdown
Part 2: Train, Check Bias, Tune, Record Lineage, and Register a Model [Overview](./0-AutoClaimFraudDetection.ipynb)* [Notebook 0 : Overview, Architecture and Data Exploration](./0-AutoClaimFraudDetection.ipynb)* [Notebook 1: Data Prep, Process, Store Features](./1-data-prep-e2e.ipynb)* **[Notebook 2: Train, Check Bias, Tune, Record Lineage, and Register a Model](./2-lineage-train-assess-bias-tune-registry-e2e.ipynb)** * **[Architecture](train)** * **[Train a model using XGBoost](aud-train-model)** * **[Model lineage with artifacts and associations](model-lineage)** * **[Evaluate the model for bias with Clarify](check-bias)** * **[Deposit Model and Lineage in SageMaker Model Registry](model-registry)*** [Notebook 3: Mitigate Bias, Train New Model, Store in Registry](./3-mitigate-bias-train-model2-registry-e2e.ipynb)* [Notebook 4: Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)* [Notebook 5 : Create and Run an End-to-End Pipeline to Deploy the Model](./5-pipeline-e2e.ipynb) In this section we will show how you can assess pre-training and post-training bias with SageMaker Clarify, Train the Model using XGBoost on SageMaker, and then finally deposit it in the Model Registry, along with the Lineage of Artifacts that were created along the way: data, code and model metadata.In this second model, you will fix the gender imbalance in the dataset using SMOTE and train another model using XGBoost. This model will also be saved to our registry and eventually approved for deployment. Architecture for the ML Lifecycle Stage: Train, Check Bias, Tune, Record Lineage, Register Model[overview](overview)----![train-assess-tune-register](./images/e2e-2-pipeline-v3b.png) Install required and/or update libraries
###Code
!python -m pip install -Uq pip
!python -m pip install -q awswrangler==2.2.0 imbalanced-learn==0.7.0 sagemaker==2.41.0 boto3==1.17.70
###Output
_____no_output_____
###Markdown
To apply the update to the current kernel, run the following code to refresh the kernel.
###Code
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Load stored variablesRun the cell below to load any previously created variables. You should see a print-out of the existing variables. If you don't see anything, you may need to create them again, or it may be your first time running this notebook.
###Code
%store -r
%store
###Output
_____no_output_____
###Markdown
**Important: You must have run the previous sequential notebooks to retrieve variables using the StoreMagic command.** Import libraries
###Code
import json
import time
import boto3
import sagemaker
import numpy as np
import pandas as pd
import awswrangler as wr
from sagemaker.xgboost.estimator import XGBoost
from model_package_src.inference_specification import InferenceSpecification
###Output
_____no_output_____
###Markdown
Set region, boto3 and SageMaker SDK variables
###Code
# You can change this to a region of your choice
import sagemaker
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)
sagemaker_role = sagemaker.get_execution_role()
account_id = boto3.client("sts").get_caller_identity()["Account"]
# variables used for parameterizing the notebook run
estimator_output_path = f"s3://{bucket}/{prefix}/training_jobs"
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
bias_report_1_output_path = f"s3://{bucket}/{prefix}/clarify-output/bias_1"
xgb_model_name = "xgb-insurance-claims-fraud-model"
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
predictor_instance_count = 1
predictor_instance_type = "ml.c5.xlarge"
batch_transform_instance_count = 1
batch_transform_instance_type = "ml.c5.xlarge"
claify_instance_count = 1
clairfy_instance_type = "ml.c5.xlarge"
###Output
_____no_output_____
###Markdown
Train a model using XGBoost[overview](overview)----Once the training and test datasets have been persisted in S3, you can start training a model by defining which SageMaker Estimator you'd like to use. For this guide, you will use the [XGBoost Open Source Framework](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html) to train your model. This estimator is accessed via the SageMaker SDK, but mirrors the open source version of the [XGBoost Python package](https://xgboost.readthedocs.io/en/latest/python/index.html). Any functionality provided by the XGBoost Python package can be implemented in your training script. Set the hyperparametersThese are the parameters which will be sent to our training script in order to train the model. Although they are all defined as "hyperparameters" here, they can encompass XGBoost's [Learning Task Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmllearning-task-parameters), [Tree Booster Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmlparameters-for-tree-booster), or any other parameters you'd like to configure for XGBoost.
###Code
hyperparameters = {
"max_depth": "3",
"eta": "0.2",
"objective": "binary:logistic",
"num_round": "100",
}
%store hyperparameters
###Output
_____no_output_____
###Markdown
Create and fit the estimatorIf you want to explore the breadth of functionality offered by the SageMaker XGBoost Framework, you can read about all the configuration parameters by referencing the inheriting classes. The XGBoost class inherits from the Framework class and Framework inherits from the EstimatorBase class:* [XGBoost Estimator documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.htmlsagemaker.xgboost.estimator.XGBoost)* [Framework documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.Framework)* [EstimatorBase documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.EstimatorBase)
###Code
xgb_estimator = XGBoost(
entry_point="xgboost_starter_script.py",
output_path=estimator_output_path,
code_location=estimator_output_path,
hyperparameters=hyperparameters,
role=sagemaker_role,
instance_count=train_instance_count,
instance_type=train_instance_type,
framework_version="1.0-1",
)
if 'training_job_1_name' not in locals():
xgb_estimator.fit(inputs = {'train': train_data_uri})
training_job_1_name = xgb_estimator.latest_training_job.job_name
%store training_job_1_name
else:
print(f'Using previous training job: {training_job_1_name}')
###Output
_____no_output_____
###Markdown
Model lineage with artifacts and associations[Overview](aud-overview)----Amazon SageMaker ML Lineage Tracking creates and stores information about the steps of a machine learning (ML) workflow from data preparation to model deployment. With the tracking information you can reproduce the workflow steps, track model and dataset lineage, and establish model governance and audit standards. With SageMaker Lineage Tracking data scientists and model builders can do the following:* Keep a running history of model discovery experiments.* Establish model governance by tracking model lineage artifacts for auditing and compliance verification.* Clone and rerun workflows to experiment with what-if scenarios while developing models.* Share a workflow that colleagues can reproduce and enhance (for example, while collaborating on solving a business problem).* Clone and rerun workflows with additional debugging or logging routines, or new input variations for troubleshooting issues in production models. Register artifacts Although the `xgb_estimator` object retains much of the data we need to learn about how the model was trained, it is, in fact, an ephemeral object which SageMaker does not persist and cannot be re-instantiated at a later time. Although we lose some of its conveniences once it is gone, we can still get back all the data we need by accessing the training jobs it once created.
###Code
training_job_1_info = sagemaker_boto_client.describe_training_job(
TrainingJobName=training_job_1_name
)
###Output
_____no_output_____
###Markdown
Code artifact
###Code
# return any existing artifact which matches our training job's code URI:
# extract the training code URI and check if it's an existing artifact
code_s3_uri = training_job_1_info["HyperParameters"]["sagemaker_submit_directory"]
matching_artifacts = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=code_s3_uri, sagemaker_session=sagemaker_session
)
)
# use the existing artifact if it's already been created, otherwise create a new artifact
if matching_artifacts:
code_artifact = matching_artifacts[0]
print(f"Using existing artifact: {code_artifact.artifact_arn}")
else:
code_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainingScript",
source_uri=code_s3_uri,
artifact_type="Code",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {code_artifact.artifact_arn}: SUCCESSFUL")
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_1_info["InputDataConfig"][0]["DataSource"]["S3DataSource"][
"S3Uri"
]
matching_artifacts = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=training_data_s3_uri, sagemaker_session=sagemaker_session
)
)
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f"Using existing artifact: {training_data_artifact.artifact_arn}")
else:
training_data_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainingData",
source_uri=training_data_s3_uri,
artifact_type="Dataset",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL")
###Output
_____no_output_____
###Markdown
Model artifact
###Code
trained_model_s3_uri = training_job_1_info["ModelArtifacts"]["S3ModelArtifacts"]
matching_artifacts = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=trained_model_s3_uri, sagemaker_session=sagemaker_session
)
)
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f"Using existing artifact: {model_artifact.artifact_arn}")
else:
model_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainedModel",
source_uri=trained_model_s3_uri,
artifact_type="Model",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {model_artifact.artifact_arn}: SUCCESSFUL")
###Output
_____no_output_____
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(
TrialComponentName=training_job_1_name + "-aws-training-job"
)
trial_component_arn = trial_component["TrialComponentArn"]
###Output
_____no_output_____
###Markdown
Input artifacts
###Code
input_artifacts = [code_artifact, training_data_artifact]
for a in input_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type="ContributedTo",
sagemaker_session=sagemaker_session,
)
        print(f"Association with {a.artifact_type}: SUCCESSFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Output artifacts
###Code
output_artifacts = [model_artifact]
for a in output_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type="Produced",
sagemaker_session=sagemaker_session,
)
print(f"Association with {a.artifact_type}: SUCCESSFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
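###Markdown
As a quick sanity check on the lineage recorded above, the optional sketch below lists the associations attached to the trial component using the boto3 `list_associations` call. The fields printed are based on the documented response shape for that API; adjust as needed for your account.
###Code
# optional: verify the lineage associations created above
response = sagemaker_boto_client.list_associations(DestinationArn=trial_component_arn)
for assoc in response["AssociationSummaries"]:
    print(f"{assoc.get('AssociationType')}: {assoc.get('SourceArn')}")
###Output
_____no_output_____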
###Markdown
Evaluate model for bias with Clarify[overview](aud-overview)----Amazon SageMaker Clarify helps improve your machine learning (ML) models by detecting potential bias and helping explain the predictions that models make. It helps you identify various types of bias in pretraining data and in posttraining that can emerge during model training or when the model is in production. SageMaker Clarify helps explain how these models make predictions using a feature attribution approach. It also monitors inferences models make in production for bias or feature attribution drift. The fairness and explainability functionality provided by SageMaker Clarify provides components that help AWS customers build less biased and more understandable machine learning models. It also provides tools to help you generate model governance reports which you can use to inform risk and compliance teams, and external regulators. You can reference the [SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-fairness-and-explainability.html) for more information about SageMaker Clarify. Create model from estimator
###Code
model_1_name = f"{prefix}-xgboost-pre-smote"
%store model_1_name
model_matches = sagemaker_boto_client.list_models(NameContains=model_1_name)["Models"]
if not model_matches:
model_1 = sagemaker_session.create_model_from_job(
name=model_1_name,
training_job_name=training_job_1_info["TrainingJobName"],
role=sagemaker_role,
image_uri=training_job_1_info["AlgorithmSpecification"]["TrainingImage"],
)
else:
print(f"Model {model_1_name} already exists.")
###Output
_____no_output_____
###Markdown
Check for data set bias and model biasWith SageMaker, we can check for pre-training and post-training bias. Pre-training metrics show pre-existing bias in the data, while post-training metrics show bias in the predictions from the model. Using the SageMaker SDK, we can specify which groups we want to check bias across and which metrics we'd like to show. To run the full Clarify job, you must un-comment the code in the cell below. Running the job will take ~15 minutes. If you wish to save time, you can view the results in the cell that follows, which loads a pre-generated output if no bias job was run.
###Code
train_cols = wr.s3.read_csv(training_data_s3_uri).columns.to_list()
clarify_processor = sagemaker.clarify.SageMakerClarifyProcessor(
role=sagemaker_role,
instance_count=1,
instance_type="ml.c4.xlarge",
sagemaker_session=sagemaker_session,
)
bias_data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_uri,
s3_output_path=bias_report_1_output_path,
label="fraud",
headers=train_cols,
dataset_type="text/csv",
)
model_config = sagemaker.clarify.ModelConfig(
model_name=model_1_name,
instance_type=train_instance_type,
instance_count=1,
accept_type="text/csv",
)
predictions_config = sagemaker.clarify.ModelPredictedLabelConfig(probability_threshold=0.5)
bias_config = sagemaker.clarify.BiasConfig(
label_values_or_threshold=[0],
facet_name="customer_gender_female",
facet_values_or_threshold=[1],
)
# un-comment the code below to run the whole job
# if 'clarify_bias_job_1_name' not in locals():
# clarify_processor.run_bias(
# data_config=bias_data_config,
# bias_config=bias_config,
# model_config=model_config,
# model_predicted_label_config=predictions_config,
# pre_training_methods='all',
# post_training_methods='all')
# clarify_bias_job_1_name = clarify_processor.latest_job.name
# %store clarify_bias_job_1_name
# else:
#     print(f'Clarify job {clarify_bias_job_1_name} has already run successfully.')
###Output
_____no_output_____
###Markdown
Results will be stored in `/opt/ml/processing/output/report.pdf`Training a model to over 90 percent classification accuracy may be easily achievable on an imbalanced classification problem. Thus, expectations about classification accuracy that are in reality contingent on a balanced class distribution will lead to wrong, misleading assumptions and conclusions: they mislead the data scientist and viewers into believing that a model performs extremely well when, in fact, it does not. View results of Clarify job (shortcut)Running Clarify on your dataset or model can take ~15 minutes. If you don't have time to run the job, you can view the pre-generated results included with this demo. Otherwise, you can run the job by un-commenting the code in the cell above.
###Code
if "clarify_bias_job_1_name" in locals():
s3_client.download_file(
Bucket=bucket,
Key=f"{prefix}/clarify-output/bias_1/analysis.json",
Filename="clarify_output/bias_1/analysis.json",
)
print(f"Downloaded analysis from previous Clarify job: {clarify_bias_job_1_name}")
else:
print(f"Loading pre-generated analysis file...")
with open("clarify_output/bias_1/analysis.json", "r") as f:
bias_analysis = json.load(f)
results = bias_analysis["pre_training_bias_metrics"]["facets"]["customer_gender_female"][0][
"metrics"
][1]
print(json.dumps(results, indent=4))
###Output
_____no_output_____
###Markdown
In this example dataset, the data is biased against females, with only 38.9% of the data samples coming from female customers. We will address this in the next notebook, where we show how we mitigate this class imbalance bias. Although we are only addressing Class Imbalance as an exemplar of bias statistics, you can also take many other facets of bias into consideration. For more detail, see [Fairness Measures for Machine Learning in Finance](https://pages.awscloud.com/rs/112-TZM-766/images/Fairness.Measures.for.Machine.Learning.in.Finance.pdf). For a more detailed example, look at [this](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.ipynb) GitHub example. For more detailed results, let's look at the generated report, which can be found at `s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf`
###Code
# uncomment to copy report and view
#!aws s3 cp s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf ./clarify_output
###Output
_____no_output_____
###Markdown
Deposit Model and Lineage in SageMaker Model Registry[overview](aud-overview)----Once a useful model has been trained and its artifacts properly associated, the next step is to save the model in a registry for future reference and possible deployment. Create Model Package GroupA Model Package Group holds multiple versions or iterations of a model. Though it is not required to create one for every model in the registry, Model Package Groups help organize various models which all have the same purpose and provide automatic versioning.
###Code
if 'mpg_name' not in locals():
mpg_name = prefix
%store mpg_name
print(f'Model Package Group name: {mpg_name}')
mpg_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageGroupDescription": "Insurance claim fraud detection",
}
matching_mpg = sagemaker_boto_client.list_model_package_groups(NameContains=mpg_name)['ModelPackageGroupSummaryList']
if matching_mpg:
print(f'Using existing Model Package Group: {mpg_name}')
else:
mpg_response = sagemaker_boto_client.create_model_package_group(**mpg_input_dict)
print(f'Create Model Package Group {mpg_name}: SUCCESSFUL')
%store mpg_name
###Output
_____no_output_____
###Markdown
Create Model Package for trained model Create and upload a metrics report
###Code
model_metrics_report = {"binary_classification_metrics": {}}
for metric in training_job_1_info["FinalMetricDataList"]:
stat = {metric["MetricName"]: {"value": metric["Value"], "standard_deviation": "NaN"}}
model_metrics_report["binary_classification_metrics"].update(stat)
with open("training_metrics.json", "w") as f:
json.dump(model_metrics_report, f)
metrics_s3_key = (
f"{prefix}/training_jobs/{training_job_1_info['TrainingJobName']}/training_metrics.json"
)
s3_client.upload_file(Filename="training_metrics.json", Bucket=bucket, Key=metrics_s3_key)
###Output
_____no_output_____
###Markdown
Define the inference spec
###Code
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_1_info["AlgorithmSpecification"]["TrainingImage"],
supports_gpu=False,
supported_content_types=["text/csv"],
supported_mime_types=["text/csv"],
)
mp_inference_spec["InferenceSpecification"]["Containers"][0]["ModelDataUrl"] = training_job_1_info[
"ModelArtifacts"
]["S3ModelArtifacts"]
###Output
_____no_output_____
###Markdown
Define model metricsMetrics other than model quality and bias can be defined. See the Boto3 documentation for [creating a model package](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.htmlSageMaker.Client.create_model_package).
###Code
model_metrics = {
"ModelQuality": {
"Statistics": {
"ContentType": "application/json",
"S3Uri": f"s3://{bucket}/{metrics_s3_key}",
}
},
"Bias": {
"Report": {
"ContentType": "application/json",
"S3Uri": f"{bias_report_1_output_path}/analysis.json",
}
},
}
mp_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageDescription": "XGBoost classifier to detect insurance fraud.",
"ModelApprovalStatus": "PendingManualApproval",
"ModelMetrics": model_metrics,
}
mp_input_dict.update(mp_inference_spec)
mp1_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
###Output
_____no_output_____
###Markdown
Wait until model package is completed
###Code
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp1_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
while mp_status not in ["Completed", "Failed"]:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp1_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
print(f"model package status: {mp_status}")
print(f"model package status: {mp_status}")
###Output
_____no_output_____
###Markdown
View model package in registry
###Code
sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)["ModelPackageSummaryList"]
###Output
_____no_output_____
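###Markdown
The package above was registered with `ModelApprovalStatus` set to `PendingManualApproval`. Once a reviewer has inspected the metrics and bias report, the status can be updated so the package becomes eligible for deployment in a later notebook. The commented sketch below shows one way to do this with the boto3 `update_model_package` call; it is illustrative only and is not run here.
###Code
# sketch only: approve the package after manual review (not executed in this notebook)
# sagemaker_boto_client.update_model_package(
#     ModelPackageArn=mp1_response['ModelPackageArn'],
#     ModelApprovalStatus='Approved',
#     ApprovalDescription='Metrics and bias report reviewed.')
###Output
_____no_output_____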
###Markdown
Part 2: Train, Check Bias, Tune, Record Lineage, and Register a Model [Overview](./0-AutoClaimFraudDetection.ipynb)* [Notebook 0 : Overview, Architecture and Data Exploration](./0-AutoClaimFraudDetection.ipynb)* [Notebook 1: Data Prep, Process, Store Features](./1-data-prep-e2e.ipynb)* **[Notebook 2: Train, Check Bias, Tune, Record Lineage, and Register a Model](./2-lineage-train-assess-bias-tune-registry-e2e.ipynb)** * **[Architecture](train)** * **[Train a model using XGBoost](aud-train-model)** * **[Model lineage with artifacts and associations](model-lineage)** * **[Evaluate the model for bias with Clarify](check-bias)** * **[Deposit Model and Lineage in SageMaker Model Registry](model-registry)*** [Notebook 3: Mitigate Bias, Train New Model, Store in Registry](./3-mitigate-bias-train-model2-registry-e2e.ipynb)* [Notebook 4: Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)* [Notebook 5 : Create and Run an End-to-End Pipeline to Deploy the Model](./5-pipeline-e2e.ipynb) In this section we will show how you can assess pre-training and post-training bias with SageMaker Clarify, Train the Model using XGBoost on SageMaker, and then finally deposit it in the Model Registry, along with the Lineage of Artifacts that were created along the way: data, code and model metadata.In this second model, you will fix the gender imbalance in the dataset using SMOTE and train another model using XGBoost. This model will also be saved to our registry and eventually approved for deployment. Architecture for the ML Lifecycle Stage: Train, Check Bias, Tune, Record Lineage, Register Model[overview](overview)----![train-assess-tune-register](./images/e2e-2-pipeline-v3b.png) Install required and/or update libraries
###Code
!python -m pip install -Uq pip
!python -m pip install -q awswrangler==2.2.0 imbalanced-learn==0.7.0 sagemaker==2.23.1 boto3==1.16.48
###Output
_____no_output_____
###Markdown
To apply the update to the current kernel, run the following code to refresh the kernel.
###Code
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Load stored variablesRun the cell below to load any previously created variables. You should see a print-out of the existing variables. If you don't see anything, you may need to create them again, or it may be your first time running this notebook.
###Code
%store -r
%store
###Output
_____no_output_____
###Markdown
**Important: You must have run the previous sequential notebooks to retrieve variables using the StoreMagic command.** Import libraries
###Code
import json
import time
import boto3
import sagemaker
import numpy as np
import pandas as pd
import awswrangler as wr
from sagemaker.xgboost.estimator import XGBoost
from model_package_src.inference_specification import InferenceSpecification
###Output
_____no_output_____
###Markdown
Set region, boto3 and SageMaker SDK variables
###Code
# You can change this to a region of your choice
import sagemaker
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)
sagemaker_role = sagemaker.get_execution_role()
account_id = boto3.client("sts").get_caller_identity()["Account"]
# variables used for parameterizing the notebook run
estimator_output_path = f"s3://{bucket}/{prefix}/training_jobs"
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
bias_report_1_output_path = f"s3://{bucket}/{prefix}/clarify-output/bias_1"
xgb_model_name = "xgb-insurance-claims-fraud-model"
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
predictor_instance_count = 1
predictor_instance_type = "ml.c5.xlarge"
batch_transform_instance_count = 1
batch_transform_instance_type = "ml.c5.xlarge"
claify_instance_count = 1
clairfy_instance_type = "ml.c5.xlarge"
###Output
_____no_output_____
###Markdown
Train a model using XGBoost[overview](overview)----Once the training and test datasets have been persisted in S3, you can start training a model by defining which SageMaker Estimator you'd like to use. For this guide, you will use the [XGBoost Open Source Framework](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html) to train your model. This estimator is accessed via the SageMaker SDK, but mirrors the open source version of the [XGBoost Python package](https://xgboost.readthedocs.io/en/latest/python/index.html). Any functionality provided by the XGBoost Python package can be implemented in your training script. Set the hyperparametersThese are the parameters which will be sent to our training script in order to train the model. Although they are all defined as "hyperparameters" here, they can encompass XGBoost's [Learning Task Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmllearning-task-parameters), [Tree Booster Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmlparameters-for-tree-booster), or any other parameters you'd like to configure for XGBoost.
###Code
hyperparameters = {
"max_depth": "3",
"eta": "0.2",
"objective": "binary:logistic",
"num_round": "100",
}
%store hyperparameters
###Output
_____no_output_____
###Markdown
Create and fit the estimatorIf you want to explore the breadth of functionality offered by the SageMaker XGBoost Framework, you can read about all the configuration parameters by referencing the inheriting classes. The XGBoost class inherits from the Framework class and Framework inherits from the EstimatorBase class:* [XGBoost Estimator documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.htmlsagemaker.xgboost.estimator.XGBoost)* [Framework documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.Framework)* [EstimatorBase documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.EstimatorBase)
###Code
xgb_estimator = XGBoost(
entry_point="xgboost_starter_script.py",
output_path=estimator_output_path,
code_location=estimator_output_path,
hyperparameters=hyperparameters,
role=sagemaker_role,
instance_count=train_instance_count,
instance_type=train_instance_type,
framework_version="1.0-1",
)
if 'training_job_1_name' not in locals():
xgb_estimator.fit(inputs = {'train': train_data_uri})
training_job_1_name = xgb_estimator.latest_training_job.job_name
%store training_job_1_name
else:
print(f'Using previous training job: {training_job_1_name}')
###Output
_____no_output_____
###Markdown
Model lineage with artifacts and associations[Overview](aud-overview)----Amazon SageMaker ML Lineage Tracking creates and stores information about the steps of a machine learning (ML) workflow from data preparation to model deployment. With the tracking information you can reproduce the workflow steps, track model and dataset lineage, and establish model governance and audit standards. With SageMaker Lineage Tracking data scientists and model builders can do the following:* Keep a running history of model discovery experiments.* Establish model governance by tracking model lineage artifacts for auditing and compliance verification.* Clone and rerun workflows to experiment with what-if scenarios while developing models.* Share a workflow that colleagues can reproduce and enhance (for example, while collaborating on solving a business problem).* Clone and rerun workflows with additional debugging or logging routines, or new input variations for troubleshooting issues in production models. Register artifacts Although the `xgb_estimator` object retains much of the data we need to learn about how the model was trained, it is, in fact, an ephemeral object which SageMaker does not persist and cannot be re-instantiated at a later time. Although we lose some of its conveniences once it is gone, we can still get back all the data we need by accessing the training jobs it once created.
###Code
training_job_1_info = sagemaker_boto_client.describe_training_job(
TrainingJobName=training_job_1_name
)
###Output
_____no_output_____
###Markdown
Code artifact
###Code
# return any existing artifact which matches our training job's code URI:
# extract the training code URI and check if it's an existing artifact
code_s3_uri = training_job_1_info["HyperParameters"]["sagemaker_submit_directory"]
matching_artifacts = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=code_s3_uri, sagemaker_session=sagemaker_session
)
)
# use the existing artifact if it's already been created, otherwise create a new artifact
if matching_artifacts:
code_artifact = matching_artifacts[0]
print(f"Using existing artifact: {code_artifact.artifact_arn}")
else:
code_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainingScript",
source_uri=code_s3_uri,
artifact_type="Code",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {code_artifact.artifact_arn}: SUCCESSFUL")
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_1_info["InputDataConfig"][0]["DataSource"]["S3DataSource"][
"S3Uri"
]
matching_artifacts = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=training_data_s3_uri, sagemaker_session=sagemaker_session
)
)
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f"Using existing artifact: {training_data_artifact.artifact_arn}")
else:
training_data_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainingData",
source_uri=training_data_s3_uri,
artifact_type="Dataset",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL")
###Output
_____no_output_____
###Markdown
Model artifact
###Code
trained_model_s3_uri = training_job_1_info["ModelArtifacts"]["S3ModelArtifacts"]
matching_artifacts = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=trained_model_s3_uri, sagemaker_session=sagemaker_session
)
)
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f"Using existing artifact: {model_artifact.artifact_arn}")
else:
model_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainedModel",
source_uri=trained_model_s3_uri,
artifact_type="Model",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {model_artifact.artifact_arn}: SUCCESSFUL")
###Output
_____no_output_____
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(
TrialComponentName=training_job_1_name + "-aws-training-job"
)
trial_component_arn = trial_component["TrialComponentArn"]
###Output
_____no_output_____
###Markdown
Input artifacts
###Code
input_artifacts = [code_artifact, training_data_artifact]
for a in input_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type="ContributedTo",
sagemaker_session=sagemaker_session,
)
        print(f"Association with {a.artifact_type}: SUCCESSFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Output artifacts
###Code
output_artifacts = [model_artifact]
for a in output_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type="Produced",
sagemaker_session=sagemaker_session,
)
print(f"Association with {a.artifact_type}: SUCCESSFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Evaluate model for bias with Clarify[overview](aud-overview)----Amazon SageMaker Clarify helps improve your machine learning (ML) models by detecting potential bias and helping explain the predictions that models make. It helps you identify various types of bias in pretraining data and in posttraining that can emerge during model training or when the model is in production. SageMaker Clarify helps explain how these models make predictions using a feature attribution approach. It also monitors inferences models make in production for bias or feature attribution drift. The fairness and explainability functionality provided by SageMaker Clarify provides components that help AWS customers build less biased and more understandable machine learning models. It also provides tools to help you generate model governance reports which you can use to inform risk and compliance teams, and external regulators. You can reference the [SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-fairness-and-explainability.html) for more information about SageMaker Clarify. Create model from estimator
###Code
model_1_name = f"{prefix}-xgboost-pre-smote"
%store model_1_name
model_matches = sagemaker_boto_client.list_models(NameContains=model_1_name)["Models"]
if not model_matches:
model_1 = sagemaker_session.create_model_from_job(
name=model_1_name,
training_job_name=training_job_1_info["TrainingJobName"],
role=sagemaker_role,
image_uri=training_job_1_info["AlgorithmSpecification"]["TrainingImage"],
)
else:
print(f"Model {model_1_name} already exists.")
###Output
_____no_output_____
###Markdown
Check for data set bias and model biasWith SageMaker, we can check for pre-training and post-training bias. Pre-training metrics show pre-existing bias in the data, while post-training metrics show bias in the predictions from the model. Using the SageMaker SDK, we can specify which groups we want to check bias across and which metrics we'd like to show. To run the full Clarify job, you must un-comment the code in the cell below. Running the job will take ~15 minutes. If you wish to save time, you can view the results in the cell that follows, which loads a pre-generated output if no bias job was run.
###Code
train_cols = wr.s3.read_csv(training_data_s3_uri).columns.to_list()
clarify_processor = sagemaker.clarify.SageMakerClarifyProcessor(
role=sagemaker_role,
instance_count=1,
instance_type="ml.c4.xlarge",
sagemaker_session=sagemaker_session,
)
bias_data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_uri,
s3_output_path=bias_report_1_output_path,
label="fraud",
headers=train_cols,
dataset_type="text/csv",
)
model_config = sagemaker.clarify.ModelConfig(
model_name=model_1_name,
instance_type=train_instance_type,
instance_count=1,
accept_type="text/csv",
)
predictions_config = sagemaker.clarify.ModelPredictedLabelConfig(probability_threshold=0.5)
bias_config = sagemaker.clarify.BiasConfig(
label_values_or_threshold=[0],
facet_name="customer_gender_female",
facet_values_or_threshold=[1],
)
# un-comment the code below to run the whole job
# if 'clarify_bias_job_1_name' not in locals():
# clarify_processor.run_bias(
# data_config=bias_data_config,
# bias_config=bias_config,
# model_config=model_config,
# model_predicted_label_config=predictions_config,
# pre_training_methods='all',
# post_training_methods='all')
# clarify_bias_job_1_name = clarify_processor.latest_job.name
# %store clarify_bias_job_1_name
# else:
#     print(f'Clarify job {clarify_bias_job_1_name} has already run successfully.')
###Output
_____no_output_____
###Markdown
Results will be stored in `/opt/ml/processing/output/report.pdf`Training a model to over 90 percent classification accuracy may be easily achievable on an imbalanced classification problem. Thus, expectations about classification accuracy that are in reality contingent on a balanced class distribution will lead to wrong, misleading assumptions and conclusions: they mislead the data scientist and viewers into believing that a model performs extremely well when, in fact, it does not. View results of Clarify job (shortcut)Running Clarify on your dataset or model can take ~15 minutes. If you don't have time to run the job, you can view the pre-generated results included with this demo. Otherwise, you can run the job by un-commenting the code in the cell above.
###Code
if "clarify_bias_job_1_name" in locals():
    s3_client.download_file(
        Bucket=bucket,
        Key=f"{prefix}/clarify-output/bias_1/analysis.json",
        Filename="clarify_output/bias_1/analysis.json",
    )
    print(f"Downloaded analysis from previous Clarify job: {clarify_bias_job_1_name}")
else:
print(f"Loading pre-generated analysis file...")
with open("clarify_output/bias_1/analysis.json", "r") as f:
bias_analysis = json.load(f)
results = bias_analysis["pre_training_bias_metrics"]["facets"]["customer_gender_female"][0][
"metrics"
][1]
print(json.dumps(results, indent=4))
###Output
_____no_output_____
###Markdown
In this example dataset, the data is biased against females, with only 38.9% of the data samples coming from female customers. We will address this in the next notebook, where we show how we mitigate this class imbalance bias. Although we are only addressing Class Imbalance as an exemplar of bias statistics, you can also take many other facets of bias into consideration. For more detail, see [Fairness Measures for Machine Learning in Finance](https://pages.awscloud.com/rs/112-TZM-766/images/Fairness.Measures.for.Machine.Learning.in.Finance.pdf). For a more detailed example, look at [this](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.ipynb) GitHub example. For more detailed results, let's look at the generated report, which can be found at `s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf`
###Code
# uncomment to copy report and view
#!aws s3 cp s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf ./clarify_output
###Output
_____no_output_____
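###Markdown
The 38.9% figure above corresponds to the Class Imbalance (CI) metric in the analysis output. As a minimal sketch, assuming the standard Clarify definition CI = (n_a - n_d) / (n_a + n_d), with the female facet as the disadvantaged group d, the reported value can be reproduced by hand:
###Code
# back-of-the-envelope check of the Class Imbalance metric (hypothetical sample count)
n_total = 1000
n_female = int(0.389 * n_total)   # disadvantaged facet: customer_gender_female == 1
n_other = n_total - n_female      # advantaged facet
class_imbalance = (n_other - n_female) / (n_other + n_female)
print(f"Class Imbalance (CI): {class_imbalance:.3f}")  # ~0.222
###Output
_____no_output_____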
###Markdown
Deposit Model and Lineage in SageMaker Model Registry[overview](aud-overview)----Once a useful model has been trained and its artifacts properly associated, the next step is to save the model in a registry for future reference and possible deployment. Create Model Package GroupA Model Package Group holds multiple versions or iterations of a model. Though it is not required to create one for every model in the registry, Model Package Groups help organize various models which all have the same purpose and provide automatic versioning.
###Code
if 'mpg_name' not in locals():
mpg_name = prefix
%store mpg_name
print(f'Model Package Group name: {mpg_name}')
mpg_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageGroupDescription": "Insurance claim fraud detection",
}
matching_mpg = sagemaker_boto_client.list_model_package_groups(NameContains=mpg_name)['ModelPackageGroupSummaryList']
if matching_mpg:
print(f'Using existing Model Package Group: {mpg_name}')
else:
mpg_response = sagemaker_boto_client.create_model_package_group(**mpg_input_dict)
print(f'Create Model Package Group {mpg_name}: SUCCESSFUL')
%store mpg_name
###Output
_____no_output_____
###Markdown
Create Model Package for trained model Create and upload a metrics report
###Code
model_metrics_report = {"classification_metrics": {}}
for metric in training_job_1_info["FinalMetricDataList"]:
stat = {metric["MetricName"]: {"value": metric["Value"]}}
model_metrics_report["classification_metrics"].update(stat)
with open("training_metrics.json", "w") as f:
json.dump(model_metrics_report, f)
metrics_s3_key = (
f"{prefix}/training_jobs/{training_job_1_info['TrainingJobName']}/training_metrics.json"
)
s3_client.upload_file(Filename="training_metrics.json", Bucket=bucket, Key=metrics_s3_key)
###Output
_____no_output_____
###Markdown
Define the inference spec
###Code
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_1_info["AlgorithmSpecification"]["TrainingImage"],
supports_gpu=False,
supported_content_types=["text/csv"],
supported_mime_types=["text/csv"],
)
mp_inference_spec["InferenceSpecification"]["Containers"][0]["ModelDataUrl"] = training_job_1_info[
"ModelArtifacts"
]["S3ModelArtifacts"]
###Output
_____no_output_____
###Markdown
Define model metricsMetrics other than model quality and bias can be defined. See the Boto3 documentation for [creating a model package](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.htmlSageMaker.Client.create_model_package).
###Code
model_metrics = {
"ModelQuality": {
"Statistics": {
"ContentType": "application/json",
"S3Uri": f"s3://{bucket}/{prefix}/{metrics_s3_key}",
}
},
"Bias": {
"Report": {
"ContentType": "application/json",
"S3Uri": f"{bias_report_1_output_path}/analysis.json",
}
},
}
mp_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageDescription": "XGBoost classifier to detect insurance fraud.",
"ModelApprovalStatus": "PendingManualApproval",
"ModelMetrics": model_metrics,
}
mp_input_dict.update(mp_inference_spec)
mp1_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
###Output
_____no_output_____
###Markdown
Wait until model package is completed
###Code
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp1_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
while mp_status not in ["Completed", "Failed"]:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp1_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
print(f"model package status: {mp_status}")
print(f"model package status: {mp_status}")
###Output
_____no_output_____
###Markdown
View model package in registry
###Code
sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)["ModelPackageSummaryList"]
###Output
_____no_output_____
###Markdown
Part 2: Train, Check Bias, Tune, Record Lineage, and Register a Model [Overview](./0-AutoClaimFraudDetection.ipynb)* [Notebook 0 : Overview, Architecture and Data Exploration](./0-AutoClaimFraudDetection.ipynb)* [Notebook 1: Data Prep, Process, Store Features](./1-data-prep-e2e.ipynb)* **[Notebook 2: Train, Check Bias, Tune, Record Lineage, and Register a Model](./2-lineage-train-assess-bias-tune-registry-e2e.ipynb)** * **[Architecture](train)** * **[Train a model using XGBoost](aud-train-model)** * **[Model lineage with artifacts and associations](model-lineage)** * **[Evaluate the model for bias with Clarify](check-bias)** * **[Deposit Model and Lineage in SageMaker Model Registry](model-registry)*** [Notebook 3: Mitigate Bias, Train New Model, Store in Registry](./3-mitigate-bias-train-model2-registry-e2e.ipynb)* [Notebook 4: Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)* [Notebook 5 : Create and Run an End-to-End Pipeline to Deploy the Model](./5-pipeline-e2e.ipynb) In this section we will show how you can assess pre-training and post-training bias with SageMaker Clarify, Train the Model using XGBoost on SageMaker, and then finally deposit it in the Model Registry, along with the Lineage of Artifacts that were created along the way: data, code and model metadata.In this second model, you will fix the gender imbalance in the dataset using SMOTE and train another model using XGBoost. This model will also be saved to our registry and eventually approved for deployment. Architecture for the ML Lifecycle Stage: Train, Check Bias, Tune, Record Lineage, Register Model[overview](overview)___![train-assess-tune-register](./images/e2e-2-pipeline-v3b.png) Install required and/or update libraries
###Code
!python -m pip install -Uq pip
!python -m pip install -q awswrangler==2.2.0 imbalanced-learn==0.7.0 sagemaker==2.23.1 boto3==1.16.48
###Output
_____no_output_____
###Markdown
To apply the update to the current kernel, run the following code to refresh the kernel.
###Code
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Load stored variablesRun the cell below to load any previously created variables. You should see a print-out of the existing variables. If you don't see anything, you may need to create them again, or it may be your first time running this notebook.
###Code
%store -r
%store
###Output
_____no_output_____
###Markdown
**Important: You must have run the previous sequential notebooks to retrieve variables using the StoreMagic command.** Import libraries
###Code
import json
import time
import boto3
import sagemaker
import numpy as np
import pandas as pd
import awswrangler as wr
from sagemaker.xgboost.estimator import XGBoost
from model_package_src.inference_specification import InferenceSpecification
###Output
_____no_output_____
###Markdown
Set region, boto3 and SageMaker SDK variables
###Code
#You can change this to a region of your choice
import sagemaker
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client('s3', region_name=region)
sagemaker_boto_client = boto_session.client('sagemaker')
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session,
sagemaker_client=sagemaker_boto_client)
sagemaker_role = sagemaker.get_execution_role()
account_id = boto3.client('sts').get_caller_identity()["Account"]
# variables used for parameterizing the notebook run
estimator_output_path = f's3://{bucket}/{prefix}/training_jobs'
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
bias_report_1_output_path = f's3://{bucket}/{prefix}/clarify-output/bias_1'
xgb_model_name = 'xgb-insurance-claims-fraud-model'
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
predictor_instance_count = 1
predictor_instance_type = "ml.c5.xlarge"
batch_transform_instance_count = 1
batch_transform_instance_type = "ml.c5.xlarge"
claify_instance_count = 1
clairfy_instance_type = 'ml.c5.xlarge'
###Output
_____no_output_____
###Markdown
Train a model using XGBoost[overview](overview)___Once the training and test datasets have been persisted in S3, you can start training a model by defining which SageMaker Estimator you'd like to use. For this guide, you will use the [XGBoost Open Source Framework](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html) to train your model. This estimator is accessed via the SageMaker SDK, but mirrors the open source version of the [XGBoost Python package](https://xgboost.readthedocs.io/en/latest/python/index.html). Any functionality provided by the XGBoost Python package can be implemented in your training script. Set the hyperparametersThese are the parameters which will be sent to our training script in order to train the model. Although they are all defined as "hyperparameters" here, they can encompass XGBoost's [Learning Task Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmllearning-task-parameters), [Tree Booster Parameters](https://xgboost.readthedocs.io/en/latest/parameter.htmlparameters-for-tree-booster), or any other parameters you'd like to configure for XGBoost.
###Code
hyperparameters = {
"max_depth": "3",
"eta": "0.2",
"objective": "binary:logistic",
"num_round": "100",
}
%store hyperparameters
###Output
_____no_output_____
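###Markdown
These hyperparameters reach the training script as command-line arguments when the job runs in SageMaker script mode. The snippet below is a hypothetical sketch of how an entry point such as `xgboost_starter_script.py` might parse them (the actual script in this repository may differ); data and model locations arrive via SageMaker-provided environment variables.
###Code
# hypothetical sketch of argument parsing inside a script-mode entry point
import argparse
import os

def parse_args():
    parser = argparse.ArgumentParser()
    # hyperparameters are passed by SageMaker as command-line arguments
    parser.add_argument("--max_depth", type=int, default=3)
    parser.add_argument("--eta", type=float, default=0.2)
    parser.add_argument("--objective", type=str, default="binary:logistic")
    parser.add_argument("--num_round", type=int, default=100)
    # input data and model output locations come from SageMaker environment variables
    parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
    parser.add_argument("--model_dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    return parser.parse_known_args()[0]

print(parse_args())
###Output
_____no_output_____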
###Markdown
Create and fit the estimatorIf you want to explore the breadth of functionality offered by the SageMaker XGBoost Framework, you can read about all the configuration parameters by referencing the inheriting classes. The XGBoost class inherits from the Framework class and Framework inherits from the EstimatorBase class:* [XGBoost Estimator documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.htmlsagemaker.xgboost.estimator.XGBoost)* [Framework documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.Framework)* [EstimatorBase documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.EstimatorBase)
###Code
xgb_estimator = XGBoost(
entry_point = "xgboost_starter_script.py",
output_path = estimator_output_path,
code_location = estimator_output_path,
hyperparameters = hyperparameters,
role = sagemaker_role,
instance_count = train_instance_count,
instance_type = train_instance_type,
framework_version = "1.0-1")
if 'training_job_1_name' not in locals():
xgb_estimator.fit(inputs = {'train': train_data_uri})
training_job_1_name = xgb_estimator.latest_training_job.job_name
%store training_job_1_name
else:
print(f'Using previous training job: {training_job_1_name}')
###Output
_____no_output_____
###Markdown
Model lineage with artifacts and associations[Overview](aud-overview)___Amazon SageMaker ML Lineage Tracking creates and stores information about the steps of a machine learning (ML) workflow from data preparation to model deployment. With the tracking information you can reproduce the workflow steps, track model and dataset lineage, and establish model governance and audit standards. With SageMaker Lineage Tracking data scientists and model builders can do the following:* Keep a running history of model discovery experiments.* Establish model governance by tracking model lineage artifacts for auditing and compliance verification.* Clone and rerun workflows to experiment with what-if scenarios while developing models.* Share a workflow that colleagues can reproduce and enhance (for example, while collaborating on solving a business problem).* Clone and rerun workflows with additional debugging or logging routines, or new input variations for troubleshooting issues in production models. Register artifacts Although the `xgb_estimator` object retains much of the data we need to learn about how the model was trained, it is, in fact, an ephemeral object which SageMaker does not persist and cannot be re-instantiated at a later time. Although we lose some of its conveniences once it is gone, we can still get back all the data we need by accessing the training jobs it once created.
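As a side note (an added sketch, not part of the original workflow): if you do want an estimator object again later, one option the SDK offers is re-attaching to the completed training job by name instead of re-running it.

```python
# Sketch: re-create an estimator object from a finished training job.
# `training_job_1_name` is the job name stored earlier in this notebook.
from sagemaker.xgboost.estimator import XGBoost

attached_estimator = XGBoost.attach(
    training_job_name=training_job_1_name,
    sagemaker_session=sagemaker_session)
print(attached_estimator.model_data)  # S3 location of the trained model artifact
```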
###Code
training_job_1_info = sagemaker_boto_client.describe_training_job(TrainingJobName=training_job_1_name)
###Output
_____no_output_____
###Markdown
Code artifact
###Code
# Return any existing artifact which matches our training job's code ARN:
# extract the training code URI and check whether it is an existing artifact
code_s3_uri = training_job_1_info['HyperParameters']['sagemaker_submit_directory']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=code_s3_uri,
sagemaker_session=sagemaker_session))
# use the existing artifact if it has already been created, otherwise create a new artifact
if matching_artifacts:
code_artifact = matching_artifacts[0]
print(f'Using existing artifact: {code_artifact.artifact_arn}')
else:
code_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainingScript',
source_uri=code_s3_uri,
artifact_type='Code',
sagemaker_session=sagemaker_session)
print(f'Create artifact {code_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_1_info['InputDataConfig'][0]['DataSource']['S3DataSource']['S3Uri']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=training_data_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f'Using existing artifact: {training_data_artifact.artifact_arn}')
else:
training_data_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainingData',
source_uri=training_data_s3_uri,
artifact_type='Dataset',
sagemaker_session=sagemaker_session)
print(f'Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Model artifact
###Code
trained_model_s3_uri = training_job_1_info['ModelArtifacts']['S3ModelArtifacts']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=trained_model_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f'Using existing artifact: {model_artifact.artifact_arn}')
else:
model_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainedModel',
source_uri=trained_model_s3_uri,
artifact_type='Model',
sagemaker_session=sagemaker_session)
print(f'Create artifact {model_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(TrialComponentName=training_job_1_name+'-aws-training-job')
trial_component_arn = trial_component['TrialComponentArn']
###Output
_____no_output_____
###Markdown
Input artifacts
###Code
input_artifacts = [code_artifact, training_data_artifact]
for a in input_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type='ContributedTo',
sagemaker_session=sagemaker_session)
print(f"Association with {a.artifact_type}: SUCCEESFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Output artifacts
###Code
output_artifacts = [model_artifact]
for a in output_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type='Produced',
sagemaker_session=sagemaker_session)
print(f"Association with {a.artifact_type}: SUCCESSFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Evaluate model for bias with Clarify[overview](aud-overview)___Amazon SageMaker Clarify helps improve your machine learning (ML) models by detecting potential bias and helping explain the predictions that models make. It helps you identify various types of bias in pretraining data and in posttraining that can emerge during model training or when the model is in production. SageMaker Clarify helps explain how these models make predictions using a feature attribution approach. It also monitors inferences models make in production for bias or feature attribution drift. The fairness and explainability functionality provided by SageMaker Clarify provides components that help AWS customers build less biased and more understandable machine learning models. It also provides tools to help you generate model governance reports which you can use to inform risk and compliance teams, and external regulators. You can reference the [SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-fairness-and-explainability.html) for more information about SageMaker Clarify. Create model from estimator
###Code
model_1_name = f'{prefix}-xgboost-pre-smote'
%store model_1_name
model_matches = sagemaker_boto_client.list_models(NameContains=model_1_name)['Models']
if not model_matches:
model_1 = sagemaker_session.create_model_from_job(
name=model_1_name,
training_job_name=training_job_1_info['TrainingJobName'],
role=sagemaker_role,
image_uri=training_job_1_info['AlgorithmSpecification']['TrainingImage'])
else:
print(f"Model {model_1_name} already exists.")
###Output
_____no_output_____
###Markdown
Check for data set bias and model biasWith SageMaker, we can check for pre-training and post-training bias. Pre-training metrics show pre-existing bias in that data, while post-training metrics show bias in the predictions from the model. Using the SageMaker SDK, we can specify which groups we want to check bias across and which metrics we'd like to show. To run the full Clarify job, you must un-comment the code in the cell below. Running the job will take ~15 minutes. If you wish to save time, you can view the results in the next cell, which loads a pre-generated output if no bias job was run.
###Code
train_cols = wr.s3.read_csv(training_data_s3_uri).columns.to_list()
clarify_processor = sagemaker.clarify.SageMakerClarifyProcessor(
role=sagemaker_role,
instance_count=1,
instance_type='ml.c4.xlarge',
sagemaker_session=sagemaker_session)
bias_data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_uri,
s3_output_path=bias_report_1_output_path,
label='fraud',
headers=train_cols,
dataset_type='text/csv')
model_config = sagemaker.clarify.ModelConfig(
model_name=model_1_name,
instance_type=train_instance_type,
instance_count=1,
accept_type='text/csv')
predictions_config = sagemaker.clarify.ModelPredictedLabelConfig(probability_threshold=0.5)
bias_config = sagemaker.clarify.BiasConfig(
label_values_or_threshold=[0],
facet_name='customer_gender_female',
facet_values_or_threshold=[1])
# un-comment the code below to run the whole job
# if 'clarify_bias_job_1_name' not in locals():
# clarify_processor.run_bias(
# data_config=bias_data_config,
# bias_config=bias_config,
# model_config=model_config,
# model_predicted_label_config=predictions_config,
# pre_training_methods='all',
# post_training_methods='all')
# clarify_bias_job_1_name = clarify_processor.latest_job.name
# %store clarify_bias_job_1_name
# else:
# print(f'Clarify job {clarify_bias_job_1_name} has already run successfully.')
###Output
_____no_output_____
###Markdown
Results will be stored in `/opt/ml/processing/output/report.pdf`Training to achieve over 90 percent classification accuracy may be easily possible on an imbalanced classification problem. Thus, expectations about classification accuracy that are in reality contingent on a balanced class distribution will lead to wrong, misleading assumptions and conclusions: they mislead the data scientist and viewers into believing that a model has extremely good performance when, actually, it does not. View results of Clarify job (shortcut)Running Clarify on your dataset or model can take ~15 minutes. If you don't have time to run the job, you can view the pre-generated results included with this demo. Otherwise, you can run the job by un-commenting the code in the cell above.
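To make the point above concrete, here is a tiny, self-contained illustration (the 3% fraud rate is a made-up number, not taken from this dataset): a "model" that always predicts the majority class still reports high accuracy while catching no fraud at all.

```python
import numpy as np

# Hypothetical 3% positive (fraud) rate -- illustration only
y_true = np.array([1] * 3 + [0] * 97)
y_pred = np.zeros_like(y_true)          # always predict "not fraud"

accuracy = (y_true == y_pred).mean()
fraud_recall = (y_pred[y_true == 1] == 1).mean()
print(f'accuracy: {accuracy:.2f}, fraud recall: {fraud_recall:.2f}')
# -> accuracy: 0.97, fraud recall: 0.00
```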
###Code
if 'clarify_bias_job_1_name' in locals():
s3_client.download_file(Bucket=bucket, Key=f'{prefix}/clarify-output/bias_1/analysis.json', Filename='clarify_output/bias_1/analysis.json')
print(f'Downloaded analysis from previous Clarify job: {clarify_bias_job_1_name}')
else:
print(f'Loading pre-generated analysis file...')
with open('clarify_output/bias_1/analysis.json', 'r') as f:
bias_analysis = json.load(f)
results = bias_analysis['pre_training_bias_metrics']['facets']['customer_gender_female'][0]['metrics'][1]
print(json.dumps(results, indent=4))
###Output
_____no_output_____
###Markdown
In this example dataset, the data is biased against females, with only 38.9% of the data samples coming from female customers. We will address this in the next notebook, where we show how we mitigate this class imbalance bias. Although we are only addressing Class Imbalance as an exemplar of bias statistics, you can also take many other facets of bias into consideration. For more detail, see [Fairness Measures for Machine Learning in Finance](https://pages.awscloud.com/rs/112-TZM-766/images/Fairness.Measures.for.Machine.Learning.in.Finance.pdf). For a more detailed example, look at [this](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.ipynb) GitHub example. For more detailed results, let's look at the generated report, which can be found here: `s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf`
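As a sanity check on the class-imbalance figure quoted above, the pre-training CI metric can also be recomputed directly from the training data. This is an added sketch, not part of the original notebook; the facet column name `customer_gender_female` is the one used in `bias_config` above, and the CI formula (n_advantaged - n_disadvantaged) / (n_advantaged + n_disadvantaged) follows the Clarify definition of class imbalance.

```python
# Added sketch: recompute the class-imbalance (CI) metric for the gender facet.
train_df = wr.s3.read_csv(train_data_uri)
n_female = int((train_df['customer_gender_female'] == 1).sum())
n_male = len(train_df) - n_female

ci = (n_male - n_female) / (n_male + n_female)
print(f'female share: {n_female / len(train_df):.3f}, class imbalance (CI): {ci:.3f}')
```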
###Code
#uncomment to copy report and view
#!aws s3 cp s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf ./clarify_output
###Output
_____no_output_____
###Markdown
Deposit Model and Lineage in SageMaker Model Registry[overview](aud-overview)____Once a useful model has been trained and its artifacts properly associated, the next step is to save the model in a registry for future reference and possible deployment. Create Model Package GroupA Model Package Group holds multiple versions or iterations of a model. Though it is not required to create one for every model in the registry, they help organize various models which all have the same purpose and provide automatic versioning.
###Code
if 'mpg_name' not in locals():
mpg_name = prefix
%store mpg_name
print(f'Model Package Group name: {mpg_name}')
mpg_input_dict = {
'ModelPackageGroupName': mpg_name,
'ModelPackageGroupDescription': 'Insurance claim fraud detection'
}
matching_mpg = sagemaker_boto_client.list_model_package_groups(NameContains=mpg_name)['ModelPackageGroupSummaryList']
if matching_mpg:
print(f'Using existing Model Package Group: {mpg_name}')
else:
mpg_response = sagemaker_boto_client.create_model_package_group(**mpg_input_dict)
print(f'Create Model Package Group {mpg_name}: SUCCESSFUL')
%store mpg_name
###Output
_____no_output_____
###Markdown
Create Model Package for trained model Create and upload a metrics report
###Code
model_metrics_report = {'classification_metrics': {}}
for metric in training_job_1_info['FinalMetricDataList']:
stat = {metric['MetricName']: {'value': metric['Value']}}
model_metrics_report['classification_metrics'].update(stat)
with open('training_metrics.json', 'w') as f:
json.dump(model_metrics_report, f)
metrics_s3_key = f"{prefix}/training_jobs/{training_job_1_info['TrainingJobName']}/training_metrics.json"
s3_client.upload_file(Filename='training_metrics.json', Bucket=bucket, Key=metrics_s3_key)
###Output
_____no_output_____
###Markdown
Define the inference spec
###Code
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_1_info['AlgorithmSpecification']['TrainingImage'],
supports_gpu=False,
supported_content_types=['text/csv'],
supported_mime_types=['text/csv'])
mp_inference_spec['InferenceSpecification']['Containers'][0]['ModelDataUrl'] = training_job_1_info['ModelArtifacts']['S3ModelArtifacts']
###Output
_____no_output_____
###Markdown
Define model metricsMetrics other than model quality and bias can be defined. See the Boto3 documentation for [creating a model package](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model_package).
###Code
model_metrics = {
'ModelQuality': {
'Statistics': {
'ContentType': 'application/json',
'S3Uri': f's3://{bucket}/{metrics_s3_key}'
}
},
'Bias': {
'Report': {
'ContentType': 'application/json',
'S3Uri': f'{bias_report_1_output_path}/analysis.json'
}
}
}
mp_input_dict = {
'ModelPackageGroupName': mpg_name,
'ModelPackageDescription': 'XGBoost classifier to detect insurance fraud.',
'ModelApprovalStatus': 'PendingManualApproval',
'ModelMetrics': model_metrics
}
mp_input_dict.update(mp_inference_spec)
mp1_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
###Output
_____no_output_____
###Markdown
Wait until model package is completed
###Code
mp_info = sagemaker_boto_client.describe_model_package(ModelPackageName=mp1_response['ModelPackageArn'])
mp_status = mp_info['ModelPackageStatus']
while mp_status not in ['Completed', 'Failed']:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(ModelPackageName=mp1_response['ModelPackageArn'])
mp_status = mp_info['ModelPackageStatus']
print(f'model package status: {mp_status}')
print(f'model package status: {mp_status}')
###Output
_____no_output_____
###Markdown
View model package in registry
###Code
sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)['ModelPackageSummaryList']
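
# Added sketch: inspect the newest package version's approval status. After a manual
# review it could be approved for deployment with
# sagemaker_boto_client.update_model_package(ModelPackageArn=..., ModelApprovalStatus='Approved').
latest_mp = sagemaker_boto_client.describe_model_package(
    ModelPackageName=mp1_response['ModelPackageArn'])
print(latest_mp['ModelPackageArn'], latest_mp['ModelApprovalStatus'])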
###Output
_____no_output_____
###Markdown
Part 2: Train, Check Bias, Tune, Record Lineage, and Register a Model [Overview](./0-AutoClaimFraudDetection.ipynb)* [Notebook 0 : Overview, Architecture and Data Exploration](./0-AutoClaimFraudDetection.ipynb)* [Notebook 1: Data Prep, Process, Store Features](./1-data-prep-e2e.ipynb)* **[Notebook 2: Train, Check Bias, Tune, Record Lineage, and Register a Model](./2-lineage-train-assess-bias-tune-registry-e2e.ipynb)** * **[Architecture](train)** * **[Train a model using XGBoost](aud-train-model)** * **[Model lineage with artifacts and associations](model-lineage)** * **[Evaluate the model for bias with Clarify](check-bias)** * **[Deposit Model and Lineage in SageMaker Model Registry](model-registry)*** [Notebook 3: Mitigate Bias, Train New Model, Store in Registry](./3-mitigate-bias-train-model2-registry-e2e.ipynb)* [Notebook 4: Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)* [Notebook 5 : Create and Run an End-to-End Pipeline to Deploy the Model](./5-pipeline-e2e.ipynb) In this section we will show how you can assess pre-training and post-training bias with SageMaker Clarify, Train the Model using XGBoost on SageMaker, and then finally deposit it in the Model Registry, along with the Lineage of Artifacts that were created along the way: data, code and model metadata.In this second model, you will fix the gender imbalance in the dataset using SMOTE and train another model using XGBoost. This model will also be saved to our registry and eventually approved for deployment. Architecture for the ML Lifecycle Stage: Train, Check Bias, Tune, Record Lineage, Register Model[overview](overview)___![train-assess-tune-register](./images/e2e-2-pipeline-v3b.png) Install required and/or update libraries
###Code
!python -m pip install -Uq pip
!python -m pip install -q awswrangler==2.2.0 imbalanced-learn==0.7.0 sagemaker==2.23.1 boto3==1.16.48
###Output
_____no_output_____
###Markdown
To apply the update to the current kernel, run the following code to refresh the kernel.
###Code
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Load stored variablesRun the cell below to load any previously created variables. You should see a print-out of the existing variables. If you don't see anything, you may need to create them again, or it may be your first time running this notebook.
###Code
%store -r
%store
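
# Added sanity check (a sketch, not part of the original notebook): the names below
# are the stored variables this notebook appears to rely on from the previous notebook.
required_vars = ['bucket', 'prefix', 'train_data_uri']
missing = [name for name in required_vars if name not in globals()]
if missing:
    print(f'Missing stored variables: {missing} -- re-run the previous notebooks first.')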
###Output
_____no_output_____
###Markdown
**Important: You must have run the previous sequential notebooks to retrieve variables using the StoreMagic command.** Import libraries
###Code
import json
import time
import boto3
import sagemaker
import numpy as np
import pandas as pd
import awswrangler as wr
from sagemaker.xgboost.estimator import XGBoost
from model_package_src.inference_specification import InferenceSpecification
###Output
_____no_output_____
###Markdown
Set region, boto3 and SageMaker SDK variables
###Code
#You can change this to a region of your choice
import sagemaker
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client('s3', region_name=region)
sagemaker_boto_client = boto_session.client('sagemaker')
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session,
sagemaker_client=sagemaker_boto_client)
sagemaker_role = sagemaker.get_execution_role()
account_id = boto3.client('sts').get_caller_identity()["Account"]
# variables used for parameterizing the notebook run
estimator_output_path = f's3://{bucket}/{prefix}/training_jobs'
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
bias_report_1_output_path = f's3://{bucket}/{prefix}/clarify-output/bias_1'
xgb_model_name = 'xgb-insurance-claims-fraud-model'
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
predictor_instance_count = 1
predictor_instance_type = "ml.c5.xlarge"
batch_transform_instance_count = 1
batch_transform_instance_type = "ml.c5.xlarge"
clarify_instance_count = 1
clarify_instance_type = 'ml.c5.xlarge'
###Output
_____no_output_____
###Markdown
Train a model using XGBoost[overview](overview)___Once the training and test datasets have been persisted in S3, you can start training a model by defining which SageMaker Estimator you'd like to use. For this guide, you will use the [XGBoost Open Source Framework](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html) to train your model. This estimator is accessed via the SageMaker SDK, but mirrors the open source version of the [XGBoost Python package](https://xgboost.readthedocs.io/en/latest/python/index.html). Any functionality provided by the XGBoost Python package can be implemented in your training script. Set the hyperparametersThese are the parameters which will be sent to our training script in order to train the model. Although they are all defined as "hyperparameters" here, they can encompass XGBoost's [Learning Task Parameters](https://xgboost.readthedocs.io/en/latest/parameter.html#learning-task-parameters), [Tree Booster Parameters](https://xgboost.readthedocs.io/en/latest/parameter.html#parameters-for-tree-booster), or any other parameters you'd like to configure for XGBoost.
###Code
hyperparameters = {
"max_depth": "3",
"eta": "0.2",
"objective": "binary:logistic",
"num_round": "100",
}
%store hyperparameters
###Output
_____no_output_____
###Markdown
Create and fit the estimatorIf you want to explore the breadth of functionality offered by the SageMaker XGBoost Framework you can read about all the configuration parameters by referencing the inheriting classes. The XGBoost class inherits from the Framework class and Framework inherits from the EstimatorBase class:* [XGBoost Estimator documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html#sagemaker.xgboost.estimator.XGBoost)* [Framework documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.estimator.Framework)* [EstimatorBase documentation](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.estimator.EstimatorBase)
###Code
xgb_estimator = XGBoost(
entry_point = "xgboost_starter_script.py",
output_path = estimator_output_path,
code_location = estimator_output_path,
hyperparameters = hyperparameters,
role = sagemaker_role,
instance_count = train_instance_count,
instance_type = train_instance_type,
framework_version = "1.0-1")
if 'training_job_1_name' not in locals():
xgb_estimator.fit(inputs = {'train': train_data_uri})
training_job_1_name = xgb_estimator.latest_training_job.job_name
%store training_job_1_name
else:
print(f'Using previous training job: {training_job_1_name}')
###Output
_____no_output_____
###Markdown
Model lineage with artifacts and associations[Overview](aud-overview)___Amazon SageMaker ML Lineage Tracking creates and stores information about the steps of a machine learning (ML) workflow from data preparation to model deployment. With the tracking information you can reproduce the workflow steps, track model and dataset lineage, and establish model governance and audit standards. With SageMaker Lineage Tracking data scientists and model builders can do the following:* Keep a running history of model discovery experiments.* Establish model governance by tracking model lineage artifacts for auditing and compliance verification.* Clone and rerun workflows to experiment with what-if scenarios while developing models.* Share a workflow that colleagues can reproduce and enhance (for example, while collaborating on solving a business problem).* Clone and rerun workflows with additional debugging or logging routines, or new input variations for troubleshooting issues in production models. Register artifacts Although the `xgb_estimator` object retains much of the data we need to learn about how the model was trained, it is, in fact, an ephemeral object which SageMaker does not persist and cannot be re-instantiated at a later time. Although we lose some of its conveniences once it is gone, we can still get back all the data we need by accessing the training jobs it once created.
###Code
training_job_1_info = sagemaker_boto_client.describe_training_job(TrainingJobName=training_job_1_name)
###Output
_____no_output_____
###Markdown
Code artifact
###Code
# Return any existing artifact which matches our training job's code ARN:
# extract the training code URI and check whether it is an existing artifact
code_s3_uri = training_job_1_info['HyperParameters']['sagemaker_submit_directory']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=code_s3_uri,
sagemaker_session=sagemaker_session))
# use the existing artifact if it has already been created, otherwise create a new artifact
if matching_artifacts:
code_artifact = matching_artifacts[0]
print(f'Using existing artifact: {code_artifact.artifact_arn}')
else:
code_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainingScript',
source_uri=code_s3_uri,
artifact_type='Code',
sagemaker_session=sagemaker_session)
print(f'Create artifact {code_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_1_info['InputDataConfig'][0]['DataSource']['S3DataSource']['S3Uri']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=training_data_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f'Using existing artifact: {training_data_artifact.artifact_arn}')
else:
training_data_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainingData',
source_uri=training_data_s3_uri,
artifact_type='Dataset',
sagemaker_session=sagemaker_session)
print(f'Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Model artifact
###Code
trained_model_s3_uri = training_job_1_info['ModelArtifacts']['S3ModelArtifacts']
matching_artifacts = list(sagemaker.lineage.artifact.Artifact.list(
source_uri=trained_model_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f'Using existing artifact: {model_artifact.artifact_arn}')
else:
model_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name='TrainedModel',
source_uri=trained_model_s3_uri,
artifact_type='Model',
sagemaker_session=sagemaker_session)
print(f'Create artifact {model_artifact.artifact_arn}: SUCCESSFUL')
###Output
_____no_output_____
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(TrialComponentName=training_job_1_name+'-aws-training-job')
trial_component_arn = trial_component['TrialComponentArn']
###Output
_____no_output_____
###Markdown
Input artifacts
###Code
input_artifacts = [code_artifact, training_data_artifact]
for a in input_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type='ContributedTo',
sagemaker_session=sagemaker_session)
print(f"Association with {a.artifact_type}: SUCCEESFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Output artifacts
###Code
output_artifacts = [model_artifact]
for a in output_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type='Produced',
sagemaker_session=sagemaker_session)
print(f"Association with {a.artifact_type}: SUCCESSFUL")
except:
print(f"Association already exists with {a.artifact_type}")
###Output
_____no_output_____
###Markdown
Evaluate model for bias with Clarify[overview](aud-overview)___Amazon SageMaker Clarify helps improve your machine learning (ML) models by detecting potential bias and helping explain the predictions that models make. It helps you identify various types of bias in pretraining data and in posttraining that can emerge during model training or when the model is in production. SageMaker Clarify helps explain how these models make predictions using a feature attribution approach. It also monitors inferences models make in production for bias or feature attribution drift. The fairness and explainability functionality provided by SageMaker Clarify provides components that help AWS customers build less biased and more understandable machine learning models. It also provides tools to help you generate model governance reports which you can use to inform risk and compliance teams, and external regulators. You can reference the [SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-fairness-and-explainability.html) for more information about SageMaker Clarify. Create model from estimator
###Code
model_1_name = f'{prefix}-xgboost-pre-smote'
%store model_1_name
model_matches = sagemaker_boto_client.list_models(NameContains=model_1_name)['Models']
if not model_matches:
model_1 = sagemaker_session.create_model_from_job(
name=model_1_name,
training_job_name=training_job_1_info['TrainingJobName'],
role=sagemaker_role,
image_uri=training_job_1_info['AlgorithmSpecification']['TrainingImage'])
else:
print(f"Model {model_1_name} already exists.")
###Output
_____no_output_____
###Markdown
Check for data set bias and model biasWith SageMaker, we can check for pre-training and post-training bias. Pre-training metrics show pre-existing bias in that data, while post-training metrics show bias in the predictions from the model. Using the SageMaker SDK, we can specify which groups we want to check bias across and which metrics we'd like to show. To run the full Clarify job, you must un-comment the code in the cell below. Running the job will take ~15 minutes. If you wish to save time, you can view the results in the next cell, which loads a pre-generated output if no bias job was run.
###Code
train_cols = wr.s3.read_csv(training_data_s3_uri).columns.to_list()
clarify_processor = sagemaker.clarify.SageMakerClarifyProcessor(
role=sagemaker_role,
instance_count=1,
instance_type='ml.c4.xlarge',
sagemaker_session=sagemaker_session)
bias_data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_uri,
s3_output_path=bias_report_1_output_path,
label='fraud',
headers=train_cols,
dataset_type='text/csv')
model_config = sagemaker.clarify.ModelConfig(
model_name=model_1_name,
instance_type=train_instance_type,
instance_count=1,
accept_type='text/csv')
predictions_config = sagemaker.clarify.ModelPredictedLabelConfig(probability_threshold=0.5)
bias_config = sagemaker.clarify.BiasConfig(
label_values_or_threshold=[0],
facet_name='customer_gender_female',
facet_values_or_threshold=[1])
# un-comment the code below to run the whole job
# if 'clarify_bias_job_1_name' not in locals():
# clarify_processor.run_bias(
# data_config=bias_data_config,
# bias_config=bias_config,
# model_config=model_config,
# model_predicted_label_config=predictions_config,
# pre_training_methods='all',
# post_training_methods='all')
# clarify_bias_job_1_name = clarify_processor.latest_job.name
# %store clarify_bias_job_1_name
# else:
# print(f'Clarify job {clarify_bias_job_1_name} has already run successfully.')
###Output
_____no_output_____
###Markdown
Results will be stored in `/opt/ml/processing/output/report.pdf`Training to achieve over 90 percent classification accuracy may be easily possible on an imbalanced classification problem. Thus, expectations about classification accuracy that are in reality contingent on a balanced class distribution will lead to wrong, misleading assumptions and conclusions: they mislead the data scientist and viewers into believing that a model has extremely good performance when, actually, it does not. View results of Clarify job (shortcut)Running Clarify on your dataset or model can take ~15 minutes. If you don't have time to run the job, you can view the pre-generated results included with this demo. Otherwise, you can run the job by un-commenting the code in the cell above.
###Code
if 'clarify_bias_job_1_name' in locals():
s3_client.download_file(Bucket=bucket, Key=f'{prefix}/clarify-output/bias_1/analysis.json', Filename='clarify_output/bias_1/analysis.json')
print(f'Downloaded analysis from previous Clarify job: {clarify_bias_job_1_name}')
else:
print(f'Loading pre-generated analysis file...')
with open('clarify_output/bias_1/analysis.json', 'r') as f:
bias_analysis = json.load(f)
results = bias_analysis['pre_training_bias_metrics']['facets']['customer_gender_female'][0]['metrics'][1]
print(json.dumps(results, indent=4))
###Output
_____no_output_____
###Markdown
In this example dataset, the data is biased against females, with only 38.9% of the data samples coming from female customers. We will address this in the next notebook, where we show how we mitigate this class imbalance bias. Although we are only addressing Class Imbalance as an exemplar of bias statistics, you can also take many other facets of bias into consideration. For more detail, see [Fairness Measures for Machine Learning in Finance](https://pages.awscloud.com/rs/112-TZM-766/images/Fairness.Measures.for.Machine.Learning.in.Finance.pdf). For a more detailed example, look at [this](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.ipynb) GitHub example. For more detailed results, let's look at the generated report, which can be found here: `s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf`
###Code
#uncomment to copy report and view
#!aws s3 cp s3://{bucket}/{prefix}/clarify-output/bias_1/report.pdf ./clarify_output
###Output
_____no_output_____
###Markdown
Deposit Model and Lineage in SageMaker Model Registry[overview](aud-overview)____Once a useful model has been trained and its artifacts properly associated, the next step is to save the model in a registry for future reference and possible deployment. Create Model Package GroupA Model Package Group holds multiple versions or iterations of a model. Though it is not required to create one for every model in the registry, they help organize various models which all have the same purpose and provide automatic versioning.
###Code
if 'mpg_name' not in locals():
mpg_name = prefix
%store mpg_name
print(f'Model Package Group name: {mpg_name}')
mpg_input_dict = {
'ModelPackageGroupName': mpg_name,
'ModelPackageGroupDescription': 'Insurance claim fraud detection'
}
matching_mpg = sagemaker_boto_client.list_model_package_groups(NameContains=mpg_name)['ModelPackageGroupSummaryList']
if matching_mpg:
print(f'Using existing Model Package Group: {mpg_name}')
else:
mpg_response = sagemaker_boto_client.create_model_package_group(**mpg_input_dict)
print(f'Create Model Package Group {mpg_name}: SUCCESSFUL')
%store mpg_name
###Output
_____no_output_____
###Markdown
Create Model Package for trained model Create and upload a metrics report
###Code
model_metrics_report = {'classification_metrics': {}}
for metric in training_job_1_info['FinalMetricDataList']:
stat = {metric['MetricName']: {'value': metric['Value']}}
model_metrics_report['classification_metrics'].update(stat)
with open('training_metrics.json', 'w') as f:
json.dump(model_metrics_report, f)
metrics_s3_key = f"{prefix}/training_jobs/{training_job_1_info['TrainingJobName']}/training_metrics.json"
s3_client.upload_file(Filename='training_metrics.json', Bucket=bucket, Key=metrics_s3_key)
###Output
_____no_output_____
###Markdown
Define the inference spec
###Code
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_1_info['AlgorithmSpecification']['TrainingImage'],
supports_gpu=False,
supported_content_types=['text/csv'],
supported_mime_types=['text/csv'])
mp_inference_spec['InferenceSpecification']['Containers'][0]['ModelDataUrl'] = training_job_1_info['ModelArtifacts']['S3ModelArtifacts']
###Output
_____no_output_____
###Markdown
Define model metricsMetrics other than model quality and bias can be defined. See the Boto3 documentation for [creating a model package](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model_package).
###Code
model_metrics = {
'ModelQuality': {
'Statistics': {
'ContentType': 'application/json',
'S3Uri': f's3://{bucket}/{metrics_s3_key}'
}
},
'Bias': {
'Report': {
'ContentType': 'application/json',
'S3Uri': f'{bias_report_1_output_path}/analysis.json'
}
}
}
mp_input_dict = {
'ModelPackageGroupName': mpg_name,
'ModelPackageDescription': 'XGBoost classifier to detect insurance fraud.',
'ModelApprovalStatus': 'PendingManualApproval',
'ModelMetrics': model_metrics
}
mp_input_dict.update(mp_inference_spec)
mp1_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
###Output
_____no_output_____
###Markdown
Wait until model package is completed
###Code
mp_info = sagemaker_boto_client.describe_model_package(ModelPackageName=mp1_response['ModelPackageArn'])
mp_status = mp_info['ModelPackageStatus']
while mp_status not in ['Completed', 'Failed']:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(ModelPackageName=mp1_response['ModelPackageArn'])
mp_status = mp_info['ModelPackageStatus']
print(f'model package status: {mp_status}')
print(f'model package status: {mp_status}')
###Output
_____no_output_____
###Markdown
View model package in registry
###Code
sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)['ModelPackageSummaryList']
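
# Added sketch: fetch the most recently created package version first by sorting
# on creation time, then report its approval status.
latest_packages = sagemaker_boto_client.list_model_packages(
    ModelPackageGroupName=mpg_name,
    SortBy='CreationTime',
    SortOrder='Descending')['ModelPackageSummaryList']
if latest_packages:
    print(latest_packages[0]['ModelPackageArn'], latest_packages[0]['ModelApprovalStatus'])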
###Output
_____no_output_____ |
notebooks/mnist-dataset.ipynb | ###Markdown
MNIST The MNIST database is a collection of handwritten digit images which are used to train a neural network to recognize digits from paper. MNIST also provides a database for testing the neural network. The training images file contains 60000 images and the test images file contains 10000 images. These files are in a special binary format, therefore they have to be read byte by byte.
###Code
# Gzip for unzipping the downloaded file
import gzip
# Io for file reading
import io
# Urlopen for downloading the file
from urllib.request import urlopen
#Import numpy for creating the base array
import numpy as np
#Inline plotting magic for Jupyter compatibility
%matplotlib inline
#Import matplotlib
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Read the training image fileThe file has to be read as big endian. The first four bytes are the magic number which identifies the file type.The data in this file is a 3 dimensional array:* Bytes 4 to 8 are the number of pictures in the dataset (The size of the outer array)* Bytes 8 to 12 are the number of rows (Size of the middle array) * Bytes 12 to 16 are the number of columns (Size of the last array)* From byte 16 until the end of the file are the pixels of the pictures. Open the fileTwo ways are provided to acquire the file and its content: Open a local file with gzipThis cell is commented out and is here for presentation purposes. The file should be loaded only one way.
###Code
#https://docs.python.org/3/library/gzip.html
#import gzip
#Unzip the training images
#with gzip.open('data/train-images-idx3-ubyte.gz', 'rb') as f:
# file_content = f.read()
#print('File read in')
###Output
_____no_output_____
###Markdown
Open a url to download the file and open the downloaded gzip file
###Code
#https://stackoverflow.com/questions/2695152/in-python-how-do-i-decode-gzip-encoding
#It is a modified version of Michal Niklas's answer
#Download the file
inmemory_file=urlopen('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz').read()
#Open the file
with gzip.open(io.BytesIO(inmemory_file),'rb') as f:
train_image_file_content = f.read()
print('File read in')
###Output
File read in
###Markdown
Do byte confirmation and read the first 16 bytes
###Code
#Confirm that the first four bytes are 2051
is_it_the_right_bytes=int.from_bytes(train_image_file_content[0:4], byteorder='big')==2051
print('Is the magic number correct: %r' % is_it_the_right_bytes)
#Number of pictures should be from bytes 4 to 8 and should be read in big endian
pictures_number=int.from_bytes(train_image_file_content[4:8], byteorder='big')
print('Number of pictures: %d' % pictures_number)
#Number of rows should be from 8 to 12
rows_number=int.from_bytes(train_image_file_content[8:12], byteorder='big')
print ('Number of rows: %d' % rows_number)
#Number of columns should be from 12 to 16
columns_number=int.from_bytes(train_image_file_content[12:16], byteorder='big')
print ('Number of columns: %d' % columns_number)
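
# Added sketch: the same 16-byte IDX header can also be parsed in one call with the
# standard-library struct module ('>' = big endian, 'I' = 4-byte unsigned integer).
import struct
magic, n_images, n_rows, n_cols = struct.unpack('>IIII', train_image_file_content[:16])
print(magic, n_images, n_rows, n_cols)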
###Output
Number of columns: 28
###Markdown
The pixels are from byte 16 until the end of the file. Each batch of 784 bytes is a picture (28*28). Read the data into an arrayThe easiest way to read the bytes is to do a while loop. The loop has to start from position 16, as the bytes before this position are metadata. The while loop should run until the last byte is consumed, therefore it runs until the size of the ```file_content``` byte array. The iteration step is the number of columns in a picture array: 28. Slices of bytes are taken from ```file_content``` and converted to int at each iteration. This converted array is added to a row and the row counter is updated. Once the row counter exceeds the number of pixel rows of a picture, it is reset to the first row and the picture counter is increased by one.
###Code
#Function taking the file content, number of pictures, number of columns, number of rows and the starting offset
#It converts the file content byte by byte into ints, collects them into a 3D array and returns the array
def load_pictures_to_array(file_content,pictures_number,columns_number,rows_number,offset):
# Set up an array for picture storage
pictures=np.zeros((pictures_number,rows_number,columns_number),dtype=int)
#The current row a picture 1-28
current_row=1
#The current picture 1-59999
current_image=0
#The iteration index
i=offset
#Run a loop until the end of the byte array
while i<len(file_content):
#Convert a row to int types
a=np.frombuffer(file_content[i:i+columns_number],dtype=np.uint8)
#Set the row the current picture
pictures[current_image][current_row-1]=a
#Go to next row
current_row+=1
#If the current row is the same as the size of the rows
if(current_row>rows_number):
#Set the row to number 1
current_row=1
#Go to the next picture
current_image+=1
#Increase the counter with the size of the columns
i+=columns_number
return pictures
#Import time for the speed measurement
import time
#Print out the size of file_content
print('Original content length:'+str(len(train_image_file_content)))
#The starting time of the algorithm
start_time = time.time()
#Read pictures into an array
training_pictures = load_pictures_to_array(train_image_file_content,pictures_number,columns_number,rows_number,16)
#Print out the running time of the algorithm
print("Run for %s seconds." % (time.time() - start_time))
###Output
Original content length:47040016
Run for 5.6527626514434814 seconds.
###Markdown
Confirm dataTo confirm the data was read in correctly, ```matplotlib``` can be used. This library can plot an array of pixels as a picture.Three different items can be printed from three different positions of the array. * The first one at index 0* The middle one at index 30000* The last one at index 59999If all three pictures are numbers at the same position on the picture, that means the reading was successful and there wasn't any shifting in the rows or columns.
###Code
#Create figure
fig = plt.figure(figsize=(17,8))
#Set pyplot to gray scale as the pixels are 0 to 255 on a gray scale
plt.gray()
p=fig.add_subplot(131)
#Plot the first image
p.imshow(training_pictures[0], interpolation='nearest')
#Plot the middle image
p2=fig.add_subplot(132)
p2.imshow(training_pictures[30000], interpolation='nearest')
#Plot the last image
p3=fig.add_subplot(133)
p3.imshow(training_pictures[59999], interpolation='nearest')
###Output
_____no_output_____
###Markdown
Read the labels for the training imagesThe file has to be read as big endian. The first four bytes are the magic number which identifies the file type.The data in this file is an array:* Bytes 5 to 8 are the number of labels in the dataset (The size of the outer array)* From byte 9 until the end of the file each byte is a label. Open the file
###Code
#Download the file
inmemory_file=urlopen('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz').read()
#Open the file
with gzip.open(io.BytesIO(inmemory_file),'rb') as f:
#Overides the previously used file_content
train_label_file_content = f.read()
print('File read in')
###Output
File read in
###Markdown
Do byte confirmation and read the first 8 bytes
###Code
#Confirm that the first four bytes are 2049
is_it_the_right_bytes=int.from_bytes(train_label_file_content[0:4], byteorder='big')==2049
print('Is the magic number correct: %r' % is_it_the_right_bytes)
#Number of pictures should be from bytes 4 to 8 and should be read in big endian
label_number=int.from_bytes(train_label_file_content[4:8], byteorder='big')
print('Number of Labels: %d' % label_number)
###Output
Number of Labels: 60000
###Markdown
Read the data into an arrayThe reading happens the same way as the reading of pictures. This time a simple int array is loaded in.
###Code
#Function taking the file content, number of labels, and the starting offset
#It converts the file content byte by byte into ints, collects them into an array and returns the array
def load_labels_to_array(file_content,label_number,offset=0):
# Collect the files into an array.
labels=np.frombuffer(file_content[offset:label_number+offset],dtype=np.uint8)
return labels
#Print out the size of file_content
print('Original content length:'+str(len(train_label_file_content)))
#The starting time of the algorithm
start_time = time.time()
#Convert label bytes to numbers and collect to an array
training_labels=load_labels_to_array(train_label_file_content,label_number,8)
#Print out the running time of the algorithm
print("Run for %s seconds." % (time.time() - start_time))
###Output
Original content length:60008
Run for 0.0 seconds.
###Markdown
Confirm dataTo confirm the data was read in correctly, the same indexes should be checked in the label array as in the training picture array:Three different items can be printed from three different positions of the array. * The first one at index 0* The middle one at index 30000* The last one at index 59999If all three numbers are the same numbers as the pictures presented above then the conversion was successful.
###Code
print("Number of labels: %d" % len(training_labels))
print("Label at index 0: %d" % training_labels[0])
print("Label at index 30000: %d" % training_labels[30000])
print("Label at index 59999: %d" % training_labels[59999])
###Output
Number of labels: 60000
Label at index 0: 5
Label at index 30000: 3
Label at index 59999: 8
###Markdown
Read the test image fileThis file is the same format as the training image file.The data in this file is a 3 dimensional array:* Bytes 4 to 8 are the number of pictures in the dataset (The size of the outer array)* Bytes 8 to 12 are the number of rows (Size of the middle array) * Bytes 12 to 16 are the number of columns (Size of the last array)* From byte 16 until the end of the file are the pixels of the pictures. Open the file Open a url to download the file and open the downloaded gzip file
###Code
#Download the file
inmemory_file=urlopen('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz').read()
#Open the file
with gzip.open(io.BytesIO(inmemory_file),'rb') as f:
#Overides the previously used file_content
test_image_file_content = f.read()
print('File read in')
###Output
File read in
###Markdown
Do byte confirmation and read the first 16 bytes
###Code
#Confirm that the first four bytes are 2051
is_it_the_right_bytes=int.from_bytes(test_image_file_content[0:4], byteorder='big')==2051
print('Is the magic number correct: %r' % is_it_the_right_bytes)
#Number of pictures should be from bytes 4 to 8 and should be read in big endian
test_pictures_number=int.from_bytes(test_image_file_content[4:8], byteorder='big')
print('Number of pictures: %d' % test_pictures_number)
#Number of rows should be from 8 to 12
test_rows_number=int.from_bytes(test_image_file_content[8:12], byteorder='big')
print ('Number of rows: %d' % test_rows_number)
#Number of columns should be from 12 to 16
test_columns_number=int.from_bytes(test_image_file_content[12:16], byteorder='big')
print ('Number of columns: %d' % test_columns_number)
###Output
Number of columns: 28
###Markdown
The pixels are from byte 16 until the end of the file. Each batch of 784 bytes is a picture (28*28). Read the data into an arrayThe reading is done with the function defined above, the exact same way as the reading of training images.
###Code
#Print out the size of file_content
print('Original content length:'+str(len(test_image_file_content)))
#The starting time of the algorithm
start_time = time.time()
#Read pictures into an array
test_pictures = load_pictures_to_array(test_image_file_content,test_pictures_number,test_columns_number,test_rows_number,16)
#Print out the running time of the algorithm
print("Run for %s seconds." % (time.time() - start_time))
###Output
Original content length:7840016
Run for 1.0086684226989746 seconds.
###Markdown
Confirm dataTo confirm the data was read in correctly, ```matplotlib``` can be used the same way as above with the training data set. Three different items can be printed from three different positions of the array. * The first one at index 0* The middle one at index 5000* The last one at index 9999
###Code
#Create figure
fig = plt.figure(figsize=(17,8))
#Set pyplot to gray scale as the pixels are 0 to 255 on graye scale
plt.gray()
p=fig.add_subplot(131)
#Plot the first image
p.imshow(test_pictures[0], interpolation='nearest')
#Plot the middle image
p2=fig.add_subplot(132)
p2.imshow(test_pictures[5000], interpolation='nearest')
#Plot the last image
p3=fig.add_subplot(133)
p3.imshow(test_pictures[9999], interpolation='nearest')
###Output
_____no_output_____
###Markdown
Read the labels for the test imagesThis file is the same format as the training label file.The data in this file is an array:* Bytes 5 to 8 are the number of labels in the dataset (The size of the outer array)* From byte 9 until the end of the file each byte is a label. Open the file
###Code
#Download the file
inmemory_file=urlopen('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz').read()
#Open the file
with gzip.open(io.BytesIO(inmemory_file),'rb') as f:
#Overides the previously used file_content
test_label_file_content = f.read()
print('File read in')
###Output
File read in
###Markdown
Do byte confirmation and read the first 8 bytes
###Code
#Confirm that the first four bytes are 2049
is_it_the_right_bytes=int.from_bytes(test_label_file_content[0:4], byteorder='big')==2049
print('Is the magic number correct: %r' % is_it_the_right_bytes)
#Number of pictures should be from bytes 4 to 8 and should be read in big endian
label_number=int.from_bytes(test_label_file_content[4:8], byteorder='big')
print('Number of Labels: %d' % label_number)
###Output
Number of Labels: 10000
###Markdown
Read the data into an arrayThe reading is done with the above defined function the exact same way as the reading of training labels.
###Code
#Print out the size of file_content
print('Original content length:'+str(len(test_label_file_content)))
#The starting time of the algorithm
start_time = time.time()
#Convert label bytes to numbers and collect to an array
test_labels=load_labels_to_array(test_label_file_content,label_number,8)
#Print out the running time of the algorithm
print("Run for %s seconds." % (time.time() - start_time))
###Output
Original content length:10008
Run for 0.0 seconds.
###Markdown
Confirm dataTo confirm the data was read in correctly, the same indexes should be checked in the label array as in the test picture array:Three different items can be printed from three different positions of the array. * The first one at index 0* The middle one at index 5000* The last one at index 9999If all three numbers are the same numbers as the pictures presented above then the conversion was successful.
###Code
print("Number of labels: %d" % len(test_labels))
print("Label at index 0: %d" % test_labels[0])
print("Label at index 30000: %d" % test_labels[5000])
print("Label at index 59999: %d" % test_labels[9999])
###Output
Number of labels: 10000
Label at index 0: 7
Label at index 5000: 3
Label at index 9999: 6
###Markdown
PerformanceIn this section I compare the different versions of the parsing algorithm I developed with numpy and the example from the lecture notes. It is a kind of "evolution" of the algorithm. New algorithmThe above methods do not perform well. 2.6 seconds for 60000 pictures is slow. In the digit recognition script I optimized the image and label parsing process.The new algorithm merges the image and label reading into one loop. At each iteration 784 items are taken from the raw images file and added to the image array; at the same time 1 label is taken from the label file and added to the label array. This process produces faster reading times.
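The comparisons below use a single `time.time()` difference. As an added sketch (not part of the original benchmark), the standard-library `timeit` module can repeat each parser a few times and take the best run, which is usually a more stable way to compare them:

```python
import timeit

def time_parser(fn, repeats=3):
    """Return the best wall-clock time in seconds of `fn` over `repeats` runs."""
    return min(timeit.repeat(fn, number=1, repeat=repeats))

# Example usage with the loop-based parser defined earlier in this notebook
best = time_parser(lambda: load_pictures_to_array(
    train_image_file_content, pictures_number, columns_number, rows_number, 16))
print(f'best of 3 runs: {best:.3f} s')
```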
###Code
def loadimagesAndLabelsToArrays(image_file_content, images_number: int, columns_number: int, rows_number: int, images_offset: int, label_file_content, labels_offset: int):
"""
Loads a set of images and labels into two arrays.
The number of images and labels has to match
The method reads in each image flat, as a vector of columns_number*rows_number values.
"""
# Set up an array for image storage
images = np.zeros(
(images_number, columns_number*rows_number), dtype=float)
labels = np.zeros(
(images_number), dtype=int)
# The current image 1-59999
current_image = 0
# The iteration index
i = images_offset
print("Converting images and labels to array. Number of items: %d" %
images_number)
# Run a loop until the end of the byte array
while i < len(image_file_content):
# Convert a row to float types and normalise it for better machine learning performance
a = np.frombuffer(
image_file_content[i:i+columns_number*rows_number], dtype=np.uint8)
# Set the current image
images[current_image] = a
# Normalise the numbers to be between 0 and 1
images[current_image] /= 255
# Read in the label for this image
labels[current_image] = int.from_bytes(
label_file_content[current_image+labels_offset:current_image+labels_offset+1], byteorder='big')
# Go to the next image
current_image += 1
# Increase the counter with the size of the columns
i += columns_number*rows_number
return images, labels
#The starting time of the algorithm
start_time = time.time()
#Read pictures and labels into an array
training_pictures, training_labels= loadimagesAndLabelsToArrays(train_image_file_content,60000,columns_number,rows_number,16,train_label_file_content,8)
test_pictures, test_labels= loadimagesAndLabelsToArrays(test_image_file_content,10000,columns_number,rows_number,16,test_label_file_content,8)
own_algo_time=time.time() - start_time
#Print out the running time of the algorithm
print("Run for %s seconds." % own_algo_time)
###Output
Converting images and labels to array. Number of items: 60000
Converting images and labels to array. Number of items: 10000
Run for 1.4839863777160645 seconds.
###Markdown
Example reading approach from classThe reading is a one-liner for each file, which is simple, but if we break down what happens:1. Convert bytes to a list2. Convert the list to an np.array3. Reshape the array4. Change the type of the itemsIt loops over the content four times (and who knows how many inner loops there can be), which is not efficient at all.
###Code
#The starting time of the algorithm
start_time = time.time()
training_pictures = ~np.array(list(train_image_file_content[16:])).reshape(60000, 784).astype(np.uint8)
training_labels = np.array(list(train_label_file_content[ 8:])).astype(np.uint8)
test_pictures = ~np.array(list(test_image_file_content[16:])).reshape(10000, 784).astype(np.uint8)
test_labels = np.array(list(test_label_file_content[ 8:])).astype(np.uint8)
example_code_time=time.time() - start_time
#Print out the running time of the algorithm
print("Run for %s seconds." % (time.time() - start_time))
###Output
Run for 5.905964374542236 seconds.
###Markdown
Quickest solution with numpyI changed the above array and list conversion to `np.frombuffer`. This solution parses the files in under ~0.3 seconds.
###Code
def __loadimagesAndLabelsToArrays(image_file_content, images_number: int, columns_number: int, rows_number: int, images_offset: int, label_file_content, labels_offset: int):
"""
Loads a set of images and labels into two arrays.
The number of images and labels has to match
The method reads in each image flat as columns_number*rows_number.
"""
images = np.frombuffer(image_file_content[images_offset:], dtype=np.uint8).reshape(
images_number, columns_number*rows_number)/255
labels = np.frombuffer(
label_file_content[labels_offset:], dtype=np.uint8)
return images, labels
#The starting time of the algorithm
start_time = time.time()
#Read pictures and labels into an array
training_pictures, training_labels= __loadimagesAndLabelsToArrays(train_image_file_content,60000,columns_number,rows_number,16,train_label_file_content,8)
test_pictures, test_labels= __loadimagesAndLabelsToArrays(test_image_file_content,10000,columns_number,rows_number,16,test_label_file_content,8)
own_algo_time=time.time() - start_time
#Print out the running time of the algorithm
print("Run for %s seconds." % own_algo_time)
###Output
Run for 0.33021974563598633 seconds.
|
notebooks/Galaxies.ipynb | ###Markdown
Galaxies Rotation curve
###Code
%pylab nbagg
from astropy import units as u
from astropy import constants as c
G = c.G # gravitational constant
G.to('km**3/(Msun*s**2)')
M = 10**11 * c.M_sun # mass of a galaxy
M.to('Msun')
Rsun = 8*u.kpc # distance of the sun from
# the galactic center
Rsun.to('km')
vc = sqrt(G*M/Rsun)
vc.to('km/s') # velocity of sun orbiting the
# galactic center
v_sun = Rsun*2*pi/(220*u.Myr)
v_sun.to('km/s')
# Kepler's third law in solar-system units: M [Msun] = a**3 [AU] / P**2 [yr]
a = 8.5*u.kpc
P = 240 *u.Myr
aa = a.to('au').value
PP = P.to('yr').value
M = aa**3 / PP**2
M/1e9
###Output
_____no_output_____
###Markdown
Mass of Milky Way galaxy
###Code
from astropy import units as u
from astropy import constants as c
import numpy as np
a = 8.5*u.kpc
P = 240 *u.Myr
M = 4*np.pi**2 * a**3 /(c.G*P**2)
M.to('M_sun')
###Output
_____no_output_____
###Markdown
Mass of central black hole
###Code
alpha = 0.11*u.arcsec
a = alpha.to('rad').value * 8.42*u.kpc
alpha.to('rad')
a.to('AU')
P = 15.78*u.yr
M = 4*np.pi**2 * a**3 /(c.G*P**2)
M.to('M_sun')
###Output
_____no_output_____ |
tutorials/W1D3_ModelFitting/W1D3_Tutorial4.ipynb | ###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cells response.In this case our model can be writen as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
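###Markdown
Side note (added for reference): the OLS estimate solves the normal equations $\mathbf{X}^\top\mathbf{X}\boldsymbol{\hat\theta} = \mathbf{X}^\top\mathbf{y}$, and explicitly inverting $\mathbf{X}^\top\mathbf{X}$ can become numerically fragile when regressors are nearly collinear. A minimal sketch of the same fit using NumPy's built-in least-squares solver (reusing the `X` and `y` defined above) is shown below; for a full-rank design matrix it should agree with `ordinary_least_squares` up to floating-point precision.
###Code
# Alternative solve via np.linalg.lstsq, which avoids forming an explicit inverse
theta_hat_lstsq, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat_lstsq)
###Output
_____no_output_____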
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector of the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$. Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
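###Markdown
Aside (added note): for a one-dimensional input, NumPy can build the same matrix directly — `np.vander(x, N=order + 1, increasing=True)` produces the columns $x^0, x^1, \ldots, x^k$ — so it makes a convenient cross-check for the `make_design_matrix` solution above.
###Code
# Minimal cross-check of the design matrix against np.vander (1-D inputs only)
X_vander = np.vander(x, N=order + 1, increasing=True)
print(np.allclose(X_vander, make_design_matrix(x, order)))  # expected to print True
###Output
_____no_output_____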
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
# theta_hats = solve_poly_reg(x, y, max_order)
# plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
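###Markdown
Aside (added note): `np.polyfit` solves the same least-squares problem for a single 1-D input, so it can serve as a sanity check on `solve_poly_reg`. Keep in mind that `np.polyfit` returns coefficients from the highest power down, i.e. in the reverse order of the weights stored in `theta_hats`.
###Code
# Minimal cross-check against np.polyfit for one chosen order
check_order = 3
coeffs = np.polyfit(x, y, deg=check_order)  # highest power first
print(np.allclose(coeffs[::-1], theta_hats[check_order]))  # expected to print True
###Output
_____no_output_____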
###Markdown
Section 2.3: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
############################################################################
# TODO for students: Compute MSE (fill in ... sections)
#############################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Tutorial 4: Multiple linear regression and polynomial regression**Week 1, Day 3: Model Fitting****By Neuromatch Academy****Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 35 minutes*This is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/2mkq4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: Multiple Linear Regression and Polynomial Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11Z4y1u7cf", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
def evaluate_fits(order_list, mse_list):
""" Compare the quality of multiple polynomial fits
by plotting their MSE values.
Args:
order_list (list): list of the order of polynomials to be compared
mse_list (list): list of the MSE values for the corresponding polynomial fit
"""
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression*Estimated timing to here from start of tutorial: 8 min*This video covers linear regression with multiple inputs (more than 1D) and polynomial regression. Click here for text recap of video Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)".We want to find an optimal vector of paramters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cells response.In this case our model can be writen for a single data point as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or for multiple data points in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}When we refer to $x_{i, j}$, we mean that it is the i-th data point and the j-th feature of that data point.For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
# @markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Coding Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
###Markdown
After filling in this function, you should see that $\boldsymbol{\hat\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\boldsymbol{\hat\theta}$, we can obtain $\hat{\mathbf{y}}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
# @markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial RegressionSo far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression. Click here for text recap of relevant part of video Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y$ given the input values $x$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:\begin{align}y = \theta_0 + \theta x + \epsilon\end{align}With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:\begin{align}y & = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \epsilon\end{align}We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
# @markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regression*Estimated timing to here from start of tutorial: 16 min*Now we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. Let's go back to one feature for each data point. For linear regression, we used $\mathbf{X} = \mathbf{x}$ as the input data, where $\mathbf{x}$ is a vector where each element is the input for a single data point. To add a constant bias (a y-intercept in a 2-D plot), we use $\mathbf{X} = \big[ \boldsymbol 1, \mathbf{x} \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiples with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). This matrix $\mathbf{X}$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $\mathbf{x}^2, \mathbf{x}^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:\begin{align}\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}^1, \mathbf{x}^2 , \ldots , \mathbf{x}^k \big],\end{align}where $\boldsymbol{1}$ is the vector the same length as $\mathbf{x}$ consisting of of all ones, and $\mathbf{x}^p$ is the vector $\mathbf{x}$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = \mathbf{x}^0$ and $\mathbf{x}^1 = \mathbf{x}$. If we have inputs with more than one feature, we can use a similar design matrix but include all features raised to each power. Imagine that we have two features per data point: $\mathbf{x}_m$ is a vector of one feature per data point and $\mathbf{x}_n$ is another. Our design matrix for a polynomial regression would be:\begin{align}\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}_m^1, \mathbf{x}_n^1, \mathbf{x}_m^2 , \mathbf{x}_n^2\ldots , \mathbf{x}_m^k , \mathbf{x}_n^k \big],\end{align} Coding Exercise 2.1: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
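###Markdown
Aside (added note): the text above also describes the two-feature design matrix $\big[ \boldsymbol 1, \mathbf{x}_m, \mathbf{x}_n, \mathbf{x}_m^2, \mathbf{x}_n^2, \ldots \big]$. The `make_design_matrix` solution already produces exactly that layout, because `x**degree` raises every column of a 2-D input at once. A small illustrative sketch with made-up two-feature inputs (the variable `x_two` is hypothetical, not part of the tutorial data):
###Code
# Illustrative only: three data points with two made-up features each
x_two = np.array([[1., 4.],
                  [2., 5.],
                  [3., 6.]])
print(make_design_matrix(x_two, order=2))
# columns: 1, x_m, x_n, x_m**2, x_n**2
###Output
_____no_output_____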
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models*Estimated timing to here from start of tutorial: 24 min*Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Coding Exercise 2.2: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.3: Evaluating fit quality*Estimated timing to here from start of tutorial: 29 min*As with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:\begin{align}\mathrm{MSE} = \frac 1 N ||\mathbf{y} - \hat{\mathbf{y}}||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 \end{align}where the predicted values for each model are given by $ \hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\hat\theta}$.*Which model (i.e. which polynomial order) do you think will have the best MSE?* Coding Exercise 2.3: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
########################################################################
## TODO for students
# Fill out function and remove
raise NotImplementedError("Student exercise: compute MSE")
########################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
# Visualize MSE of fits
evaluate_fits(order_list, mse_list)
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
# Visualize MSE of fits
with plt.xkcd():
evaluate_fits(order_list, mse_list)
###Output
_____no_output_____
###Markdown
Tutorial 4: Multiple linear regression and polynomial regression**Week 1, Day 3: Model Fitting****By Neuromatch Academy****Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cells response.In this case our model can be writen as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector of the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$. Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
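If you want an independent sanity check of your fitted coefficients, `np.polyfit` solves the same least-squares problem for a 1-D input. Below is a self-contained sketch on freshly simulated toy data (variable names chosen so as not to overwrite the tutorial's `x` and `y`); note that `np.polyfit` returns coefficients with the highest power first, so we reverse them to compare:

```python
import numpy as np

rng = np.random.default_rng(0)
x_toy = rng.uniform(-2, 2, 50)
y_toy = x_toy**2 - x_toy - 2 + 0.1 * rng.standard_normal(50)
order = 2

# Closed-form least squares on a polynomial design matrix, as in the exercises
X_design = np.vander(x_toy, N=order + 1, increasing=True)
theta_hat = np.linalg.inv(X_design.T @ X_design) @ X_design.T @ y_toy

print(theta_hat)                                   # approximately [-2, -1, 1]
print(np.polyfit(x_toy, y_toy, deg=order)[::-1])   # should agree closely
```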
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
# theta_hats = solve_poly_reg(x, y, max_order)
# plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.3: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
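The two expressions for MSE above are the same computation; here is a tiny numerical check (the vectors are made up purely for illustration):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 0.5, -1.0])
y_pred = np.array([0.8, 2.1, 0.2, -0.7])

residuals = y_true - y_pred
mse_mean = np.mean(residuals ** 2)                         # (1/N) * sum of squared residuals
mse_norm = np.linalg.norm(residuals) ** 2 / len(y_true)    # (1/N) * ||y - y_hat||^2
print(np.isclose(mse_mean, mse_norm))   # True
```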
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
############################################################################
# TODO for students: Compute MSE (fill in ... sections)
#############################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Tutorial 4: Multiple linear regression and polynomial regression**Week 1, Day 3: Model Fitting****By Neuromatch Academy****Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
# @title Video 1: Multiple Linear Regression and Polynomial Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11Z4y1u7cf", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ is some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
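To make the matrix form concrete, here is a minimal sketch with made-up numbers (three data points, two features, plus a leading column of ones) showing that $\mathbf{X}\boldsymbol{\theta}$ reproduces the written-out sum $\theta_0 + \theta_1 x_1 + \theta_2 x_2$ row by row; the `_demo` names are only used in this example:

```python
import numpy as np

# Three made-up data points; the first column of ones carries the intercept
X_demo = np.array([[1.0,  0.5, -1.0],
                   [1.0,  2.0,  0.3],
                   [1.0, -0.7,  1.2]])
theta_demo = np.array([0.0, -2.0, -3.0])   # [theta_0, theta_1, theta_2]

y_matrix = X_demo @ theta_demo                                                   # matrix form
y_sum = theta_demo[0] + theta_demo[1] * X_demo[:, 1] + theta_demo[2] * X_demo[:, 2]  # written-out sum
print(np.allclose(y_matrix, y_sum))   # True
```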
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
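Once your `ordinary_least_squares` works, `np.linalg.lstsq` gives an independent way to solve the same least-squares problem without forming an explicit matrix inverse. A self-contained sketch on simulated data, kept separate from the tutorial's `X` and `y`:

```python
import numpy as np

rng = np.random.default_rng(1)
X_demo = np.column_stack([np.ones(40),
                          rng.uniform(-2, 2, 40),
                          rng.uniform(-2, 2, 40)])
y_demo = X_demo @ np.array([0.0, -2.0, -3.0]) + rng.standard_normal(40)

# Normal-equation solution, as in the exercise
theta_normal = np.linalg.inv(X_demo.T @ X_demo) @ X_demo.T @ y_demo

# lstsq solves the same problem via a more numerically stable route
theta_lstsq, *_ = np.linalg.lstsq(X_demo, y_demo, rcond=None)
print(np.allclose(theta_normal, theta_lstsq))   # True
```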
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can control how complex a polynomial we fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third-order polynomial because the highest power to which $x$ is raised is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute $y$ according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real-life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow that we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of $x$, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector of the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$. Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
# theta_hats = solve_poly_reg(x, y, max_order)
# plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.3: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
############################################################################
# TODO for students: Compute MSE (fill in ... sections)
#############################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
[![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D3_ModelFitting/W1D3_Tutorial4.ipynb) Tutorial 4: Multiple linear regression and polynomial regression**Week 1, Day 3: Model Fitting****By Neuromatch Academy****Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 35 minutes*This is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/2mkq4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: Multiple Linear Regression and Polynomial Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11Z4y1u7cf", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
def evaluate_fits(order_list, mse_list):
""" Compare the quality of multiple polynomial fits
by plotting their MSE values.
Args:
order_list (list): list of the order of polynomials to be compared
mse_list (list): list of the MSE values for the corresponding polynomial fit
"""
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression*Estimated timing to here from start of tutorial: 8 min*This video covers linear regression with multiple inputs (more than 1D) and polynomial regression. Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ is some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)".We want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written for a single data point as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or for multiple data points in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}When we refer to $x_{i, j}$, we mean that it is the i-th data point and the j-th feature of that data point.For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
# @markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Coding Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
###Markdown
After filling in this function, you should see that $\boldsymbol{\hat\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\boldsymbol{\hat\theta}$, we can obtain $\hat{\mathbf{y}}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
# @markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial RegressionSo far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression. Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y$ given the input values $x$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:\begin{align}y = \theta_0 + \theta x + \epsilon\end{align}With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:\begin{align}y & = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \epsilon\end{align}We can control how complex a polynomial we fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third-order polynomial because the highest power to which $x$ is raised is 3. We could add another term ($+ \theta_4 x^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute $y$ according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real-life situation.
###Code
# @markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regression*Estimated timing to here from start of tutorial: 16 min*Now that we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. Let's go back to one feature for each data point. For linear regression, we used $\mathbf{X} = \mathbf{x}$ as the input data, where $\mathbf{x}$ is a vector where each element is the input for a single data point. To add a constant bias (a y-intercept in a 2-D plot), we use $\mathbf{X} = \big[ \boldsymbol 1, \mathbf{x} \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). This matrix $\mathbf{X}$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $\mathbf{x}^2, \mathbf{x}^3,$ etc. Thus, we want to build our design matrix $\mathbf{X}$ for polynomial regression of order $k$ as:\begin{align}\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}^1, \mathbf{x}^2 , \ldots , \mathbf{x}^k \big],\end{align}where $\boldsymbol{1}$ is the vector of the same length as $\mathbf{x}$ consisting of all ones, and $\mathbf{x}^p$ is the vector $\mathbf{x}$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = \mathbf{x}^0$ and $\mathbf{x}^1 = \mathbf{x}$. If we have inputs with more than one feature, we can use a similar design matrix but include all features raised to each power. Imagine that we have two features per data point: $\mathbf{x}_m$ is a vector of one feature per data point and $\mathbf{x}_n$ is another. Our design matrix for a polynomial regression would be:\begin{align}\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}_m^1, \mathbf{x}_n^1, \mathbf{x}_m^2 , \mathbf{x}_n^2 , \ldots , \mathbf{x}_m^k , \mathbf{x}_n^k \big],\end{align} Coding Exercise 2.1: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
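As a concrete (hypothetical) illustration of the two-feature layout above, the helper below builds $\big[\boldsymbol 1, \mathbf{x}_m^1, \mathbf{x}_n^1, \ldots\big]$ for made-up inputs. It is not a tutorial function, and, like the layout in the text, it omits cross terms such as $\mathbf{x}_m \mathbf{x}_n$:

```python
import numpy as np

def poly_design_two_features(x_m, x_n, order):
    """Illustrative helper: columns [1, x_m^1, x_n^1, ..., x_m^order, x_n^order]."""
    columns = [np.ones_like(x_m)]
    for p in range(1, order + 1):
        columns.append(x_m ** p)
        columns.append(x_n ** p)
    return np.column_stack(columns)

x_m = np.array([1.0, 2.0])
x_n = np.array([-1.0, 0.5])
print(poly_design_two_features(x_m, x_n, order=2))
# [[ 1.    1.   -1.    1.    1.  ]
#  [ 1.    2.    0.5   4.    0.25]]
```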
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models*Estimated timing to here from start of tutorial: 24 min*Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Coding Exercise 2.2: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.3: Evaluating fit quality*Estimated timing to here from start of tutorial: 29 min*As with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:\begin{align}\mathrm{MSE} = \frac 1 N ||\mathbf{y} - \hat{\mathbf{y}}||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 \end{align}where the predicted values for each model are given by $ \hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\hat\theta}$.*Which model (i.e. which polynomial order) do you think will have the best MSE?* Coding Exercise 2.3: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
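Before running the exercise, here is a self-contained sketch (using `np.polyfit` and `np.polyval` rather than the tutorial's functions, on freshly simulated toy data) of the behaviour the question above is probing: on the same data used for fitting, MSE cannot increase as the order grows, because each higher-order model contains the lower-order ones as special cases:

```python
import numpy as np

rng = np.random.default_rng(2)
x_toy = rng.uniform(-2, 2, 30)
y_toy = x_toy**2 - x_toy - 2 + 0.5 * rng.standard_normal(30)

for order in range(6):
    coeffs = np.polyfit(x_toy, y_toy, deg=order)   # least-squares polynomial fit
    y_hat_toy = np.polyval(coeffs, x_toy)          # evaluate on the *training* inputs
    print(order, np.round(np.mean((y_toy - y_hat_toy) ** 2), 3))
# The printed training MSE is non-increasing with order
```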
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
########################################################################
## TODO for students
# Fill out function and remove
raise NotImplementedError("Student exercise: compute MSE")
########################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
# Visualize MSE of fits
evaluate_fits(order_list, mse_list)
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
# Visualize MSE of fits
with plt.xkcd():
evaluate_fits(order_list, mse_list)
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV11Z4y1u7cf', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV11Z4y1u7cf
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ is some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
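A side note on the matrix inverse: `np.linalg.inv(X.T @ X)` is fine for this small, well-conditioned problem, but the same estimate can be computed without an explicit inverse, which tends to be numerically safer. A self-contained sketch on simulated data, kept separate from the tutorial's `X` and `y`:

```python
import numpy as np

rng = np.random.default_rng(3)
X_demo = np.column_stack([np.ones(40), rng.uniform(-2, 2, (40, 2))])
y_demo = X_demo @ np.array([0.0, -2.0, -3.0]) + rng.standard_normal(40)

theta_inv   = np.linalg.inv(X_demo.T @ X_demo) @ X_demo.T @ y_demo   # as in the exercise
theta_solve = np.linalg.solve(X_demo.T @ X_demo, X_demo.T @ y_demo)  # same normal equations, no inverse
theta_pinv  = np.linalg.pinv(X_demo) @ y_demo                        # pseudoinverse route
print(np.allclose(theta_inv, theta_solve), np.allclose(theta_inv, theta_pinv))   # True True
```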
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
[ 0.13861386 -2.09395731 -3.16370742]
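A quick aside from the editor: the explicit inverse in `np.linalg.inv(X.T @ X)` is fine for this small, well-conditioned problem, but in practice a least-squares solver is the numerically safer route. A minimal sketch (not part of the exercise) that reuses the `X` and `y` simulated above; the names `theta_lstsq` and `theta_solve` are just illustrative:

```python
import numpy as np

# Same least-squares problem, solved without forming an explicit inverse.
# Both results should agree with theta_hat printed above.
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # SVD-based solver
theta_solve = np.linalg.solve(X.T @ X, X.T @ y)      # solve the normal equations

print(theta_lstsq)
print(theta_solve)
```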
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.91
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and the fitted plane. You can see the residuals (green bars) connecting each data point to the fitted plane.Extra: how would you check that the residuals are orthogonal to the fitted plane? (this is sometimes called the orthogonality principle, see [least squares notes](https://www.cns.nyu.edu/~eero/NOTES/leastSquares.pdf))
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
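To answer the "Extra" question above: for the OLS solution, $\mathbf{X}^\top(\mathbf{y} - \hat{\mathbf{y}}) = \mathbf{0}$, i.e. the residual vector is orthogonal to every column of the design matrix. A minimal numerical check (an editor's sketch reusing `X`, `y`, and `theta_hat` from above):

```python
import numpy as np

# Residuals of the OLS fit
residuals = y - X @ theta_hat

# Each regressor (column of X) should be orthogonal to the residual vector,
# up to floating-point precision.
print(X.T @ residuals)                             # values near machine precision
print(np.allclose(X.T @ residuals, 0, atol=1e-8))  # expected: True
```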
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta_1 x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third-order polynomial because the highest power that $x$ is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute $y$ according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real-life situation.
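As a side note, the generating polynomial $y = x^2 - x - 2$ used in the next cell can also be evaluated with `np.polyval`, which expects coefficients ordered from the highest degree down. A self-contained editor's sketch (the name `x_demo` is just for illustration):

```python
import numpy as np

x_demo = np.linspace(-2, 2.5, 5)

# [1, -1, -2] encodes x^2 - x - 2 (highest degree first)
y_poly = np.polyval([1, -1, -2], x_demo)
y_direct = x_demo**2 - x_demo - 2

print(np.allclose(y_poly, y_direct))  # expected: True
```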
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow that we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$. Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
[[ 1. -1.51194917]
[ 1. -0.35259945]]
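For reference, NumPy can build this kind of design matrix in a single call. A small editor's sketch, assuming the `make_design_matrix` solution above has been run (`x_demo` is just an illustrative input):

```python
import numpy as np

x_demo = np.array([-1.5, 0.0, 2.0])
order = 5

# With increasing=True, np.vander stacks columns x^0, x^1, ..., x^order,
# matching make_design_matrix for a 1-D input vector.
X_vander = np.vander(x_demo, N=order + 1, increasing=True)

print(X_vander.shape)                                            # (3, 6)
print(np.allclose(X_vander, make_design_matrix(x_demo, order)))  # expected: True
```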
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta_hat
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
#theta_hats = solve_poly_reg(x, y, max_order)
#plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
X_design = make_design_matrix(x, order)
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
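If you want an independent check of `solve_poly_reg`, `np.polyfit` solves the same least-squares problem for a single polynomial order, returning coefficients from the highest degree down (the reverse of our $\theta$ ordering). A sketch from the editor, reusing `x`, `y`, and `theta_hats` from above:

```python
import numpy as np

order = 2
coeffs = np.polyfit(x, y, deg=order)

# Reverse to match our ordering [theta_0, theta_1, theta_2];
# the two estimates should agree up to numerical precision.
print(coeffs[::-1])
print(theta_hats[order])
```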
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac{1}{N} \|y - \hat y\|^2 = \frac{1}{N} \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
############################################################################
## TODO for students: Compute MSE
#############################################################################
## Uncomment below and fill in with your code
# Get prediction for the polynomial regression model of this order
#y_hat = ...
# Compute the residuals
#residuals = ...
# Compute the MSE
#mse[order] = ...
pass
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse[order] = np.mean(residuals ** 2)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
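If scikit-learn happens to be installed in your environment, its `mean_squared_error` gives an independent check of the values plotted above. An optional editor's sketch, reusing `x`, `y`, `max_order`, `theta_hats`, and `make_design_matrix` from this tutorial:

```python
from sklearn.metrics import mean_squared_error

# Recompute the training MSE for each order; each value should match mse[order]
for order in range(max_order + 1):
    X_design = make_design_matrix(x, order)
    y_hat = X_design @ theta_hats[order]
    print(order, mean_squared_error(y, y_hat))
```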
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_1 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cells response.In this case our model can be writen as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of paramters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the plane.Extra: how would you check that the residuals are orthogonal to the fitted plane? (this is sometimes called the orthogonality principle, see [least squares notes](https://www.cns.nyu.edu/~eero/NOTES/leastSquares.pdf))
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same psuedorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiples with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector the same length as $x$ consisting of of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$ Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
# theta_hats = solve_poly_reg(x, y, max_order)
# plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
############################################################################
# TODO for students: Compute MSE (fill in ... sections)
#############################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=d4nfTki6Ejc
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_1 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cells response.In this case our model can be writen as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of paramters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
[ 0.13861386 -2.09395731 -3.16370742]
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.91
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the plane.Extra: how would you check that the residuals are orthogonal to the fitted plane? (this is sometimes called the orthogonality principle, see [least squares notes](https://www.cns.nyu.edu/~eero/NOTES/leastSquares.pdf))
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same psuedorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiples with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector the same length as $x$ consisting of of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$ Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
[[ 1. -1.51194917]
[ 1. -0.35259945]]
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta_hat
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
#theta_hats = solve_poly_reg(x, y, max_order)
#plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
X_design = make_design_matrix(x, order)
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
############################################################################
## TODO for students: Compute MSE
#############################################################################
## Uncomment below and fill in with your code
# Get prediction for the polynomial regression model of this order
#y_hat = ...
# Compute the residuals
#residuals = ...
# Compute the MSE
#mse[order] = ...
pass
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse[order] = np.mean(residuals ** 2)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Tutorial 4: Multiple linear regression and polynomial regression**Week 1, Day 3: Model Fitting****By Neuromatch Academy****Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
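###Markdown
A brief aside (an addition to the tutorial, not part of the exercise): explicitly inverting $\mathbf{X}^\top\mathbf{X}$ can be numerically fragile when the design matrix is ill-conditioned. The sketch below uses `np.linalg.lstsq`, which solves the same least-squares problem without forming the inverse, and should agree with the OLS estimate above up to numerical precision.
###Code
# Alternative least-squares solve (sketch): no explicit matrix inverse
theta_hat_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat_lstsq)  # should closely match the OLS estimate above
###Output
_____no_output_____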
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
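###Markdown
Besides MSE, a common complementary measure of fit is the coefficient of determination $R^2$, the fraction of variance in $\mathbf{y}$ explained by the model. The cell below is a small added sketch (not part of the original tutorial) that computes it from the predictions above.
###Code
# Coefficient of determination: R^2 = 1 - residual variance / total variance
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
###Output
_____no_output_____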
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest power $x$ is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute $y$ according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real-life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow that we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector of the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$ Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
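###Markdown
As an optional sanity check (an added sketch, not part of the exercise): NumPy's `np.vander` builds the same powers-of-$x$ matrix, and with `increasing=True` its columns should match our hand-built design matrix.
###Code
# Cross-check the hand-built design matrix against np.vander
X_vander = np.vander(x, N=order + 1, increasing=True)
print(np.allclose(X_design, X_vander))  # expected: True
###Output
_____no_output_____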
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
# theta_hats = solve_poly_reg(x, y, max_order)
# plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
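###Markdown
Another optional sanity check (an added sketch, assuming `theta_hats` from the solution above): `np.polyfit` fits the same least-squares polynomial. Its coefficients are returned highest-order first, so we reverse them before comparing with our weights for the highest-order model.
###Code
# Compare our OLS polynomial weights with np.polyfit for the highest order
coeffs = np.polyfit(x, y, deg=max_order)[::-1]  # reverse: lowest order first
print("np.polyfit :", np.round(coeffs, 3))
print("our OLS fit:", np.round(theta_hats[max_order], 3))
###Output
_____no_output_____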
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
############################################################################
# TODO for students: Compute MSE (fill in ... sections)
#############################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
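###Markdown
A caveat before moving on (an added note, not part of the original exercise): the highest-order polynomial will always achieve the lowest MSE *on the data it was fit to*, which does not mean it generalizes best; this is the bias-variance trade-off explored in Tutorials 5 and 6. The sketch below hints at this by refitting each order on a random subset of the data and evaluating MSE on the held-out points.
###Code
# Sketch: compare fits by their MSE on held-out data points
np.random.seed(0)
test_idx = np.random.choice(len(x), size=10, replace=False)
train_idx = np.setdiff1d(np.arange(len(x)), test_idx)
for order in range(max_order + 1):
    X_train = make_design_matrix(x[train_idx], order)
    theta_fit = ordinary_least_squares(X_train, y[train_idx])
    test_pred = make_design_matrix(x[test_idx], order) @ theta_fit
    print(f"order {order}: held-out MSE = {np.mean((y[test_idx] - test_pred) ** 2):.2f}")
###Output
_____no_output_____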
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=d4nfTki6Ejc
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
[ 0.13861386 -2.09395731 -3.16370742]
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.91
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the plane.Extra: how would you check that the residuals are orthogonal to the fitted plane? (this is sometimes called the orthogonality principle, see [least squares notes](https://www.cns.nyu.edu/~eero/NOTES/leastSquares.pdf))
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
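###Markdown
One possible answer to the extra question above (an added sketch, not an official solution): project the residual vector onto the columns of the design matrix. For an OLS fit the normal equations imply $\mathbf{X}^\top(\mathbf{y} - \hat{\mathbf{y}}) = 0$, so each entry should be zero up to floating-point error.
###Code
# Orthogonality principle check (sketch): residuals vs. the columns of X
residuals = y - y_hat
print(X.T @ residuals)  # each entry should be ~0
###Output
_____no_output_____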
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest power $x$ is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute $y$ according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real-life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow that we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector of the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$ Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
[[ 1. -1.51194917]
[ 1. -0.35259945]]
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta_hat
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
#theta_hats = solve_poly_reg(x, y, max_order)
#plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
X_design = make_design_matrix(x, order)
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
############################################################################
## TODO for students: Compute MSE
#############################################################################
## Uncomment below and fill in with your code
# Get prediction for the polynomial regression model of this order
#y_hat = ...
# Compute the residuals
#residuals = ...
# Compute the MSE
#mse[order] = ...
pass
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse[order] = np.mean(residuals ** 2)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=d4nfTki6Ejc
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
# Central Limit Theorem
n_iter = int(1e4)
n_sample = int(1e3)
bino_p = 0.5
sum_array = np.zeros(n_iter)
for idx in range(n_iter):
sum_array[idx] = np.random.randint(0, high=2, size=n_sample).sum()
h = plt.hist(sum_array, bins=30)
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
# Uncomment below to test your function
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
[ 0.13861386 -2.09395731 -3.16370742]
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.91
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the plane.Extra: how would you check that the residuals are orthogonal to the fitted plane? (this is sometimes called the orthogonality principle, see [least squares notes](https://www.cns.nyu.edu/~eero/NOTES/leastSquares.pdf))
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest power $x$ is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute $y$ according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real-life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow that we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector of the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$ Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
design_matrix = np.ones((x.shape[0], order + 1))
# Loop through the remaining degrees and fill each column with x**degree
for degree in range(1, order + 1):
design_matrix[:, degree] = (x ** degree)
return design_matrix
# Uncomment to test your function
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
[[ 1. -1.51194917]
[ 1. -0.35259945]]
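###Markdown
Both constructions above (filling the columns of a preallocated array, and stacking columns with `np.hstack`) build the same matrix. As an added aside, a more compact, vectorized construction uses broadcasting; the sketch below checks that it matches the design matrix just printed.
###Code
# Vectorized design matrix via broadcasting: column p holds x**p
X_broadcast = x[:, None] ** np.arange(order + 1)
print(np.allclose(X_broadcast, X_design))  # expected: True
###Output
_____no_output_____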
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta_hat
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
## Uncomment below and fill in with your code
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ ordinary_least_squares(X_design, y)
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse[order] = np.mean(residuals ** 2)
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse[order] = np.mean(residuals ** 2)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Tutorial 4: Multiple linear regression and polynomial regression**Week 1, Day 3: Model Fitting****By Neuromatch Academy****Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 35 minutes*This is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/2mkq4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: Multiple Linear Regression and Polynomial Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11Z4y1u7cf", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
def evaluate_fits(order_list, mse_list):
""" Compare the quality of multiple polynomial fits
by plotting their MSE values.
Args:
order_list (list): list of the order of polynomials to be compared
mse_list (list): list of the MSE values for the corresponding polynomial fit
"""
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression*Estimated timing to here from start of tutorial: 8 min*This video covers linear regression with multiple inputs (more than 1D) and polynomial regression. Click here for text recap of video Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)".We want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written for a single data point as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or for multiple data points in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}When we refer to $x_{i, j}$, we mean that it is the i-th data point and the j-th feature of that data point.For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
# @markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Coding Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
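###Markdown
Optional sanity check (a minimal sketch, assuming the simulated `X` and `y` from above are still in scope): `np.linalg.lstsq` solves the same least squares problem without explicitly inverting $\mathbf{X}^\top\mathbf{X}$, which is numerically more stable when the design matrix is close to rank-deficient. Its solution should match `ordinary_least_squares` up to floating point error.
###Code
# Cross-check of the OLS estimate via np.linalg.lstsq (a sketch, not part of
# the original exercise). lstsq minimizes ||X @ theta - y||^2 with an SVD-based
# routine instead of forming the explicit inverse of X.T @ X.
theta_lstsq, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(theta_lstsq)
print(np.allclose(theta_lstsq, ordinary_least_squares(X, y)))
###Output
_____no_output_____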
###Markdown
After filling in this function, you should see that $\boldsymbol{\hat\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\boldsymbol{\hat\theta}$, we can obtain $\hat{\mathbf{y}}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
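###Markdown
As a small aside (a sketch, assuming `y` and `y_hat` from the cell above), the MSE can equivalently be written with the Euclidean norm as $\frac{1}{N}\|\mathbf{y} - \hat{\mathbf{y}}\|^2$, which is the form used later when we evaluate the polynomial fits.
###Code
# MSE written via the Euclidean norm (a sketch): (1/N) * ||y - y_hat||^2
# should agree with the elementwise mean of squared residuals.
mse_norm_form = np.linalg.norm(y - y_hat) ** 2 / len(y)
print(np.isclose(mse_norm_form, np.mean((y - y_hat) ** 2)))
###Output
_____no_output_____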
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
# @markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial RegressionSo far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression. Click here for text recap of relevant part of video Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y$ given the input values $x$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:\begin{align}y = \theta_0 + \theta x + \epsilon\end{align}With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:\begin{align}y & = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \epsilon\end{align}We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest power that x is raised to is 3. We could add another term ($+ \theta_4 x^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
# @markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regression*Estimated timing to here from start of tutorial: 16 min*Now we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. Let's go back to one feature for each data point. For linear regression, we used $\mathbf{X} = \mathbf{x}$ as the input data, where $\mathbf{x}$ is a vector where each element is the input for a single data point. To add a constant bias (a y-intercept in a 2-D plot), we use $\mathbf{X} = \big[ \boldsymbol 1, \mathbf{x} \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). This matrix $\mathbf{X}$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $\mathbf{x}^2, \mathbf{x}^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:\begin{align}\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}^1, \mathbf{x}^2 , \ldots , \mathbf{x}^k \big],\end{align}where $\boldsymbol{1}$ is the vector of the same length as $\mathbf{x}$ consisting of all ones, and $\mathbf{x}^p$ is the vector $\mathbf{x}$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = \mathbf{x}^0$ and $\mathbf{x}^1 = \mathbf{x}$. If we have inputs with more than one feature, we can use a similar design matrix but include all features raised to each power. Imagine that we have two features per data point: $\mathbf{x}_m$ is a vector of one feature per data point and $\mathbf{x}_n$ is another. Our design matrix for a polynomial regression would be:\begin{align}\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}_m^1, \mathbf{x}_n^1, \mathbf{x}_m^2 , \mathbf{x}_n^2, \ldots , \mathbf{x}_m^k , \mathbf{x}_n^k \big],\end{align} Coding Exercise 2.1: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
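###Markdown
For reference (a sketch, not the exercise solution; it assumes `x` and `order` from the cell above), the same design matrix can be built in one vectorized step by broadcasting `x` against the exponents $0, 1, \ldots, k$; `np.vander(x, order + 1, increasing=True)` is an equivalent built-in.
###Code
# Vectorized construction of the polynomial design matrix (a sketch).
# Broadcasting an (n, 1) column against the exponent row (order+1,) yields
# columns x**0, x**1, ..., x**order, matching make_design_matrix.
X_broadcast = x[:, None] ** np.arange(order + 1)
print(X_broadcast.shape)
print(np.allclose(X_broadcast, make_design_matrix(x, order)))
print(np.allclose(np.vander(x, order + 1, increasing=True), X_broadcast))
###Output
_____no_output_____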
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models*Estimated timing to here from start of tutorial: 24 min*Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Coding Exercise 2.2: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
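###Markdown
Optional cross-check (a sketch, assuming `x`, `y`, `max_order` and `theta_hats` from above): `np.polyfit` solves the same least squares problem for a single input feature. It returns coefficients from the highest power down to the constant, so they are reversed before comparison with our ordering ($\theta_0$ first).
###Code
# Compare our OLS polynomial weights against np.polyfit (a sketch).
# np.polyfit orders coefficients from highest degree to lowest, so [::-1]
# puts the constant term first, matching theta_hats[order].
for order in range(max_order + 1):
    np_coeffs = np.polyfit(x, y, deg=order)[::-1]
    print(order, np.allclose(np_coeffs, theta_hats[order]))
###Output
_____no_output_____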
###Markdown
Section 2.3: Evaluating fit quality*Estimated timing to here from start of tutorial: 29 min*As with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:\begin{align}\mathrm{MSE} = \frac 1 N ||\mathbf{y} - \hat{\mathbf{y}}||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 \end{align}where the predicted values for each model are given by $ \hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\hat\theta}$.*Which model (i.e. which polynomial order) do you think will have the best MSE?* Coding Exercise 2.3: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
########################################################################
## TODO for students
# Fill out function and remove
raise NotImplementedError("Student exercise: compute MSE")
########################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
# Visualize MSE of fits
evaluate_fits(order_list, mse_list)
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
# Visualize MSE of fits
with plt.xkcd():
evaluate_fits(order_list, mse_list)
###Output
_____no_output_____
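###Markdown
One way to think about the question above (a sketch, assuming `mse_list` from the previous cell): each higher-order model contains every lower-order model as a special case (set the extra weights to zero), so the least squares fit can only match or reduce the training MSE as the order grows. The bar plot therefore keeps decreasing, which is why training MSE alone is not a good criterion for choosing the order; that is the topic of the next tutorials.
###Code
# Training MSE is (approximately) non-increasing in polynomial order (a sketch),
# because the models are nested: higher orders can always reproduce lower ones.
print(np.round(mse_list, 4))
print(np.all(np.diff(mse_list) <= 1e-10))
###Output
_____no_output_____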
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of linear models by generalizing to multiple linear regression (Tutorial 4). We then move on to polynomial regression (Tutorial 5). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 6) and two common methods for model selection, AIC and Cross Validation (Tutorial 7).In this tutorial, we will generalize our linear model to incorporate multiple linear features. - Learn how to structure our inputs for multiple linear regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize our data and model fit in multiple dimensions Setup
###Code
# @title Imports
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = (8, 6)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
Multiple Linear Regression
###Code
#@title Video: Multiple Linear Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="uQjKnlhGEVY", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=uQjKnlhGEVY
###Markdown
Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn now to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta_0 + \theta_1 x\end{align}where $\theta_0$ is the intercept, $\theta_1$ is the slope. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d\end{align}where $d$ is the dimensionality (number of features) in our input.We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to fully explore the multivariate case while still easily visualizing our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation.In this case our model can be written as\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
np.random.seed(121)
theta = [0, -2, -3]
n_samples = 40
n_regressors = len(theta)
x = np.random.uniform(-2, 2, (n_samples, n_regressors))
noise = np.random.randn(n_samples)
y = x @ theta + noise
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_i x_i y_i}{\sum_i x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(x, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
######################################################################
# comment this out when you've filled
raise NotImplementedError("Student excercise: solve for theta_hat vector using OLS")
return theta_hat
# to_remove solution
def ordinary_least_squares(x, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
theta_hat = np.linalg.inv(x.T @ x) @ x.T @ y
return theta_hat
theta_hat = ordinary_least_squares(x, y)
theta_hat
###Output
_____no_output_____
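###Markdown
Equivalently (a sketch, assuming `x` and `y` from the cells above), the OLS solution can be written with the Moore-Penrose pseudoinverse: $\boldsymbol{\hat\theta} = \mathbf{X}^+\mathbf{y}$. `np.linalg.pinv` computes it via an SVD, which avoids explicitly inverting $\mathbf{X}^\top\mathbf{X}$.
###Code
# OLS via the pseudoinverse (a sketch). For a full-column-rank design matrix,
# pinv(x) equals inv(x.T @ x) @ x.T, so this matches ordinary_least_squares.
theta_pinv = np.linalg.pinv(x) @ y
print(theta_pinv)
print(np.allclose(theta_pinv, ordinary_least_squares(x, y)))
###Output
_____no_output_____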
###Markdown
Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
y_hat = x @ theta_hat
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.57
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the hyperplane.
###Code
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(x[:,1], x[:,2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(x)):
ax.plot((x[i, 1], x[i, 1]),
(x[i, 2], x[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV11Z4y1u7cf', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV11Z4y1u7cf
###Markdown
--- Setup
###Code
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("/share/dataset/COMMON/nma.mplstyle.txt")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
[ 0.13861386 -2.09395731 -3.16370742]
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.91
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the plane.Extra: how would you check that the residuals are orthogonal to the fitted plane? (this is sometimes called the orthogonality principle, see [least squares notes](https://www.cns.nyu.edu/~eero/NOTES/leastSquares.pdf))
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
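###Markdown
One possible answer to the extra question above (a sketch, assuming `X`, `y` and `y_hat` from the previous cells): the normal equations imply $\mathbf{X}^\top(\mathbf{y} - \hat{\mathbf{y}}) = \mathbf{0}$, i.e. the residual vector is orthogonal to every column of the design matrix, and this can be verified numerically.
###Code
# Numerical check of the orthogonality principle (a sketch): the residuals
# should be orthogonal to each column of X, so X.T @ residuals is ~0.
residuals = y - y_hat
print(X.T @ residuals)
print(np.allclose(X.T @ residuals, 0))
###Output
_____no_output_____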
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest power that x is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiplies with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector of the same length as $x$ consisting of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$. Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
[[ 1. -1.51194917]
[ 1. -0.35259945]]
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta_hat
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
#theta_hats = solve_poly_reg(x, y, max_order)
#plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
X_design = make_design_matrix(x, order)
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
############################################################################
## TODO for students: Compute MSE
#############################################################################
## Uncomment below and fill in with your code
# Get prediction for the polynomial regression model of this order
#y_hat = ...
# Compute the residuals
#residuals = ...
# Compute the MSE
#mse[order] = ...
pass
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse[order] = np.mean(residuals ** 2)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Table of Contents0.1 Exercise 1: Ordinary Least Squares Estimator1 Section 2.1: Design matrix for polynomial regression1.1 Exercise 2: Structure design matrix2 Section 2.2: Fitting polynomial regression models2.1 Exercise 3: Fitting polynomial regression models with different orders3 Section 2.4: Evaluating fit quality3.1 Exercise 4: Compute MSE and compare models Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression and polynomial regression**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom ---Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
#@title Video 1: Multiple Linear Regression and Polynomial Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=d4nfTki6Ejc
###Markdown
--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.In this case our model can be written as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
#@title
#@markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
# Uncomment below to test your function
# theta_hat = ordinary_least_squares(X, y)
# print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
[ 0.13861386 -2.09395731 -3.16370742]
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.91
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the plane.Extra: how would you check that the residuals are orthogonal to the fitted plane? (this is sometimes called the orthogonality principle, see [least squares notes](https://www.cns.nyu.edu/~eero/NOTES/leastSquares.pdf))
###Code
#@title
#@markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y_{n}$ given the input values $x_{n}$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:$$y_{n}= \theta_0 + \theta x_{n} + \epsilon_{n}$$With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:$$y_{n}= \theta_0 + \theta_1 x_{n} + \theta_2 x_{n}^2 + \theta_3 x_{n}^3 + \epsilon_{n}$$We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x_{n}^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same psuedorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regressionNow we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $X = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $X = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiples with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $X$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:$$X = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],$$where $\boldsymbol{1}$ is the vector the same length as $x$ consisting of of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$ Exercise 2: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
# Uncomment to test your function
order = 5
# X_design = make_design_matrix(x, order)
# print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
[[ 1. -1.51194917]
[ 1. -0.35259945]]
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Exercise 3: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta_hat
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
# Uncomment to test your function
max_order = 5
#theta_hats = solve_poly_reg(x, y, max_order)
#plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys, and np array of theta
# (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
X_design = make_design_matrix(x, order)
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.4: Evaluating fit qualityAs with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:$$ MSE = \frac 1 N ||y - \hat y||^2 = \sum_{i=1}^N (y_i - \hat y_i)^2 $$where the predicted values for each model are given by $ \hat y = X \hat \theta$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* Exercise 4: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
############################################################################
## TODO for students: Compute MSE
#############################################################################
## Uncomment below and fill in with your code
# Get prediction for the polynomial regression model of this order
#y_hat = ...
# Compute the residuals
#residuals = ...
# Compute the MSE
#mse[order] = ...
pass
fig, ax = plt.subplots()
# Uncomment once above exercise is complete
# ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
# to_remove solution
mse = np.zeros((max_order + 1))
for order in range(0, max_order + 1):
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse[order] = np.mean(residuals ** 2)
with plt.xkcd():
fig, ax = plt.subplots()
ax.bar(range(max_order + 1), mse);
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
###Output
_____no_output_____
###Markdown
Tutorial 4: Multiple linear regression and polynomial regression**Week 1, Day 3: Model Fitting****By Neuromatch Academy****Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 35 minutes*This is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).In this tutorial, we will generalize the regression model to incorporate multiple features.- Learn how to structure inputs for regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize data and model fit in multiple dimensions- Fit polynomial regression models of different complexity- Plot and evaluate the polynomial regression fits
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/2mkq4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: Multiple Linear Regression and Polynomial Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11Z4y1u7cf", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
def evaluate_fits(order_list, mse_list):
""" Compare the quality of multiple polynomial fits
by plotting their MSE values.
Args:
order_list (list): list of the order of polynomials to be compared
mse_list (list): list of the MSE values for the corresponding polynomial fit
"""
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
###Output
_____no_output_____
###Markdown
--- Section 1: Multiple Linear Regression*Estimated timing to here from start of tutorial: 8 min*This video covers linear regression with multiple inputs (more than 1D) and polynomial regression. Click here for text recap of video Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta x + \epsilon\end{align}where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon\end{align}where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)".We want to find an optimal vector of paramters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cells response.In this case our model can be writen for a single data point as:\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or for multiple data points in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}When we refer to $x_{i, j}$, we mean that it is the i-th data point and the j-th feature of that data point.For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
# @markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Coding Exercise 1: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
###Output
_____no_output_____
###Markdown
After filling in this function, you should see that $\hat{\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\hat{\mathbf{\theta}}$, we can obtain $\hat{\mathbf{y}}$ and thus our mean squared error.
###Code
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
_____no_output_____
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
###Code
# @markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
--- Section 2: Polynomial RegressionSo far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression. Click here for text recap of relevant part of video Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y$ given the input values $x$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs:\begin{align}y = \theta_0 + \theta x + \epsilon\end{align}With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:\begin{align}y & = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \epsilon\end{align}We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
###Code
# @markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same psuedorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
###Output
_____no_output_____
###Markdown
Section 2.1: Design matrix for polynomial regression*Estimated timing to here from start of tutorial: 16 min*Now we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. For linear regression, we used $\mathbf{X} = x$ as the input data. To add a constant bias (a y-intercept in a 2-D plot), we use $\mathbf{X} = \big[ \boldsymbol 1, x \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiples with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). We also learn a weight for every column, or every feature of x, as we learned in Section 1.This matrix $\mathbf{X}$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $x^2, x^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:\begin{align}\mathbf{X} = \big[ \boldsymbol 1 , x^1, x^2 , \ldots , x^k \big],\end{align}where $\boldsymbol{1}$ is the vector the same length as $x$ consisting of of all ones, and $x^p$ is the vector or matrix $x$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = x^0$ and $x^1 = x$ Coding Exercise 2.1: Structure design matrixCreate a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
###Code
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
###Output
_____no_output_____
###Markdown
You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` Section 2.2: Fitting polynomial regression models*Estimated timing to here from start of tutorial: 24 min*Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. Coding Exercise 2.2: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
###Code
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
###Output
_____no_output_____
###Markdown
Section 2.3: Evaluating fit quality*Estimated timing to here from start of tutorial: 29 min*As with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as:\begin{align}\mathrm{MSE} = \frac 1 N ||\mathbf{y} - \hat{\mathbf{y}}||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 \end{align}where the predicted values for each model are given by $ \hat{\mathbf{y}} = \mathbf{X} \hat \theta$.*Which model (i.e. which polynomial order) do you think will have the best MSE?* Coding Exercise 2.3: Compute MSE and compare modelsWe will compare the MSE for different polynomial orders with a bar plot.
###Code
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
########################################################################
## TODO for students
# Fill out function and remove
raise NotImplementedError("Student exercise: compute MSE")
########################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
# Visualize MSE of fits
evaluate_fits(order_list, mse_list)
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
# Visualize MSE of fits
with plt.xkcd():
evaluate_fits(order_list, mse_list)
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 3, Tutorial 4 Model Fitting: Multiple linear regression Tutorial ObjectivesThis is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of linear models by generalizing to multiple linear regression (Tutorial 4). We then move on to polynomial regression (Tutorial 5). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 6) and two common methods for model selection, AIC and Cross Validation (Tutorial 7).In this tutorial, we will generalize our linear model to incorporate multiple linear features. - Learn how to structure our inputs for multiple linear regression using the 'Design Matrix'- Generalize the MSE for multiple features using the ordinary least squares estimator- Visualize our data and model fit in multiple dimensions Setup
###Code
# @title Imports
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = (8, 6)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
Multiple Linear Regression
###Code
#@title Video: Multiple Linear Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="uQjKnlhGEVY", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=uQjKnlhGEVY
###Markdown
Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn now to the general linear regression case, where we can have more than one regressor, or feature, in our input.Recall that our original univariate linear model was given as\begin{align}y = \theta_0 + \theta_1 x\end{align}where $\theta_0$ is the intercept, $\theta_1$ is the slope. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_1 x_2 + ... +\theta_d x_d\end{align}where $d$ is the dimensionality (number of features) in our input.We can condense this succinctly using vector notation for a single data point\begin{align}y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i\end{align}and fully in matrix form\begin{align}\mathbf{y} = \mathbf{X}\boldsymbol{\theta}\end{align}where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to fully explore the multivariate case while still easily visualizing our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation.In this case our model can be writen as\begin{align}y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon\end{align}or in matrix form where\begin{align}\mathbf{X} = \begin{bmatrix}1 & x_{1,1} & x_{1,2} \\1 & x_{2,1} & x_{2,2} \\\vdots & \vdots & \vdots \\1 & x_{n,1} & x_{n,2}\end{bmatrix}, \boldsymbol{\theta} =\begin{bmatrix}\theta_0 \\\theta_1 \\\theta_2 \\\end{bmatrix}\end{align}For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
###Code
np.random.seed(121)
theta = [0, -2, -3]
n_samples = 40
n_regressors = len(theta)
x = np.random.uniform(-2, 2, (n_samples, n_regressors))
noise = np.random.randn(n_samples)
y = x @ theta + noise
###Output
_____no_output_____
###Markdown
Now that we have our dataset, we want to find an optimal vector of paramters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:\begin{align}\hat\theta = \sum_i \frac{x_i y_i}{x_i^2}.\end{align}The same holds true for the multiple regressor case, only now expressed in matrix form\begin{align}\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.\end{align}This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. Exercise: Ordinary Least Squares EstimatorIn this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
###Code
def ordinary_least_squares(x, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
######################################################################
# comment this out when you've filled
raise NotImplementedError("Student excercise: solve for theta_hat vector using OLS")
return theta_hat
# to_remove solution
def ordinary_least_squares(x, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
theta_hat = np.linalg.inv(x.T @ x) @ x.T @ y
return theta_hat
theta_hat = ordinary_least_squares(x, y)
theta_hat
###Output
_____no_output_____
###Markdown
Now that we have our $\mathbf{\hat\theta}$, we can obtain $\mathbf{\hat y}$ and thus our mean squared error.
###Code
y_hat = x @ theta_hat
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
###Output
MSE = 0.57
###Markdown
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. You can see that the residuals (green bars) are orthogonal to the hyperplane.
###Code
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(x[:,1], x[:,2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(x)):
ax.plot((x[i, 1], x[i, 1]),
(x[i, 2], x[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
###Output
_____no_output_____ |
Slideshow.ipynb | ###Markdown
Teaching Quantum Mechanics with Python and Jupyter Andrew MC Dawes @drdawes github.com/amcdawes/QMlabs
###Code
from qutip import *
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
coherent_dm(5,2)
###Output
_____no_output_____
###Markdown
Simulate a spin-1/2 system in an external magnetic field.
###Code
omega = 2*np.pi
H = 0.707*sigmaz() + 0.707*sigmax()
a=1/np.sqrt(2)
b=a
psi0 = a*basis(2,1) + b*basis(2,0)
t = np.linspace(0,10)
Sz = 1/2*sigmaz()
result1 = sesolve(H,psi0,t,[Sz])
for ex in result1.expect:
plt.plot(omega*t/np.pi,ex)
###Output
_____no_output_____
###Markdown
[//]: ![](https://png.icons8.com/dusk/5x/happy.png) Capitec BBLB__by Marcus Gawronsky__ ![](https://pbs.twimg.com/profile_images/866053673511772160/MrljrN-y_400x400.jpg) [//]: ![](https://png.icons8.com/cotton/3x/goal.png) Aims Accuracy Compatable Scalable [//]: (![](https://png.icons8.com/dusk/4x/maintenance.png)) Tools
###Code
from math import log, pi, cos, sin
import time
import os
import string
import dill
import pandas as pd
from sklearn.pipeline import make_pipeline, make_union
from sklearn import preprocessing
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report, log_loss
from sklearn.model_selection import cross_validate, train_test_split
date = time.strftime("%Y%m%d")
model_name = 'FFNN_Pipeline_'
directory = "./{0}{1}".format(model_name, date)
if not os.path.exists(directory):
os.makedirs(directory)
###Output
_____no_output_____
###Markdown
[//]: ![](https://png.icons8.com/dusk/4x/database.png) Data
###Code
train = pd.read_csv('./train_data.csv')
train['split'] = 'train'
test = pd.read_csv('./test_data.csv')
test['split'] = 'test'
data = pd.concat([train,test],axis=0)
train.amount = train.amount.astype(float)
train.description = train.description.astype(str)
train.transactionDate = pd.to_datetime(train.transactionDate)
test.amount = test.amount.astype(float)
test.description = test.description.astype(str)
test.transactionDate = pd.to_datetime(test.transactionDate)
data.amount = data.amount.astype(float)
data.description = data.description.astype(str)
data.transactionDate = pd.to_datetime(data.transactionDate)
###Output
_____no_output_____
###Markdown
Heterogenous Similarity Natural Language-esque
###Code
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
label = encoder.fit_transform(data.loc[data['split']=='train',:]['category'])
time = (data[data['split']=='train']['transactionDate'].dt.hour + data[data['split']=='train']['transactionDate'].dt.minute/60)/24
day = data[data['split']=='train']['transactionDate'].dt.weekday + time
x = day.apply(lambda x: (1+sin(x*2*pi)/7)*cos(x*2*pi/7))
y = day.apply(lambda x: (1+sin(x*2*pi)/7)*sin(x*2*pi/7))
z = time.apply(lambda x: cos(x*2*pi)/7)
import numpy as np
rand = np.random.randint(low=0, high=x.shape[0], size=1000)
import plotly.plotly as py
import plotly.graph_objs as go
import plotly as plty
plty.tools.set_credentials_file(username='marcussky', api_key='OTp0AS8LVcuURbqr5mTm')
trace1 = go.Scatter3d(
x=x[rand],
y=y[rand],
z=z[rand],
mode='markers',
marker=dict(
size=5,
color=label[rand],
colorscale='Viridis',
opacity=0.7
),
)
Data = [trace1]
fig3d = go.Figure(data=Data)
###Output
_____no_output_____
###Markdown
Time
###Code
py.iplot(fig3d, filename='basic 3d')
###Output
_____no_output_____
###Markdown
Amount
###Code
%matplotlib inline
from math import log
import matplotlib.pyplot as plt
Data = [go.Histogram(x=data.loc[
~(np.abs(data.amount-data.amount.mean())>(3*data.amount.std())),['amount']].apply(
lambda x: (-x))['amount'][rand].tolist())]
layout = go.Layout(
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)',
showlegend=True,
margin=dict(
l=0,
r=0,
b=0,
t=0
)
)
fighist = go.Figure(data=Data)
py.iplot(fighist, filename='basic histogram')
###Output
_____no_output_____
###Markdown
Description![](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAXYAAACHCAMAAAA1OYJfAAAA+VBMVEX///+N01/MzMwAAADOzs75+fmQ2GHKysrj4+NLS0vc3Nzy8vILCwvh4eHW1tbq6upaWlqenp65ubnCwsKK0lqurq48PDxhYWG1tbUgICBmZmYVFRUaGhpJSUlYgzuBgYGKioopKSkwMDBCQkKmpqaWlpZ5eXlvb28uLi70+/B7e3uR02aNjY0lJSXq7udbW1vH67BxqUzH5rM5VSZIazF6t1LG2rocKhKa1HVgkEEkNhnF0b6Z13Hs9ecLEgfG3re33KDW58zV483Y78uj0IZklkOd0Xu326GFxlm44p3i9NbX5c0+XCpsoknF17vc49nK08RMcjO30Kb+Yj2yAAAW6klEQVR4nO1diYLbNpJlC6R5imeTEkXJlEjqiCyl43PaWWc8k4m9vZM4mdn//5itAu8DpOR21tksXxK0GsJR9VCoKoBKi+PqkA7BknsEFDnwP783CVbTx8x+EXSi7Y0vPaix0DZ6q3amLR1WD2GqK0L+y1Qj5DHT6x7RPr83IZHymNlZEBSlHJcnZHYt7aqi99uDZJLlpFUrE2KzeiizyCzE+HPSrniRXPzyObTzcrRX+xpcT7sek6hYyq9M+9mUHjM7CzqpSPU5tDtnYn552r1C2a9Lu6r26vbZANqD4pfPon3zZ6b998KXoP3uy9IuOR6JRAkhFLRLE1Hk67KpuuiIPMsHwLviRGnSPuWxVqg2FHAYHaoMOl8dU8nI61Qd+/JKqw0dRJnUB04HM6CL2JZQlRyg3aAqcgXtCrTV61RCf6epNp1Ocs9E1rF/Jh9KB2qUTTPaG7zVaKdCi5nQhiwHJJBNwELPaFctc0nIeeFWRHf3Ebzlm1bnoou7A2gmb1WlQrt+kmEYLQ4rSZQeeoQQL9S50DT55jArc5+xNt95PvQ9y2GHuUzWOHCwCsW8ZmcueG4+wy6m25TQNkF9cocqYlxE2lXjtAJ1DvuKhRrWbAPtDoukudS8HGugOh0A5BMm1s4706YFG0g7ryZmgLwVIldpF9dANLCxpnmmREo4Ke3CXV5R8K6aWl6378ik3HP+phsXtNtx3uV8KubO6yJdpvPVAZqkmUxYTEcW7dkO+XuHJKtaEd/d5rWnBm1WqeHZoLQvnFzFTbFy+ixvpDWndCoDAGnSrPw99/hAu2/vssrAatO+LYT2sEoNd2fiH9eArURpD2DhZttkAWuTnwBUZMvcOskM6Fi0diGKFYT2fK8RL8ppF8H0glli7+EnyeSAMEKCheusPbLxOmlPE8gEusiJyLsnuU27HdBB5jiw5ub6bVZg0Kck9ElrXH29h5YnqqJAaZc94u8ta0FKn8+DPNrMciwTrac+gBICGVGIA1ig/HRFoqPl2FtkRS5oB97IzHXCqLTXkvYQlzic22vYUBrdYsqKeLW8ncyoQYObJiGtFEDsON2OfFRwWGWLyHTVFZwypd2AKT3qRQywrDTcqGAOHjUvNSRtekraZ0TLJzGaTkYBZVfUHBQTd02uXzayAG/LTS+hNEIq2A5t4sAKphvGAAu+S3eavckrS4gQUotBDSv3jth0Tl9JuFpRqtu+sNeCdhv88zbtA2vtIVmtTOaQLYKFTpBKdy43I1i21zB3sE1PKVXS8olIJt0UZj/Sd5cF1YLZQ7skk5h5KASpDtnAUpDbAMywzCohayFN797MZHIbFU75S+QlVwtmWDRGYCWQVu4EkfYg98mkIpada5uxTu0CyWzRnjfQIxLrmXC7Yibo1bh8wDEzU8CVTmmHRTXzBuB1ZaQT1yevc3toh13ssU6rOEPhA44wsJHpF+YitAVs057PDEa0oVyYFQtXIxI1kkEW7eCa0n2OtBctrMygC9rtM1kVFM9TBVq054ZmeOm2ke6qedC6uQVVcJQFgXZGuxCka5pqfaAt0McUwVXQepwM6MC6rdKrMcEABqVMv1woddEeuEn7OV9UEeIalQYGKic0c9dRoE27oGCCO/cyA0Ha11WxarRvq+ECPF7E9R2XVmSDCujgwbZWjn3u8XNMNpkTp3rktOeE0F88GmTUWhq7HAip2srqOiRMtApBOIuS6hfkRAlhJQPL0KRdzkeGRfSFlIploWISt0Zo0G6czlqea6UbA2jXyu1yzjZcpjCKZCb58OslFWaQdj4gddQDveMTufDE/CalXa3SDgOhHrB7SOk7+jIZjiYkwNXOaW5tWNdSQMHPBpHJOV/QS2gvTqlA+xJpdxoa9tPuUkICf7M5a1kcxUymMClVLsSitKNjrIO7hHaokyuIC09BYfukvMDiswQS9llQ0i5fSztnh3K2xA0nX6c9yvaPDFlzXncl7dTawTcuV1UVG/lTjXYHNqp5SlzbcVyvQnuhjmoWYhW0R1UGMYwPOxmYhlcqqLtd8I9xEcPEZYeT4eIrnQxCEd0FtCF3jdmgqkjlhDxdeiztULeZVFXsyWQweOSRB1P4nHatPFx6WWDLnQwEtZ1UHZ67gHalyzBLKIdsaipfxbcXngfE0Obpom+Lbn0hNe8nnbQmh7xf3TL5LI+gnToZoxZS26jSblQsyolK2kvdVK2M9FSsdesElh6XCkW6aFdbUbSOagSyctrjihjzZRp03SxdQyR9x6UStZXK9SvOa+AbNlkCOUR79317bu0YJJhXtADxQIpHQUaeHqZalLQXzMIMWi2TSUg7/zRMcijS1C7asdeh/aCwwK7UQ9rkClqlM8BDLj1/oKVmCQc6nEtoh9RrXT9zwiFilasArv2Y6ddLu3KuaNVFO9qj1nnZmQ3gkVUuGdB+zgTAA3hJe7HnweT2QipWSjtmvXXr4fBkExS5TyfteMycFXwYYmPhlOJUJqyLU6oCfvlEGwqJnyVX6jE/whvHvssBySk2/KJ1FwH7PcgHhsnSMQZon8YVz9RJuw4WcSxmVSaNJVBX5JxLqxZRCc+RFdpJ9tzPLlQrghnI5BVCCSKdCDrHW3c+tw0G7XSglcWreEVyuoub+TSYu7bWOYE/Ei3IaBfwQnAHK8Sf/GKL4apHoeWe7sihJ5Oxo5mF9+YCHy4rd4SZ0Ee8dxMFYbKGPG6nZvr10o5XQLKFKgoM2gXcnWaCF/DT+UluPTiFDSsjR5DQogAHV+VUcU+CSgJJlmQBbyvbZUWsjHY4+JLN0UGdRGu3TE0P09BgGeAA3bTjgZZoh1he4S34pkk7XnuRaBVDzr4vLn5psnqOV/TCM9eCXkMRPGnw3Re/Ke04ibeS4wPpuHgTVnibF8d4OS5n/A3QTu+8UMXY6A6pmeVqEaq4bN/J0B1NBwD5eHBZQSyjACeZHHLafRjBj1d4G5g7pDJ1Q+9LzqgTvkgtyZgdzktNwwGmvlbSLpMoZ0Yx/SzTP6/aTzrU3ZK+t7GnsVZ8zma7SXv4+3LLCovID/xoL3Hnyu1BQbvmYWKi322yM2DgzZttAMfset8/lpIeckk7acchz4Gm4S2eXrm7hizunAsnxstMxY3cMY
B5wAGQdm4ip+J5NmToqdeUZuSsT+LUqopVhayysIb1IdNp6RXTG7rtuuhkVNstFbVpVQZ+e9ztjmurO7aKa3jTMrB/KbJkhVC7rfdQnTl13RrZtJ4ugQzpkkruGrqG6y7SAUpz4Kqkuut23SpIPKoo4IMyVyweHVbV5ezTESdNuq/hDBzAyaIKzp8YnOBkEsOLOfidBAcoRxTdeSmKhJ124dbuCd2/OyDBN3//T4CNqEOYVW7sRvzOyJ/LGBBv/Z6T74gvigU+w5yfMMklp+HmI74MFsXV53L9tWX5fwTRmnln/PjLafQw/6swFEXXlfYHwkaMGDFixIgRI0aMGDFixIgRI0aMGDFixIgRI0aMGDFixIgRI0aMGDFixIgRI0aMGDFixIgRI0aMGDFixIgRI0aMGDFixIgRI0aMGDFixIgRIy6DChBYRUdrwRAYRftvoX77DfzTXXR9rcs17XE+oVEYKLbR/S2UlbG/aRaM7ygU6s1q4gx/FaKBzaR6YWQF/qHhyWSC/+UFXxQTvi3PNH+fbxetPzX628eb+2fv7pvFx5t398/+3iEptP/47P4+K27u39HiGXR79+7HRlsFpmShi5MfccSPz24+3mbFs6K4/62buG9/uv3pBlqkxe07WtynxZsh1tX/vr3HtpXiXV78J35l5IQJvv0HvQye3bql7T9vbxi4vf1Hh6j/YDW/uX3WbC/1iN1lvS9gEPino7h9989u5n646ehA625uXwzRzt3fZm1bxe1/fYNmc438Epv2Scvaf2TSeNOyXsTfnzHb3zfb91lLl7W/Z459+/FbBu3v2PL/OsS68JFpczdo7XqP/G1r76G9rW0P7Z3W/jOb9pa199HeZe1s2m9Y1v4tm/Zha++hnVr7dfJ/ISdz0+nb2U7m5l2zfZ+T6bL2F+yx73/oZu4Htvi374do596xe/9twNo7nExP63ZIZavaSfuPVziZum+sGUMn7W96xmaF1Hs27cMh9ad+J1O1dp4XRb5UYSikNlr3WfsTRFXuoZDa6NAXUhtSM0NqZeibqiw9IZUp/oUhtbM3dTIVa+edxEpcke+Rv+LbedGF5k5Z0RNSnzx/+svLl68rqnaF1Ipvf3j76uXLp6WsLYucVuSwTqet3bf+XNW3P3x49cvLf1dYuCCkovivKuJfFVJfPwVdPlR4R2uv7FbbspDJgvf+kCpa28S1rELfvpD6hH4/QkXuTmsvM5kn/8b2r0pR2SGVF9ehZa3DeZ+1VGh/8hZleVmhcDikvv4euvzleSnONSH1yS+oS8WE6iEVrMaCzWpvCwV6QyrvgoVBn2KV+pzMk6cfHn6p0j4QUp98evr6bZV2dkjlrdAFOcKTyF5/ruJkYOM9//TXKu3DIfXh7YeH72u0D4fUwsk8+fD29dMq7fWQmhLOi4l1kZOBRcIm82J794ZUcG6vBmmvhtQnT2q03//caJtvUt5Zr/HgnJTmPhBSQZTnF9FeCanQp077NSEVOtdpr4VU3qUOA6xYZDuZwtp5x3KxtVNsjoEEsk778Cm1TjszpPLz0EI57F3Ss0nrCWSD9gtCKvT5/jonUw2pN3Xa6yEVaHdS2vMw2WftvJ3SLm7dfID+U2qd9qGQ2qSdGVJ5N0yXf5cUigydUuu0X3ZKrdN+5Sm1ae1qJaTySUp7xW30hNQ8BqBzZ2nbQ/tASG3RzgypfHJMl/9oFZt06JR6qbWzab/ylNph7dMe2ntCakH71s1pb1l7j5MZPqXWaX/X9O2Fk8msXdyVtA+dUi/07X1O5oqQ2qK9EVIf6WQGTqlXhtQh2pWS9iRzMheG1Itpr51SHxFS206mFlIzM784pCZ/iJBqZyGV+hq2k/njhlRgcHJ5AjmBdnye/lAMh9SK9XaF1B5rZ4dUERJIHrP3nnMb1x9Su74Kjuu19uGQ2mvt1ZA6AcKdiVi69v5TKjh3cC/ONmG71CrtDze/kOcPD4WuQyH14QZovyna95xSkxByGPu4LfQYCqkPN3BcKkVhW3tNnO//8qkU57qQ+nADtFc61y9+wW9YSWLlIXLw4tfdQmuruJTpC6npAblyPzB0Sn2etX+bU8MKqRjUwxP8W17KDITU139Nhy720nBIfXiViZN3Gfbt1VNq1vlT3rvxmIN35q5rX3wVNoHWc6dH2zKkPvnwNEW+5EMh9SFrny8TM6TSKzmrdiU3EFLzoT/lFReE1E9Zn4L2wUym4mReN3ShmUzj4rd6dT148Tu57uK3ev150cVv2f4LX/zWxr4kpDa6fMbFb6Vz4+K3iesec7RDas9jjuGQWkcrpPY9Ar7uMcdlIbWB60JqA/WQ2pb/kc9Sf2aLPXxKreNrPEv9gd3lj/QstR1S2bR3h9Qea2eH1AvWn/u8Z6nsLteE1DaGPjlw1Qc2+kLqZbT3OJmekHoZ7X3PUi85pTZovyaktkBDqihOIGWkxaRWiF0hdVK2rhfipCOkstEdUpnN2yFV7JKBFt0hlS0K08lk7990FJeEVEbXzMkoU13SK4UCxTQt2mZjKJIuKfVCz4rWN1v/8ObFmxfvu4r3L37r+ArJ396n71WLN1nRDKlSOfOUFtOy6LJ2GPvX795g8euLZsEIqcKbosWb7/6DFt+9z4p/DdL+r+/ev8e2RfGmKH5lfOhyxIgRI0aMGDHiDwbd/toS/F9GmfO1/2ehbghzmlGH5Ot827pg/Am+5V3f8flLK7xMH4m4+GP9+9Fu7VpHpRL6yrl2PONrnyfUo1UXwU45RMwOl/Gokjn+WBOwvLLyi4iXYe8p7Df5dPorIJmLR4nzeKjejkn7IrqIdoWQwA9O3FpTDgFJFToFQbD6XObVqWjrcABXFfiJB2tj5tmKzmp+Pe1GuB5oIUwNVXf0VIPpxFFUWocFGoDBtAJlkvZSFdXgHYVFoOgteGXKCZJup41sbW7wIh02pd3QnUnPFocZtmTvJjp3Iv4pWZAQqvYknFveOecdBkyvjwwdXuv46Wux8xovbX0kh5gcBdg+UaxtHIwa2ibyO1QQ7CQRdUq7YVuJQyeR5tZcUZwLTEZltlGixWKzJHucwyXniOwMTorBpCYbApPsCKPfnPge2RtovaEZEZIw2kUkiDZ7Toz8+EAZs4Nw5W1oe0q7ckfiM+lNUozU3E74Q5B9jnM07K4QK31fNJfEX4jwyvV0M5AN4RQHwR3LI+8DS5/iIxDHUab6LFbY1h4Sz/RDnHdqEjMmJ6BEksmdHJsHxv0TQk1cXLL1frdm7CFlRba6HqLeQKCuJOSYku0uYTZVNru7OWSvK3OyQ+v1XIWXI8a2mIC161NOsfWpYqF/sX0/UZQjcVLaBTkSFT0MJmwlgHbql040pO4DfGU7jsPj9Ij5VpFE2QdKLBJtgb8j2U6nscd3jiaRdX1wIHUXd4rvkKOq6kSDFvLZVmHbwau9D69c4vX8f8ySPEPDPOx2m223wSsxmhx/3nHCjqq18KfcFgz9NIvASJcMKz6mFPg8LBYqn5Qeuw41PpYu2IRFtJe4kgL2QtpdglbKkRNbiZz2NJMJAwGdAv0EwazSy
AlcpB3H0cmWTtE95rxw1YIV+8ESNx6D9hl1PEe0P7pWarQwOPrKkHtpN2ecOpN7Yo8SI2uCLHPT1LITMPyJlhiyu/bgl+6uxt0Bf2B4TEOkmm/4JtQ4Dam8uQl8EuHuQE44z1Qp7SFZh2G4JrseV9mifV13fYKSHPczLUTakYwtcXVd5+NFJzNWkHsfUztaidVDu+ylWs7B7tH+1H0sGVRT9RgN0C6EkchuosQhnUDmdI8mCXPU0VsoRFc0Z73pZmMqr/CHCMLY1HiGaHfJbJu43gZpT9IJJUr7jiwotj2mYaSRo6Q9IVW/LbiavN6Gyx3STpsQDxEfO9V28o05pdtBorR7nT5YjtMO88y4hJ13Me2cLhPZYgXelHa0dsmka7vV0Ot6rjflyF4Ou3upCw1/JFSgC2hX9xv8xUTagzW+9GcCpf2SM5AR3dm8VKHdkH1nKulz6p+4abyH8KaDT8xoh9g7VQBS59Aq2adLzFNftEfaT35nHDCp1OjRp7QthLqLnQyUzo5oFsu357SDkYD0grlR0TijUOXkJWFlAyHBXYnhfIh2E7wJZyzQKTk+0u6jCU0xq0HanSzA9ZLvBgTy9vRyYId+TzFJvNpkTlqnm8ElhbXblCMmLDKbi9utKpBD4hxpEsYTedvRZ0szLB9DKrlTs6BhRjpOMUw7er+7TXczxSto5wPPtRfpHqLCJERjpe1K4CX2HnfpAO3Cliy2c7DUo2N5BGlfagtnHuOy0QRyT3a2k5isBDQdxJAkg1NpypaViu1OsruS6cpXVCcqaefuiKWqqsWI8tzcWy7PkGJMoqUfGz42S+Jo2bHyETlakYcaOsTbhsQEqaUzkePIHKJdkATYlvsDg3YTHZywQL8+9fzlJpXVW8EGNpZ79pkD2p6RasdDi1CXLB2N0NssYH5/GYk7GdvP975/wAPSzkRNrQ2MZIpsJQZhe2Tpbc01sKdR7tTdBvIcj30YMLLzmTRw0FV3phlKe8xuxb0ML2mn9X4t7szeBPIOCIp265gklxzE1Z4zwGPaAoxKc7UusjF95B2X7tqKwE/RnWY1/ByqHjdoispBs3q/JfnrHpnVuc0Jzva4P9lf+07sTwMjnIvuXXDB9uz841MjPg/GIThr3mOc4p8B/wPGMHWZQeErCgAAAABJRU5ErkJggg==) Category
###Code
labels = data[data['split']=='train']['category'].value_counts().keys().tolist()
values = data[data['split']=='train']['category'].value_counts().values.tolist()
trace = go.Pie(labels=labels, values=values)
py.iplot([trace], filename='basic_pie_chart')
###Output
_____no_output_____
###Markdown
[//]: ![](https://png.icons8.com/dusk/4x/mind-map.png) Models __Example Pipeline__ pipeline = make_pipeline( * make_union( + make_pipeline(preprocessing.FunctionTransformer(*select_description), CountVectorizer()), + make_pipeline(preprocessing.FunctionTransformer(*select_amount), RobustStandardizer()), + make_pipeline(preprocessing.FunctionTransformer(*select_transactionDate.apply(lambda x: cos(x)))) * ), * make_pipeline(), *MLPClassifier() ) __Example Grid__ from sklearn.linear_model import LogisticRegression params = dict(reduce_dim=[None, PCA(5), PCA(10)]) grid_search = GridSearchCV(pipe.named_steps.makepipeline, param_grid=params) [//]: ![](https://png.icons8.com/dusk/2x/combo-chart.png) Network Models
###Code
encoder = preprocessing.LabelBinarizer()
encoded_y_train = encoder.fit_transform(train['category'])
encoder.classes_
table = str.maketrans({key: None for key in string.punctuation+'0123456789'})
pipeline = make_pipeline(
make_union(
# Description
make_pipeline(
preprocessing.FunctionTransformer(lambda x: x['description'].apply(lambda x: x.lower().translate(table)), validate=False),
make_union(
make_pipeline(
CountVectorizer(strip_accents='ascii', analyzer='char', lowercase=True),
TfidfTransformer(),
),
make_pipeline(
CountVectorizer(strip_accents='unicode', analyzer='word', max_df=0.999,min_df=0.001),
TfidfTransformer(),
),
n_jobs=-1
)
),
make_pipeline(
make_union(
# Amount
make_pipeline(
preprocessing.FunctionTransformer(lambda x: x['amount'].apply(lambda x: log(-x+1)).values.reshape(-1,1), validate=False),
preprocessing.RobustScaler(),
preprocessing.PolynomialFeatures(degree=3, interaction_only=False, include_bias=False)
),
# Codes
make_pipeline(
preprocessing.FunctionTransformer(lambda x: x['code'].apply(lambda x: str(x)), validate=False),
CountVectorizer(),
),
#time
make_pipeline(
preprocessing.FunctionTransformer(lambda x: (x['transactionDate'].dt.weekday + x['transactionDate'].dt.hour/24 + x['transactionDate'].dt.minute/(24*60)), validate=False),
make_union(
make_pipeline(
preprocessing.FunctionTransformer(lambda x: x.apply(lambda x: cos(x)).values.reshape(-1,1), validate=False),
),
make_pipeline(
preprocessing.FunctionTransformer(lambda x: x.apply(lambda x: sin(x)).values.reshape(-1,1), validate=False),
),
),
),
#day
make_pipeline(
preprocessing.FunctionTransformer(lambda x: (x['transactionDate'].dt.hour/24 + x['transactionDate'].dt.minute/(24*60)), validate=False),
make_union(
make_pipeline(
preprocessing.FunctionTransformer(lambda x: x.apply(lambda x: cos(x)/7).values.reshape(-1,1), validate=False),
),
make_pipeline(
preprocessing.FunctionTransformer(lambda x: x.apply(lambda x: sin(x)/7).values.reshape(-1,1), validate=False),
),
),
),
),
),
),
#Classifier
MLPClassifier(hidden_layer_sizes=(50, 50, 50),activation='tanh',solver='adam', verbose=False, early_stopping=False,max_iter=10, alpha=0.0002),
)
###Output
_____no_output_____
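###Markdown
A runnable version of the grid sketched in the Models section above, applied to the pipeline just defined. This is a minimal sketch: the step name `mlpclassifier` follows sklearn's `make_pipeline` naming (the same name used later via `pipeline.named_steps.mlpclassifier`), and the candidate parameter values are illustrative rather than tuned choices.
###Code
from sklearn.model_selection import GridSearchCV
# candidate settings for the final MLP step (illustrative values only)
params = {
    'mlpclassifier__alpha': [1e-4, 2e-4, 1e-3],
    'mlpclassifier__hidden_layer_sizes': [(50, 50, 50), (100, 50)],
}
grid_search = GridSearchCV(pipeline, param_grid=params, cv=3)
grid_search.fit(train.drop(columns=['category']), encoded_y_train)
grid_search.best_params_
###Output
_____no_output_____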
###Markdown
Multilayer Perceptron
###Code
X_train, X_test, y_train, y_test = train_test_split(train.drop(columns=['category']), encoded_y_train, train_size=0.8)
pipeline.fit(X_train, y_train)
pipeline.score(X_test, y_test)
y_pred_proba = pipeline.predict_proba(X_test)
log_loss(y_test, y_pred_proba)
###Output
_____no_output_____
###Markdown
Data reprocessing
###Code
pipeline
transformed_data_test = pipeline.named_steps.featureunion.transform(X_test)
inputs_shape = transformed_data_test.shape[1]
inputs_shape
###Output
_____no_output_____
###Markdown
Keras FFNN Dropout Model
###Code
transformed_data_train = pipeline.named_steps.featureunion.transform(X_train)
from keras.models import Sequential
from keras.layers import Dense, Dropout, Input
from keras.callbacks import TensorBoard, EarlyStopping
from keras.wrappers.scikit_learn import KerasClassifier
def create_model():
model = Sequential()
model.add(Dense(40, input_dim=inputs_shape, activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(30, activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(30, activation='tanh'))
model.add(Dense(12, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
return model
stopping = EarlyStopping(monitor='val_loss', min_delta=0.0005, patience=1, verbose=2, mode='auto')
tensorboard = TensorBoard(log_dir='./{0}{1}/tensorboard_logs'.format(model_name,date), histogram_freq=1, batch_size=50, write_graph=True, write_grads=True)
dropout_model = KerasClassifier(build_fn=create_model, epochs=2,validation_split=0.1, shuffle=True, batch_size=32, verbose=2)
create_model().summary()
dropout_model.fit(transformed_data_train, y_train)
###Output
Train on 192819 samples, validate on 21425 samples
Epoch 1/2
- 57s - loss: 0.0849 - val_loss: 0.0195
Epoch 2/2
- 56s - loss: 0.0216 - val_loss: 0.0170
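###Markdown
For comparison with the sklearn MLP above, the wrapped Keras model can be scored with the same hold-out log loss. A minimal sketch, assuming `transformed_data_test` and `y_test` from the earlier cells.
###Code
# hold-out log loss of the dropout network, matching the metric used for the sklearn MLP
y_pred_proba_nn = dropout_model.predict_proba(transformed_data_test)
log_loss(y_test, y_pred_proba_nn)
###Output
_____no_output_____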
###Markdown
[//]: ![](https://png.icons8.com/cotton/1x/tree.png) Tree Models Boosted Decision Trees
###Code
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(transformed_data_train, encoder.inverse_transform(y_train))
model.feature_importances_[0:10]
model.score(transformed_data_test, encoder.inverse_transform(y_test))
###Output
_____no_output_____
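###Markdown
To put the boosted trees on the same scale as the cross-validation benchmark below, their hold-out log loss can be computed as well. A minimal sketch, assuming `model`, `encoder`, `transformed_data_test`, and `y_test` from the cells above.
###Code
# log loss of the boosted trees, with probabilities aligned to the model's class order
xgb_proba = model.predict_proba(transformed_data_test)
log_loss(encoder.inverse_transform(y_test), xgb_proba, labels=model.classes_)
###Output
_____no_output_____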
###Markdown
Convolutional Models Wide and Deep Character-level CNN![](https://raw.githubusercontent.com/pth1993/NNVLP/master/docs/cnn.png)
###Code
from keras.models import Model
from keras.layers import Conv1D, GlobalMaxPooling1D, MaxPooling1D, Input, Concatenate
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

# Wide input: the fitted feature union from the sklearn pipeline
# (assumes `train`, `encoded_y_train`, `pipeline`, and `stopping` from the cells above)
X = train.drop(columns=['category'])
transformed_data = pipeline.named_steps.featureunion.transform(X)

# Deep input: character-level sequences of the training descriptions,
# so the rows line up with the wide features and the labels
tokenizer = Tokenizer(num_words=26, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True, split=' ', char_level=True, oov_token=None)
tokenizer.fit_on_texts(X['description'].astype(str))
Deep = tokenizer.texts_to_sequences(X['description'].astype(str))
Deep = pad_sequences(Deep, maxlen=40, dtype='int32', padding='pre', truncating='pre', value=0.0)
Deep = to_categorical(Deep)  # one-hot the character indices so Conv1D receives a 3-D input

Deep0 = Input(shape=(Deep.shape[1], Deep.shape[2]))
Deep1 = Conv1D(filters=30, kernel_size=3, strides=1)(Deep0)
Deep2 = MaxPooling1D(pool_size=3)(Deep1)
Deep3 = Conv1D(filters=30, kernel_size=4, strides=1)(Deep2)
Deep4 = GlobalMaxPooling1D()(Deep3)
Deep5 = Dense(units=20, activation='relu', kernel_initializer='he_uniform')(Deep4)
Deep6 = Dropout(rate=0.1)(Deep5)
Wide0 = Input(shape=(transformed_data.shape[1],))
Wide1 = Dense(units=15, activation='relu', kernel_initializer='he_uniform')(Wide0)
Wide2 = Dropout(rate=0.15)(Wide1)
model0 = Concatenate(axis=-1)([Deep6, Wide2])
model1 = Dense(units=25, activation='relu')(model0)
model2 = Dropout(rate=0.15)(model1)
model5 = Dense(units=encoded_y_train.shape[1], activation='softmax')(model2)
model = Model(inputs=[Deep0, Wide0], outputs=model5)
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x=[Deep, transformed_data], y=encoded_y_train, epochs=25, verbose=2, validation_split=0.1, shuffle=True, callbacks=[stopping])
###Output
_____no_output_____
###Markdown
Benchmarks Cross Validation
###Code
X, y = train.drop(columns=['category']), encoded_y_train
scores = cross_validate(pipeline, X, y, scoring='neg_log_loss', cv=10, return_train_score=True)
cv_scores = pd.DataFrame(scores)
cv_scores
###Output
_____no_output_____
###Markdown
Performance
###Code
"Mean Time: {0} | Standard Deviation: {1}".format(cv_scores.fit_time.mean(), cv_scores.fit_time.std())
"Mean Score: {0} | Score Standard Deviation: {1}".format(cv_scores.test_score.mean(), cv_scores.test_score.std())
###Output
_____no_output_____
###Markdown
Classification Report
###Code
y_pred = pipeline.predict(X_test)
print(classification_report(y_test, y_pred, target_names=list(encoder.classes_)))
###Output
precision recall f1-score support
Clothing 1.00 1.00 1.00 5052
Eat_Out 1.00 1.00 1.00 5016
Education 1.00 1.00 1.00 4962
Entertainment 1.00 1.00 1.00 5035
Gifts_Donations 1.00 1.00 1.00 4952
Groceries 0.99 1.00 0.99 5020
Health_Fitness 1.00 0.96 0.98 419
Holiday_Travel 1.00 1.00 1.00 4968
Home 1.00 1.00 1.00 4996
Medical 1.00 1.00 1.00 4954
Pets 1.00 1.00 1.00 3199
Transport 1.00 1.00 1.00 4988
avg / total 1.00 1.00 1.00 53561
###Markdown
[//]: ![](https://png.icons8.com/dusk/4x/factory.png) Production Pickle Online learning Scalable Speed
###Code
%%timeit
predictions = pipeline.predict(test)
###Output
2.88 s ± 34.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Serializing
###Code
with open('./{0}{1}/pipeline.pkl'.format(model_name,date), 'wb') as f:
dill.dump(pipeline, f, dill.HIGHEST_PROTOCOL)
print("File Size: {0} mb".format(os.path.getsize(
'./{0}{1}/pipeline.pkl'.format(model_name,date))/1000000))
with open('./{0}{1}/pipeline.pkl'.format(model_name,date), 'rb') as f:
model = dill.load(f)
###Output
_____no_output_____
###Markdown
Online-Learning Online updates with `partial_fit` work only with early-stopping off.
###Code
all_classes = np.unique(range(0,y_test.shape[1]))
pipeline.named_steps.mlpclassifier.partial_fit(transformed_data_test, y_test, classes=all_classes)
###Output
_____no_output_____
###Markdown
Flask-based REST API
###Code
import numpy as np
from flask import Flask, abort, jsonify, request
import requests,json
app = Flask(__name__)
@app.route('/predict', methods=['POST'])
def predict():
    # build a one-row DataFrame, since the pipeline's transformers index columns by name
    json_ = request.get_json(force=True)
    query = pd.DataFrame([json_])
    # the date arrives as a JSON string; the time features need a datetime dtype
    query['transactionDate'] = pd.to_datetime(query['transactionDate'])
    prediction = pipeline.predict(query)
    # Flask views must return a serializable response, not a raw numpy array
    return jsonify(prediction.tolist())
if __name__ == '__main__':
app.run(port=8080)
###Output
_____no_output_____
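###Markdown
With the server above running, a transaction can be scored over HTTP. A minimal sketch: the port comes from the `app.run` call, the host is assumed to be localhost, and the payload values are made up to mirror the columns the pipeline expects.
###Code
# example client call against the local API (illustrative payload)
sample = {'description': 'COFFEE SHOP 042', 'amount': -3.5,
          'code': 'POS', 'transactionDate': '2018-03-01 08:15:00'}
response = requests.post('http://localhost:8080/predict', json=sample)
response.json()
###Output
_____no_output_____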
###Markdown
Submission
###Code
pipeline.fit(train.drop(columns=['category']), encoded_y_train)
predictions = pipeline.predict_proba(test)
predictions = pd.DataFrame(predictions, columns=encoder.classes_)
submission = pd.concat([test['id'], predictions], axis=1)
submission.to_csv(path_or_buf='./{0}{1}/submission.csv'.format(model_name,date), sep=',', index=False)
###Output
_____no_output_____ |
notebooks/15_missing_data_and_other_opportunities.ipynb | ###Markdown
Chapter 15. Missing Data and Other Opportunities
###Code
import math
import os
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import jax.numpy as jnp
from jax import ops, random, vmap
from jax.scipy.special import expit
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import print_summary
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, init_to_value
if "SVG" in os.environ:
%config InlineBackend.figure_formats = ["svg"]
az.style.use("arviz-darkgrid")
numpyro.set_platform("cpu")
numpyro.set_host_device_count(4)
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake(seed):
pancake = dist.Categorical(logits=jnp.ones(3)).sample(random.PRNGKey(2 * seed))
sides = jnp.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
return random.permutation(random.PRNGKey(2 * seed + 1), sides)
# sim 10,000 pancakes
pancakes = vmap(sim_pancake, out_axes=1)(jnp.arange(10000))
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = jnp.sum(up == 1)
num_11 = jnp.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2
###Code
WaffleDivorce = pd.read_csv("../data/WaffleDivorce.csv", sep=";")
d = WaffleDivorce
# points
ax = az.plot_pair(
d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
scatter_kwargs=dict(ms=15, mfc="none"),
)
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + jnp.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
Code 15.3
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + bA * A + bM * M
D_true = numpyro.sample("D_true", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_true, D_sd), obs=D_obs)
m15_1 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_1.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
m15_1.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
D_true[0] 1.18 0.37 1.18 0.63 1.82 2994.08 1.00
D_true[1] 0.68 0.56 0.68 -0.15 1.63 3886.40 1.00
D_true[2] 0.43 0.35 0.43 -0.10 1.02 3496.83 1.00
D_true[3] 1.41 0.47 1.42 0.65 2.11 3191.36 1.00
D_true[4] -0.90 0.13 -0.90 -1.10 -0.69 4891.79 1.00
D_true[5] 0.65 0.38 0.66 0.07 1.29 3366.77 1.00
D_true[6] -1.37 0.35 -1.37 -1.94 -0.85 3792.56 1.00
D_true[7] -0.35 0.49 -0.33 -1.10 0.44 3352.11 1.00
D_true[8] -1.88 0.60 -1.89 -2.82 -0.90 2520.68 1.00
D_true[9] -0.62 0.16 -0.62 -0.85 -0.32 3877.20 1.00
D_true[10] 0.78 0.28 0.77 0.35 1.22 4248.21 1.00
D_true[11] -0.56 0.49 -0.56 -1.35 0.21 3013.46 1.00
D_true[12] 0.15 0.49 0.17 -0.59 0.97 1573.35 1.00
D_true[13] -0.87 0.23 -0.87 -1.25 -0.50 3840.61 1.00
D_true[14] 0.56 0.31 0.55 0.11 1.09 3881.69 1.00
D_true[15] 0.28 0.38 0.29 -0.35 0.84 3537.87 1.00
D_true[16] 0.50 0.42 0.50 -0.13 1.17 4165.48 1.00
D_true[17] 1.26 0.33 1.26 0.74 1.81 3670.00 1.00
D_true[18] 0.44 0.39 0.43 -0.15 1.07 4188.21 1.00
D_true[19] 0.43 0.52 0.40 -0.40 1.26 2300.86 1.00
D_true[20] -0.55 0.32 -0.56 -1.11 -0.08 4109.26 1.00
D_true[21] -1.09 0.27 -1.09 -1.50 -0.66 3322.54 1.00
D_true[22] -0.26 0.27 -0.27 -0.69 0.16 4368.18 1.00
D_true[23] -1.01 0.28 -1.01 -1.43 -0.55 3460.48 1.00
D_true[24] 0.44 0.41 0.43 -0.20 1.09 3508.90 1.00
D_true[25] -0.03 0.31 -0.02 -0.51 0.49 5159.17 1.00
D_true[26] -0.01 0.51 -0.00 -0.89 0.74 3082.96 1.00
D_true[27] -0.16 0.39 -0.17 -0.79 0.47 4702.21 1.00
D_true[28] -0.26 0.48 -0.28 -1.07 0.49 3698.39 1.00
D_true[29] -1.81 0.24 -1.82 -2.19 -1.40 3468.19 1.00
D_true[30] 0.17 0.44 0.17 -0.50 0.87 4467.52 1.00
D_true[31] -1.66 0.16 -1.66 -1.95 -1.42 3546.08 1.00
D_true[32] 0.12 0.24 0.12 -0.25 0.49 4106.29 1.00
D_true[33] -0.08 0.53 -0.06 -0.87 0.77 2352.92 1.00
D_true[34] -0.12 0.22 -0.12 -0.46 0.23 3803.36 1.00
D_true[35] 1.28 0.43 1.27 0.64 2.01 4563.89 1.00
D_true[36] 0.24 0.36 0.23 -0.31 0.81 4425.27 1.00
D_true[37] -1.02 0.22 -1.02 -1.35 -0.67 4596.58 1.00
D_true[38] -0.93 0.55 -0.94 -1.78 -0.04 3146.40 1.00
D_true[39] -0.68 0.33 -0.68 -1.18 -0.13 4441.57 1.00
D_true[40] 0.24 0.56 0.24 -0.61 1.17 3139.58 1.00
D_true[41] 0.75 0.34 0.75 0.20 1.28 2849.56 1.00
D_true[42] 0.19 0.18 0.19 -0.10 0.46 3765.82 1.00
D_true[43] 0.79 0.42 0.80 0.13 1.46 2594.25 1.00
D_true[44] -0.40 0.51 -0.41 -1.22 0.40 3325.20 1.00
D_true[45] -0.39 0.25 -0.39 -0.78 0.00 3877.89 1.00
D_true[46] 0.15 0.30 0.16 -0.33 0.62 4323.28 1.00
D_true[47] 0.56 0.46 0.56 -0.22 1.24 3581.97 1.00
D_true[48] -0.64 0.27 -0.64 -1.06 -0.22 3759.21 1.00
D_true[49] 0.82 0.62 0.82 -0.14 1.79 3003.50 1.00
a -0.05 0.10 -0.06 -0.21 0.10 2791.80 1.00
bA -0.61 0.16 -0.62 -0.87 -0.37 2064.26 1.00
bM 0.04 0.17 0.04 -0.20 0.33 1884.70 1.00
sigma 0.60 0.11 0.59 0.42 0.76 810.11 1.00
Number of divergences: 0
###Markdown
Code 15.5
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M_obs=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
M_sd=d["Marriage SE"].values / d.Marriage.std(),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M_sd, M_obs, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
M_est = numpyro.sample("M_est", dist.Normal(0, 1).expand([N]))
numpyro.sample("M_obs", dist.Normal(M_est, M_sd), obs=M_obs)
mu = a + bA * A + bM * M_est
D_est = numpyro.sample("D_est", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_est, D_sd), obs=D_obs)
m15_2 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_2.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post = m15_2.get_samples()
D_est = jnp.mean(post["D_est"], 0)
M_est = jnp.mean(post["M_est"], 0)
plt.plot(dlist["M_obs"], dlist["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dlist["M_obs"][i], M_est[i]], [dlist["D_obs"][i], D_est[i]], "k-", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.7
###Code
N = 500
A = dist.Normal().sample(random.PRNGKey(0), (N,))
M = dist.Normal(-A).sample(random.PRNGKey(1))
D = dist.Normal(A).sample(random.PRNGKey(2))
A_obs = dist.Normal(A).sample(random.PRNGKey(3))
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, expit(S)).sample(random.PRNGKey(1))
###Output
_____no_output_____
###Markdown
Code 15.9
###Code
D = dist.Bernoulli(0.5).sample(random.PRNGKey(2), (N,)) # dogs completely random
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.10
###Code
D = jnp.where(S > 0, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.11
###Code
with numpyro.handlers.seed(rng_seed=501):
N = 1000
X = numpyro.sample("X", dist.Normal().expand([N]))
S = numpyro.sample("S", dist.Normal().expand([N]))
H = numpyro.sample("H", dist.Binomial(10, logits=2 + S - 2 * X))
D = jnp.where(X > 1, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
dat_list = dict(H=H, S=S)
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_3 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_3.run(random.PRNGKey(0), **dat_list)
m15_3.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.32 0.03 1.32 1.28 1.36 1473.87 1.00
bS 0.62 0.03 0.62 0.58 0.66 1294.44 1.00
Number of divergences: 0
###Markdown
Code 15.13
###Code
dat_list0 = dict(H=H[D == 0], S=S[D == 0])
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_4 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_4.run(random.PRNGKey(0), **dat_list0)
m15_4.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.92 0.03 1.92 1.86 1.97 1027.57 1.01
bS 0.72 0.03 0.72 0.67 0.78 1039.30 1.00
Number of divergences: 0
###Markdown
Code 15.14
###Code
D = jnp.where(jnp.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, logits=S).sample(random.PRNGKey(1))
D = jnp.where(H < 5, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.16
###Code
milk = pd.read_csv("../data/milk.csv", sep=";")
d = milk
d["neocortex.prop"] = d["neocortex.perc"] / 100
d["logmass"] = d.mass.apply(math.log)
###Output
_____no_output_____
###Markdown
Code 15.17
###Code
dat_list = dict(
K=d["kcal.per.g"].pipe(lambda x: (x - x.mean()) / x.std()).values,
B=d["neocortex.prop"].pipe(lambda x: (x - x.mean()) / x.std()).values,
M=d.logmass.pipe(lambda x: (x - x.mean()) / x.std()).values,
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_5 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_5.run(random.PRNGKey(0), **dat_list)
###Output
_____no_output_____
###Markdown
Code 15.18
###Code
m15_5.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
B_impute[0] -0.59 0.93 -0.61 -2.06 0.86 1655.83 1.00
B_impute[1] -0.72 0.94 -0.73 -2.31 0.72 1612.19 1.00
B_impute[2] -0.74 0.93 -0.76 -2.18 0.83 1767.06 1.00
B_impute[3] -0.31 0.89 -0.33 -1.68 1.14 2108.70 1.00
B_impute[4] 0.45 0.90 0.44 -0.88 1.91 2078.89 1.00
B_impute[5] -0.18 0.93 -0.19 -1.67 1.21 2398.70 1.00
B_impute[6] 0.18 0.89 0.19 -1.24 1.60 2515.01 1.00
B_impute[7] 0.30 0.87 0.30 -1.03 1.69 2381.34 1.00
B_impute[8] 0.51 0.89 0.53 -0.94 1.82 2092.03 1.00
B_impute[9] -0.44 0.92 -0.44 -1.94 0.94 1808.37 1.00
B_impute[10] -0.26 0.89 -0.28 -1.62 1.19 2006.01 1.00
B_impute[11] 0.16 0.89 0.18 -1.32 1.53 2103.16 1.00
a 0.03 0.17 0.04 -0.26 0.27 2079.00 1.00
bB 0.50 0.24 0.50 0.09 0.85 687.56 1.00
bM -0.55 0.20 -0.55 -0.84 -0.20 871.45 1.00
nu -0.05 0.21 -0.05 -0.37 0.29 1669.64 1.00
sigma 0.84 0.14 0.83 0.62 1.04 930.30 1.00
sigma_B 1.01 0.17 0.99 0.76 1.27 1055.17 1.00
Number of divergences: 0
###Markdown
Code 15.19
###Code
obs_idx = d["neocortex.prop"].notnull().values
dat_list_obs = dict(
K=dat_list["K"][obs_idx], B=dat_list["B"][obs_idx], M=dat_list["M"][obs_idx]
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_6 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_6.run(random.PRNGKey(0), **dat_list_obs)
m15_6.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
a 0.10 0.19 0.10 -0.19 0.43 2099.18 1.00
bB 0.61 0.28 0.62 0.15 1.02 1260.84 1.00
bM -0.64 0.25 -0.65 -1.05 -0.27 1227.34 1.00
nu -0.01 0.23 0.00 -0.35 0.37 2052.60 1.00
sigma 0.87 0.19 0.83 0.60 1.13 1141.14 1.00
sigma_B 1.05 0.19 1.03 0.74 1.31 1891.93 1.00
Number of divergences: 0
###Markdown
Code 15.20
###Code
az.plot_forest(
[az.from_numpyro(m15_5), az.from_numpyro(m15_6)],
model_names=["m15.5", "m15.6"],
var_names=["bB", "bM"],
combined=True,
hdi_prob=0.89,
)
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.21
###Code
post = m15_5.get_samples()
B_impute_mu = jnp.mean(post["B_impute"], 0)
B_impute_ci = jnp.percentile(post["B_impute"], q=(5.5, 94.5), axis=0)
# B vs K
plt.plot(dat_list["B"], dat_list["K"], "o")
plt.gca().set(xlabel="neocortex percent (std)", ylabel="kcal mild (std)")
miss_idx = pd.isna(dat_list["B"]).nonzero()[0]
Ki = dat_list["K"][miss_idx]
plt.plot(B_impute_mu, Ki, "ko", mfc="none")
for i in range(12):
plt.plot(B_impute_ci[:, i], jnp.repeat(Ki[i], 2), "k", lw=1)
plt.show()
# M vs B
plt.plot(dat_list["M"], dat_list["B"], "o")
plt.gca().set(xlabel="log body mass (std)", ylabel="neocortex percent (std)")
Mi = dat_list["M"][miss_idx]
plt.plot(Mi, B_impute_mu, "ko", mfc="none")
for i in range(12):
plt.plot(jnp.repeat(Mi[i], 2), B_impute_ci[:, i], "k", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.22
###Code
def model(B, M, K):
# priors
a = numpyro.sample("a", dist.Normal(0, 0.5))
muB = numpyro.sample("muB", dist.Normal(0, 0.5))
muM = numpyro.sample("muM", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
Rho_BM = numpyro.sample("Rho_BM", dist.LKJ(2, 2))
Sigma_BM = numpyro.sample("Sigma_BM", dist.Exponential(1).expand([2]))
# define B_merge as mix of observed and imputed values
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B_merge = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
# M and B correlation
MB = jnp.stack([M, B_merge], axis=1)
cov = jnp.outer(Sigma_BM, Sigma_BM) * Rho_BM
numpyro.sample("MB", dist.MultivariateNormal(jnp.stack([muM, muB]), cov), obs=MB)
# K as function of B and M
mu = a + bB * B_merge + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_7 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_7.run(random.PRNGKey(0), **dat_list)
post = m15_7.get_samples(group_by_chain=True)
print_summary({k: v for k, v in post.items() if k in ["bM", "bB", "Rho_BM"]})
###Output
mean std median 5.0% 95.0% n_eff r_hat
Rho_BM[0,0] 1.00 0.00 1.00 1.00 1.00 nan nan
Rho_BM[0,1] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,0] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,1] 1.00 0.00 1.00 1.00 1.00 1718.78 1.00
bB 0.58 0.26 0.59 0.14 0.98 704.15 1.01
bM -0.64 0.23 -0.64 -1.02 -0.30 942.12 1.00
###Markdown
Code 15.23
###Code
B_missidx = pd.isna(dat_list["B"]).nonzero()[0]
###Output
_____no_output_____
###Markdown
Code 15.24
###Code
Moralizing_gods = pd.read_csv("../data/Moralizing_gods.csv", sep=";")
Moralizing_gods
###Output
_____no_output_____
###Markdown
Code 15.25
###Code
Moralizing_gods.moralizing_gods.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Code 15.26
###Code
symbol = Moralizing_gods.moralizing_gods.apply(lambda x: "." if x == 1 else "o")
symbol[Moralizing_gods.moralizing_gods.isna()] = "x"
color = Moralizing_gods.moralizing_gods.apply(lambda x: "k" if pd.isna(x) else "b")
for pch in ["o", ".", "x"]:
plt.scatter(
Moralizing_gods.year[symbol == pch],
Moralizing_gods.population[symbol == pch],
marker=pch,
color=color[symbol == pch],
facecolor="none" if pch == "o" else None,
lw=1.5,
alpha=0.7,
)
plt.gca().set(xlabel="Time (year)", ylabel="Population size")
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.27
###Code
dmg = Moralizing_gods
dmg.astype(str).groupby(["moralizing_gods", "writing"]).size().unstack(fill_value=0)
###Output
_____no_output_____
###Markdown
Code 15.28
###Code
dmg = Moralizing_gods
haw = dmg.polity == "Big Island Hawaii"
dmg.loc[haw, ["year", "population", "writing", "moralizing_gods"]].T.round(3)
###Output
_____no_output_____
###Markdown
Code 15.29
###Code
with numpyro.handlers.seed(rng_seed=9):
N_houses = 100
alpha = 5
beta = -3
k = 0.5
r = 0.2
cat = numpyro.sample("cat", dist.Bernoulli(k).expand([N_houses]))
notes = numpyro.sample("notes", dist.Poisson(alpha + beta * cat))
R_C = numpyro.sample("R_C", dist.Bernoulli(r).expand([N_houses]))
cat_obs = jnp.where(R_C == 1, -9, cat)
###Output
_____no_output_____
###Markdown
Code 15.30
###Code
dat = dict(notes=notes, cat=cat_obs.copy(), RC=R_C.copy(), N=N_houses - 1)
def model(N, RC, cat, notes):
# priors
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
# cat NA:
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
# cat known present/absent:
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
m15_8 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_8.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.31
###Code
def model(N, RC, cat, notes, link=False):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
if link:
lpC0 = numpyro.deterministic(
"lpC0", jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes)
)
lpC1 = numpyro.deterministic(
"lpC1", jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes)
)
numpyro.deterministic("PrC1", jnp.exp(lpC1) / (jnp.exp(lpC1) + jnp.exp(lpC0)))
m15_9 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_9.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.32
###Code
with numpyro.handlers.seed(rng_seed=100):
x = numpyro.sample("x", dist.Normal().expand([10]))
y = numpyro.sample("y", dist.Normal(x))
x = jnp.concatenate([x, jnp.array([jnp.nan])])
y = jnp.concatenate([y, jnp.array([100])])
d = dict(x=x, y=y)
###Output
_____no_output_____
###Markdown
Code 15.33
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
cc = d.dropna(subset=["brain", "body"]).index
B = d.brain[cc]
M = d.body[cc]
B = B.values / max(B)
M = M.values / max(M)
###Output
_____no_output_____
###Markdown
Code 15.34
###Code
Bse = B * 0.1
Mse = M * 0.1
###Output
_____no_output_____
###Markdown
Code 15.35
###Code
dat_list = dict(B=B, M=M)
def model(M, B):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + b * jnp.log(M)
numpyro.sample("B", dist.LogNormal(mu, sigma), obs=B)
m15H4 = MCMC(NUTS(model), num_warmup=500, num_samples=500)
m15H4.run(random.PRNGKey(0), **dat_list)
###Output
sample: 100%|██████████| 1000/1000 [00:06<00:00, 158.50it/s, 7 steps of size 2.37e-01. acc. prob=0.95]
###Markdown
Code 15.36
###Code
start = dict(M_true=dat_list["M"], B_true=dat_list["B"])
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Code 15.37
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
d.isna().sum()
###Output
_____no_output_____
###Markdown
Code 15.38
###Code
cc = d.dropna(subset=["body"]).index
M = d.body[cc]
M = M.values / max(M)
B = d.brain[cc]
B = B.values / B.max(skipna=True)
###Output
_____no_output_____
###Markdown
Code 15.39
###Code
start = dict(B_impute=jnp.repeat(0.5, 56))
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Chapter 15. Missing Data and Other Opportunities
###Code
import math
import os
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import jax.numpy as jnp
from jax import ops, random, vmap
from jax.scipy.special import expit
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import print_summary
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, init_to_value
if "SVG" in os.environ:
%config InlineBackend.figure_formats = ["svg"]
az.style.use("arviz-darkgrid")
numpyro.set_host_device_count(4)
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake(seed):
pancake = dist.Categorical(logits=jnp.ones(3)).sample(random.PRNGKey(2 * seed))
sides = jnp.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
return random.permutation(random.PRNGKey(2 * seed + 1), sides)
# sim 10,000 pancakes
pancakes = vmap(sim_pancake, out_axes=1)(jnp.arange(10000))
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = jnp.sum(up == 1)
num_11 = jnp.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2
###Code
WaffleDivorce = pd.read_csv("../data/WaffleDivorce.csv", sep=";")
d = WaffleDivorce
# points
ax = az.plot_pair(
d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
scatter_kwargs=dict(ms=15, mfc="none"),
)
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + jnp.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
Code 15.3
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + bA * A + bM * M
D_true = numpyro.sample("D_true", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_true, D_sd), obs=D_obs)
m15_1 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_1.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
m15_1.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
D_true[0] 1.18 0.37 1.18 0.63 1.82 2994.08 1.00
D_true[1] 0.68 0.56 0.68 -0.15 1.63 3886.40 1.00
D_true[2] 0.43 0.35 0.43 -0.10 1.02 3496.83 1.00
D_true[3] 1.41 0.47 1.42 0.65 2.11 3191.36 1.00
D_true[4] -0.90 0.13 -0.90 -1.10 -0.69 4891.79 1.00
D_true[5] 0.65 0.38 0.66 0.07 1.29 3366.77 1.00
D_true[6] -1.37 0.35 -1.37 -1.94 -0.85 3792.56 1.00
D_true[7] -0.35 0.49 -0.33 -1.10 0.44 3352.11 1.00
D_true[8] -1.88 0.60 -1.89 -2.82 -0.90 2520.68 1.00
D_true[9] -0.62 0.16 -0.62 -0.85 -0.32 3877.20 1.00
D_true[10] 0.78 0.28 0.77 0.35 1.22 4248.21 1.00
D_true[11] -0.56 0.49 -0.56 -1.35 0.21 3013.46 1.00
D_true[12] 0.15 0.49 0.17 -0.59 0.97 1573.35 1.00
D_true[13] -0.87 0.23 -0.87 -1.25 -0.50 3840.61 1.00
D_true[14] 0.56 0.31 0.55 0.11 1.09 3881.69 1.00
D_true[15] 0.28 0.38 0.29 -0.35 0.84 3537.87 1.00
D_true[16] 0.50 0.42 0.50 -0.13 1.17 4165.48 1.00
D_true[17] 1.26 0.33 1.26 0.74 1.81 3670.00 1.00
D_true[18] 0.44 0.39 0.43 -0.15 1.07 4188.21 1.00
D_true[19] 0.43 0.52 0.40 -0.40 1.26 2300.86 1.00
D_true[20] -0.55 0.32 -0.56 -1.11 -0.08 4109.26 1.00
D_true[21] -1.09 0.27 -1.09 -1.50 -0.66 3322.54 1.00
D_true[22] -0.26 0.27 -0.27 -0.69 0.16 4368.18 1.00
D_true[23] -1.01 0.28 -1.01 -1.43 -0.55 3460.48 1.00
D_true[24] 0.44 0.41 0.43 -0.20 1.09 3508.90 1.00
D_true[25] -0.03 0.31 -0.02 -0.51 0.49 5159.17 1.00
D_true[26] -0.01 0.51 -0.00 -0.89 0.74 3082.96 1.00
D_true[27] -0.16 0.39 -0.17 -0.79 0.47 4702.21 1.00
D_true[28] -0.26 0.48 -0.28 -1.07 0.49 3698.39 1.00
D_true[29] -1.81 0.24 -1.82 -2.19 -1.40 3468.19 1.00
D_true[30] 0.17 0.44 0.17 -0.50 0.87 4467.52 1.00
D_true[31] -1.66 0.16 -1.66 -1.95 -1.42 3546.08 1.00
D_true[32] 0.12 0.24 0.12 -0.25 0.49 4106.29 1.00
D_true[33] -0.08 0.53 -0.06 -0.87 0.77 2352.92 1.00
D_true[34] -0.12 0.22 -0.12 -0.46 0.23 3803.36 1.00
D_true[35] 1.28 0.43 1.27 0.64 2.01 4563.89 1.00
D_true[36] 0.24 0.36 0.23 -0.31 0.81 4425.27 1.00
D_true[37] -1.02 0.22 -1.02 -1.35 -0.67 4596.58 1.00
D_true[38] -0.93 0.55 -0.94 -1.78 -0.04 3146.40 1.00
D_true[39] -0.68 0.33 -0.68 -1.18 -0.13 4441.57 1.00
D_true[40] 0.24 0.56 0.24 -0.61 1.17 3139.58 1.00
D_true[41] 0.75 0.34 0.75 0.20 1.28 2849.56 1.00
D_true[42] 0.19 0.18 0.19 -0.10 0.46 3765.82 1.00
D_true[43] 0.79 0.42 0.80 0.13 1.46 2594.25 1.00
D_true[44] -0.40 0.51 -0.41 -1.22 0.40 3325.20 1.00
D_true[45] -0.39 0.25 -0.39 -0.78 0.00 3877.89 1.00
D_true[46] 0.15 0.30 0.16 -0.33 0.62 4323.28 1.00
D_true[47] 0.56 0.46 0.56 -0.22 1.24 3581.97 1.00
D_true[48] -0.64 0.27 -0.64 -1.06 -0.22 3759.21 1.00
D_true[49] 0.82 0.62 0.82 -0.14 1.79 3003.50 1.00
a -0.05 0.10 -0.06 -0.21 0.10 2791.80 1.00
bA -0.61 0.16 -0.62 -0.87 -0.37 2064.26 1.00
bM 0.04 0.17 0.04 -0.20 0.33 1884.70 1.00
sigma 0.60 0.11 0.59 0.42 0.76 810.11 1.00
Number of divergences: 0
###Markdown
Code 15.5
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M_obs=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
M_sd=d["Marriage SE"].values / d.Marriage.std(),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M_sd, M_obs, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
M_est = numpyro.sample("M_est", dist.Normal(0, 1).expand([N]))
numpyro.sample("M_obs", dist.Normal(M_est, M_sd), obs=M_obs)
mu = a + bA * A + bM * M_est
D_est = numpyro.sample("D_est", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_est, D_sd), obs=D_obs)
m15_2 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_2.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post = m15_2.get_samples()
D_est = jnp.mean(post["D_est"], 0)
M_est = jnp.mean(post["M_est"], 0)
plt.plot(dlist["M_obs"], dlist["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dlist["M_obs"][i], M_est[i]], [dlist["D_obs"][i], D_est[i]], "k-", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.7
###Code
N = 500
A = dist.Normal().sample(random.PRNGKey(0), (N,))
M = dist.Normal(-A).sample(random.PRNGKey(1))
D = dist.Normal(A).sample(random.PRNGKey(2))
A_obs = dist.Normal(A).sample(random.PRNGKey(3))
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, expit(S)).sample(random.PRNGKey(1))
###Output
_____no_output_____
###Markdown
Code 15.9
###Code
D = dist.Bernoulli(0.5).sample(random.PRNGKey(2), (N,)) # dogs completely random
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.10
###Code
D = jnp.where(S > 0, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.11
###Code
with numpyro.handlers.seed(rng_seed=501):
N = 1000
X = numpyro.sample("X", dist.Normal().expand([N]))
S = numpyro.sample("S", dist.Normal().expand([N]))
H = numpyro.sample("H", dist.Binomial(10, logits=2 + S - 2 * X))
D = jnp.where(X > 1, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
dat_list = dict(H=H, S=S)
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_3 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_3.run(random.PRNGKey(0), **dat_list)
m15_3.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.32 0.03 1.32 1.28 1.36 1473.87 1.00
bS 0.62 0.03 0.62 0.58 0.66 1294.44 1.00
Number of divergences: 0
###Markdown
Code 15.13
###Code
dat_list0 = dict(H=H[D == 0], S=S[D == 0])
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_4 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_4.run(random.PRNGKey(0), **dat_list0)
m15_4.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.92 0.03 1.92 1.86 1.97 1027.57 1.01
bS 0.72 0.03 0.72 0.67 0.78 1039.30 1.00
Number of divergences: 0
###Markdown
Code 15.14
###Code
D = jnp.where(jnp.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, logits=S).sample(random.PRNGKey(1))
D = jnp.where(H < 5, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.16
###Code
milk = pd.read_csv("../data/milk.csv", sep=";")
d = milk
d["neocortex.prop"] = d["neocortex.perc"] / 100
d["logmass"] = d.mass.apply(math.log)
###Output
_____no_output_____
###Markdown
Code 15.17
###Code
dat_list = dict(
K=d["kcal.per.g"].pipe(lambda x: (x - x.mean()) / x.std()).values,
B=d["neocortex.prop"].pipe(lambda x: (x - x.mean()) / x.std()).values,
M=d.logmass.pipe(lambda x: (x - x.mean()) / x.std()).values,
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_5 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_5.run(random.PRNGKey(0), **dat_list)
###Output
_____no_output_____
###Markdown
Code 15.18
###Code
m15_5.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
B_impute[0] -0.59 0.93 -0.61 -2.06 0.86 1655.83 1.00
B_impute[1] -0.72 0.94 -0.73 -2.31 0.72 1612.19 1.00
B_impute[2] -0.74 0.93 -0.76 -2.18 0.83 1767.06 1.00
B_impute[3] -0.31 0.89 -0.33 -1.68 1.14 2108.70 1.00
B_impute[4] 0.45 0.90 0.44 -0.88 1.91 2078.89 1.00
B_impute[5] -0.18 0.93 -0.19 -1.67 1.21 2398.70 1.00
B_impute[6] 0.18 0.89 0.19 -1.24 1.60 2515.01 1.00
B_impute[7] 0.30 0.87 0.30 -1.03 1.69 2381.34 1.00
B_impute[8] 0.51 0.89 0.53 -0.94 1.82 2092.03 1.00
B_impute[9] -0.44 0.92 -0.44 -1.94 0.94 1808.37 1.00
B_impute[10] -0.26 0.89 -0.28 -1.62 1.19 2006.01 1.00
B_impute[11] 0.16 0.89 0.18 -1.32 1.53 2103.16 1.00
a 0.03 0.17 0.04 -0.26 0.27 2079.00 1.00
bB 0.50 0.24 0.50 0.09 0.85 687.56 1.00
bM -0.55 0.20 -0.55 -0.84 -0.20 871.45 1.00
nu -0.05 0.21 -0.05 -0.37 0.29 1669.64 1.00
sigma 0.84 0.14 0.83 0.62 1.04 930.30 1.00
sigma_B 1.01 0.17 0.99 0.76 1.27 1055.17 1.00
Number of divergences: 0
###Markdown
Code 15.19
###Code
obs_idx = d["neocortex.prop"].notnull().values
dat_list_obs = dict(
K=dat_list["K"][obs_idx], B=dat_list["B"][obs_idx], M=dat_list["M"][obs_idx]
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_6 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_6.run(random.PRNGKey(0), **dat_list_obs)
m15_6.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
a 0.10 0.19 0.10 -0.19 0.43 2099.18 1.00
bB 0.61 0.28 0.62 0.15 1.02 1260.84 1.00
bM -0.64 0.25 -0.65 -1.05 -0.27 1227.34 1.00
nu -0.01 0.23 0.00 -0.35 0.37 2052.60 1.00
sigma 0.87 0.19 0.83 0.60 1.13 1141.14 1.00
sigma_B 1.05 0.19 1.03 0.74 1.31 1891.93 1.00
Number of divergences: 0
###Markdown
Code 15.20
###Code
az.plot_forest(
[az.from_numpyro(m15_5), az.from_numpyro(m15_6)],
model_names=["m15.5", "m15.6"],
var_names=["bB", "bM"],
combined=True,
hdi_prob=0.89,
)
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.21
###Code
post = m15_5.get_samples()
B_impute_mu = jnp.mean(post["B_impute"], 0)
B_impute_ci = jnp.percentile(post["B_impute"], q=(5.5, 94.5), axis=0)
# B vs K
plt.plot(dat_list["B"], dat_list["K"], "o")
plt.gca().set(xlabel="neocortex percent (std)", ylabel="kcal mild (std)")
miss_idx = pd.isna(dat_list["B"]).nonzero()[0]
Ki = dat_list["K"][miss_idx]
plt.plot(B_impute_mu, Ki, "ko", mfc="none")
for i in range(12):
plt.plot(B_impute_ci[:, i], jnp.repeat(Ki[i], 2), "k", lw=1)
plt.show()
# M vs B
plt.plot(dat_list["M"], dat_list["B"], "o")
plt.gca().set(xlabel="log body mass (std)", ylabel="neocortex percent (std)")
Mi = dat_list["M"][miss_idx]
plt.plot(Mi, B_impute_mu, "ko", mfc="none")
for i in range(12):
plt.plot(jnp.repeat(Mi[i], 2), B_impute_ci[:, i], "k", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.22
###Code
def model(B, M, K):
# priors
a = numpyro.sample("a", dist.Normal(0, 0.5))
muB = numpyro.sample("muB", dist.Normal(0, 0.5))
muM = numpyro.sample("muM", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
Rho_BM = numpyro.sample("Rho_BM", dist.LKJ(2, 2))
Sigma_BM = numpyro.sample("Sigma_BM", dist.Exponential(1).expand([2]))
# define B_merge as mix of observed and imputed values
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B_merge = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
# M and B correlation
MB = jnp.stack([M, B_merge], axis=1)
cov = jnp.outer(Sigma_BM, Sigma_BM) * Rho_BM
numpyro.sample("MB", dist.MultivariateNormal(jnp.stack([muM, muB]), cov), obs=MB)
# K as function of B and M
mu = a + bB * B_merge + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_7 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_7.run(random.PRNGKey(0), **dat_list)
post = m15_7.get_samples(group_by_chain=True)
print_summary({k: v for k, v in post.items() if k in ["bM", "bB", "Rho_BM"]})
###Output
mean std median 5.0% 95.0% n_eff r_hat
Rho_BM[0,0] 1.00 0.00 1.00 1.00 1.00 nan nan
Rho_BM[0,1] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,0] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,1] 1.00 0.00 1.00 1.00 1.00 1718.78 1.00
bB 0.58 0.26 0.59 0.14 0.98 704.15 1.01
bM -0.64 0.23 -0.64 -1.02 -0.30 942.12 1.00
###Markdown
Code 15.23
###Code
B_missidx = pd.isna(dat_list["B"]).nonzero()[0]
###Output
_____no_output_____
###Markdown
Code 15.24
###Code
Moralizing_gods = pd.read_csv("../data/Moralizing_gods.csv", sep=";")
Moralizing_gods
###Output
_____no_output_____
###Markdown
Code 15.25
###Code
Moralizing_gods.moralizing_gods.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Code 15.26
###Code
symbol = Moralizing_gods.moralizing_gods.apply(lambda x: "." if x == 1 else "o")
symbol[Moralizing_gods.moralizing_gods.isna()] = "x"
color = Moralizing_gods.moralizing_gods.apply(lambda x: "k" if pd.isna(x) else "b")
for pch in ["o", ".", "x"]:
plt.scatter(
Moralizing_gods.year[symbol == pch],
Moralizing_gods.population[symbol == pch],
marker=pch,
color=color[symbol == pch],
facecolor="none" if pch == "o" else None,
lw=1.5,
alpha=0.7,
)
plt.gca().set(xlabel="Time (year)", ylabel="Population size")
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.27
###Code
dmg = Moralizing_gods
dmg.astype(str).groupby(["moralizing_gods", "writing"]).size().unstack(fill_value=0)
###Output
_____no_output_____
###Markdown
Code 15.28
###Code
dmg = Moralizing_gods
haw = dmg.polity == "Big Island Hawaii"
dmg.loc[haw, ["year", "population", "writing", "moralizing_gods"]].T.round(3)
###Output
_____no_output_____
###Markdown
Code 15.29
###Code
with numpyro.handlers.seed(rng_seed=9):
N_houses = 100
alpha = 5
beta = -3
k = 0.5
r = 0.2
cat = numpyro.sample("cat", dist.Bernoulli(k).expand([N_houses]))
notes = numpyro.sample("notes", dist.Poisson(alpha + beta * cat))
R_C = numpyro.sample("R_C", dist.Bernoulli(r).expand([N_houses]))
cat_obs = jnp.where(R_C == 1, -9, cat)
###Output
_____no_output_____
###Markdown
Code 15.30
###Code
dat = dict(notes=notes, cat=cat_obs.copy(), RC=R_C.copy(), N=N_houses - 1)
def model(N, RC, cat, notes):
# priors
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
# cat NA:
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
# cat known present/absent:
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
m15_8 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_8.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.31
###Code
def model(N, RC, cat, notes, link=False):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
if link:
lpC0 = numpyro.deterministic(
"lpC0", jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes)
)
lpC1 = numpyro.deterministic(
"lpC1", jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes)
)
numpyro.deterministic("PrC1", jnp.exp(lpC1) / (jnp.exp(lpC1) + jnp.exp(lpC0)))
m15_9 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_9.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.32
###Code
with numpyro.handlers.seed(rng_seed=100):
x = numpyro.sample("x", dist.Normal().expand([10]))
y = numpyro.sample("y", dist.Normal(x))
x = jnp.concatenate([x, jnp.array([jnp.nan])])
y = jnp.concatenate([y, jnp.array([100])])
d = dict(x=x, y=y)
###Output
_____no_output_____
###Markdown
Code 15.33
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
cc = d.dropna(subset=["brain", "body"]).index
B = d.brain[cc]
M = d.body[cc]
B = B.values / max(B)
M = M.values / max(M)
###Output
_____no_output_____
###Markdown
Code 15.34
###Code
Bse = B * 0.1
Mse = M * 0.1
###Output
_____no_output_____
###Markdown
Code 15.35
###Code
dat_list = dict(B=B, M=M)
def model(M, B):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + b * jnp.log(M)
numpyro.sample("B", dist.LogNormal(mu, sigma), obs=B)
m15H4 = MCMC(NUTS(model), 500, 500)
m15H4.run(random.PRNGKey(0), **dat_list)
###Output
sample: 100%|██████████| 1000/1000 [00:06<00:00, 158.50it/s, 7 steps of size 2.37e-01. acc. prob=0.95]
###Markdown
Code 15.36
###Code
start = dict(M_true=dat_list["M"], B_true=dat_list["B"])
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Code 15.37
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
d.isna().sum()
###Output
_____no_output_____
###Markdown
Code 15.38
###Code
cc = d.dropna(subset=["body"]).index
M = d.body[cc]
M = M.values / max(M)
B = d.brain[cc]
B = B.values / B.max(skipna=True)
###Output
_____no_output_____
###Markdown
Code 15.39
###Code
start = dict(B_impute=jnp.repeat(0.5, 56))
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Chapter 15. Missing Data and Other Opportunities
###Code
!pip install -q numpyro arviz causalgraphicalmodels daft
import math
import os
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import jax.numpy as jnp
from jax import ops, random, vmap
from jax.scipy.special import expit
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import print_summary
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, init_to_value
if "SVG" in os.environ:
%config InlineBackend.figure_formats = ["svg"]
az.style.use("arviz-darkgrid")
numpyro.set_platform("cpu")
numpyro.set_host_device_count(4)
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake(seed):
pancake = dist.Categorical(logits=jnp.ones(3)).sample(random.PRNGKey(2 * seed))
sides = jnp.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
return random.permutation(random.PRNGKey(2 * seed + 1), sides)
# sim 10,000 pancakes
pancakes = vmap(sim_pancake, out_axes=1)(jnp.arange(10000))
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = jnp.sum(up == 1)
num_11 = jnp.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2
###Code
WaffleDivorce = pd.read_csv("../data/WaffleDivorce.csv", sep=";")
d = WaffleDivorce
# points
ax = az.plot_pair(
d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
scatter_kwargs=dict(ms=15, mfc="none"),
)
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + jnp.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
Code 15.3
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + bA * A + bM * M
D_true = numpyro.sample("D_true", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_true, D_sd), obs=D_obs)
m15_1 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_1.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
m15_1.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
D_true[0] 1.18 0.37 1.18 0.63 1.82 2994.08 1.00
D_true[1] 0.68 0.56 0.68 -0.15 1.63 3886.40 1.00
D_true[2] 0.43 0.35 0.43 -0.10 1.02 3496.83 1.00
D_true[3] 1.41 0.47 1.42 0.65 2.11 3191.36 1.00
D_true[4] -0.90 0.13 -0.90 -1.10 -0.69 4891.79 1.00
D_true[5] 0.65 0.38 0.66 0.07 1.29 3366.77 1.00
D_true[6] -1.37 0.35 -1.37 -1.94 -0.85 3792.56 1.00
D_true[7] -0.35 0.49 -0.33 -1.10 0.44 3352.11 1.00
D_true[8] -1.88 0.60 -1.89 -2.82 -0.90 2520.68 1.00
D_true[9] -0.62 0.16 -0.62 -0.85 -0.32 3877.20 1.00
D_true[10] 0.78 0.28 0.77 0.35 1.22 4248.21 1.00
D_true[11] -0.56 0.49 -0.56 -1.35 0.21 3013.46 1.00
D_true[12] 0.15 0.49 0.17 -0.59 0.97 1573.35 1.00
D_true[13] -0.87 0.23 -0.87 -1.25 -0.50 3840.61 1.00
D_true[14] 0.56 0.31 0.55 0.11 1.09 3881.69 1.00
D_true[15] 0.28 0.38 0.29 -0.35 0.84 3537.87 1.00
D_true[16] 0.50 0.42 0.50 -0.13 1.17 4165.48 1.00
D_true[17] 1.26 0.33 1.26 0.74 1.81 3670.00 1.00
D_true[18] 0.44 0.39 0.43 -0.15 1.07 4188.21 1.00
D_true[19] 0.43 0.52 0.40 -0.40 1.26 2300.86 1.00
D_true[20] -0.55 0.32 -0.56 -1.11 -0.08 4109.26 1.00
D_true[21] -1.09 0.27 -1.09 -1.50 -0.66 3322.54 1.00
D_true[22] -0.26 0.27 -0.27 -0.69 0.16 4368.18 1.00
D_true[23] -1.01 0.28 -1.01 -1.43 -0.55 3460.48 1.00
D_true[24] 0.44 0.41 0.43 -0.20 1.09 3508.90 1.00
D_true[25] -0.03 0.31 -0.02 -0.51 0.49 5159.17 1.00
D_true[26] -0.01 0.51 -0.00 -0.89 0.74 3082.96 1.00
D_true[27] -0.16 0.39 -0.17 -0.79 0.47 4702.21 1.00
D_true[28] -0.26 0.48 -0.28 -1.07 0.49 3698.39 1.00
D_true[29] -1.81 0.24 -1.82 -2.19 -1.40 3468.19 1.00
D_true[30] 0.17 0.44 0.17 -0.50 0.87 4467.52 1.00
D_true[31] -1.66 0.16 -1.66 -1.95 -1.42 3546.08 1.00
D_true[32] 0.12 0.24 0.12 -0.25 0.49 4106.29 1.00
D_true[33] -0.08 0.53 -0.06 -0.87 0.77 2352.92 1.00
D_true[34] -0.12 0.22 -0.12 -0.46 0.23 3803.36 1.00
D_true[35] 1.28 0.43 1.27 0.64 2.01 4563.89 1.00
D_true[36] 0.24 0.36 0.23 -0.31 0.81 4425.27 1.00
D_true[37] -1.02 0.22 -1.02 -1.35 -0.67 4596.58 1.00
D_true[38] -0.93 0.55 -0.94 -1.78 -0.04 3146.40 1.00
D_true[39] -0.68 0.33 -0.68 -1.18 -0.13 4441.57 1.00
D_true[40] 0.24 0.56 0.24 -0.61 1.17 3139.58 1.00
D_true[41] 0.75 0.34 0.75 0.20 1.28 2849.56 1.00
D_true[42] 0.19 0.18 0.19 -0.10 0.46 3765.82 1.00
D_true[43] 0.79 0.42 0.80 0.13 1.46 2594.25 1.00
D_true[44] -0.40 0.51 -0.41 -1.22 0.40 3325.20 1.00
D_true[45] -0.39 0.25 -0.39 -0.78 0.00 3877.89 1.00
D_true[46] 0.15 0.30 0.16 -0.33 0.62 4323.28 1.00
D_true[47] 0.56 0.46 0.56 -0.22 1.24 3581.97 1.00
D_true[48] -0.64 0.27 -0.64 -1.06 -0.22 3759.21 1.00
D_true[49] 0.82 0.62 0.82 -0.14 1.79 3003.50 1.00
a -0.05 0.10 -0.06 -0.21 0.10 2791.80 1.00
bA -0.61 0.16 -0.62 -0.87 -0.37 2064.26 1.00
bM 0.04 0.17 0.04 -0.20 0.33 1884.70 1.00
sigma 0.60 0.11 0.59 0.42 0.76 810.11 1.00
Number of divergences: 0
###Markdown
Code 15.5
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M_obs=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
M_sd=d["Marriage SE"].values / d.Marriage.std(),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M_sd, M_obs, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
M_est = numpyro.sample("M_est", dist.Normal(0, 1).expand([N]))
numpyro.sample("M_obs", dist.Normal(M_est, M_sd), obs=M_obs)
mu = a + bA * A + bM * M_est
D_est = numpyro.sample("D_est", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_est, D_sd), obs=D_obs)
m15_2 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_2.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post = m15_2.get_samples()
D_est = jnp.mean(post["D_est"], 0)
M_est = jnp.mean(post["M_est"], 0)
plt.plot(dlist["M_obs"], dlist["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dlist["M_obs"][i], M_est[i]], [dlist["D_obs"][i], D_est[i]], "k-", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.7
###Code
N = 500
A = dist.Normal().sample(random.PRNGKey(0), (N,))
M = dist.Normal(-A).sample(random.PRNGKey(1))
D = dist.Normal(A).sample(random.PRNGKey(2))
A_obs = dist.Normal(A).sample(random.PRNGKey(3))
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, expit(S)).sample(random.PRNGKey(1))
###Output
_____no_output_____
###Markdown
Code 15.9
###Code
D = dist.Bernoulli(0.5).sample(random.PRNGKey(2), (N,)) # dogs completely random
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.10
###Code
D = jnp.where(S > 0, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.11
###Code
with numpyro.handlers.seed(rng_seed=501):
N = 1000
X = numpyro.sample("X", dist.Normal().expand([N]))
S = numpyro.sample("S", dist.Normal().expand([N]))
H = numpyro.sample("H", dist.Binomial(10, logits=2 + S - 2 * X))
D = jnp.where(X > 1, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
dat_list = dict(H=H, S=S)
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_3 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_3.run(random.PRNGKey(0), **dat_list)
m15_3.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.32 0.03 1.32 1.28 1.36 1473.87 1.00
bS 0.62 0.03 0.62 0.58 0.66 1294.44 1.00
Number of divergences: 0
###Markdown
Code 15.13
###Code
dat_list0 = dict(H=H[D == 0], S=S[D == 0])
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_4 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_4.run(random.PRNGKey(0), **dat_list0)
m15_4.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.92 0.03 1.92 1.86 1.97 1027.57 1.01
bS 0.72 0.03 0.72 0.67 0.78 1039.30 1.00
Number of divergences: 0
###Markdown
Code 15.14
###Code
D = jnp.where(jnp.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, logits=S).sample(random.PRNGKey(1))
D = jnp.where(H < 5, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.16
###Code
milk = pd.read_csv("../data/milk.csv", sep=";")
d = milk
d["neocortex.prop"] = d["neocortex.perc"] / 100
d["logmass"] = d.mass.apply(math.log)
###Output
_____no_output_____
###Markdown
Code 15.17
###Code
dat_list = dict(
K=d["kcal.per.g"].pipe(lambda x: (x - x.mean()) / x.std()).values,
B=d["neocortex.prop"].pipe(lambda x: (x - x.mean()) / x.std()).values,
M=d.logmass.pipe(lambda x: (x - x.mean()) / x.std()).values,
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_5 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_5.run(random.PRNGKey(0), **dat_list)
###Output
_____no_output_____
###Markdown
Code 15.18
###Code
m15_5.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
B_impute[0] -0.59 0.93 -0.61 -2.06 0.86 1655.83 1.00
B_impute[1] -0.72 0.94 -0.73 -2.31 0.72 1612.19 1.00
B_impute[2] -0.74 0.93 -0.76 -2.18 0.83 1767.06 1.00
B_impute[3] -0.31 0.89 -0.33 -1.68 1.14 2108.70 1.00
B_impute[4] 0.45 0.90 0.44 -0.88 1.91 2078.89 1.00
B_impute[5] -0.18 0.93 -0.19 -1.67 1.21 2398.70 1.00
B_impute[6] 0.18 0.89 0.19 -1.24 1.60 2515.01 1.00
B_impute[7] 0.30 0.87 0.30 -1.03 1.69 2381.34 1.00
B_impute[8] 0.51 0.89 0.53 -0.94 1.82 2092.03 1.00
B_impute[9] -0.44 0.92 -0.44 -1.94 0.94 1808.37 1.00
B_impute[10] -0.26 0.89 -0.28 -1.62 1.19 2006.01 1.00
B_impute[11] 0.16 0.89 0.18 -1.32 1.53 2103.16 1.00
a 0.03 0.17 0.04 -0.26 0.27 2079.00 1.00
bB 0.50 0.24 0.50 0.09 0.85 687.56 1.00
bM -0.55 0.20 -0.55 -0.84 -0.20 871.45 1.00
nu -0.05 0.21 -0.05 -0.37 0.29 1669.64 1.00
sigma 0.84 0.14 0.83 0.62 1.04 930.30 1.00
sigma_B 1.01 0.17 0.99 0.76 1.27 1055.17 1.00
Number of divergences: 0
###Markdown
Code 15.19
###Code
obs_idx = d["neocortex.prop"].notnull().values
dat_list_obs = dict(
K=dat_list["K"][obs_idx], B=dat_list["B"][obs_idx], M=dat_list["M"][obs_idx]
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_6 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_6.run(random.PRNGKey(0), **dat_list_obs)
m15_6.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
a 0.10 0.19 0.10 -0.19 0.43 2099.18 1.00
bB 0.61 0.28 0.62 0.15 1.02 1260.84 1.00
bM -0.64 0.25 -0.65 -1.05 -0.27 1227.34 1.00
nu -0.01 0.23 0.00 -0.35 0.37 2052.60 1.00
sigma 0.87 0.19 0.83 0.60 1.13 1141.14 1.00
sigma_B 1.05 0.19 1.03 0.74 1.31 1891.93 1.00
Number of divergences: 0
###Markdown
Code 15.20
###Code
az.plot_forest(
[az.from_numpyro(m15_5), az.from_numpyro(m15_6)],
model_names=["m15.5", "m15.6"],
var_names=["bB", "bM"],
combined=True,
hdi_prob=0.89,
)
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.21
###Code
post = m15_5.get_samples()
B_impute_mu = jnp.mean(post["B_impute"], 0)
B_impute_ci = jnp.percentile(post["B_impute"], q=jnp.array([5.5, 94.5]), axis=0)
# B vs K
plt.plot(dat_list["B"], dat_list["K"], "o")
plt.gca().set(xlabel="neocortex percent (std)", ylabel="kcal mild (std)")
miss_idx = pd.isna(dat_list["B"]).nonzero()[0]
Ki = dat_list["K"][miss_idx]
plt.plot(B_impute_mu, Ki, "ko", mfc="none")
for i in range(12):
plt.plot(B_impute_ci[:, i], jnp.repeat(Ki[i], 2), "k", lw=1)
plt.show()
# M vs B
plt.plot(dat_list["M"], dat_list["B"], "o")
plt.gca().set(xlabel="log body mass (std)", ylabel="neocortex percent (std)")
Mi = dat_list["M"][miss_idx]
plt.plot(Mi, B_impute_mu, "ko", mfc="none")
for i in range(12):
plt.plot(jnp.repeat(Mi[i], 2), B_impute_ci[:, i], "k", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.22
###Code
def model(B, M, K):
# priors
a = numpyro.sample("a", dist.Normal(0, 0.5))
muB = numpyro.sample("muB", dist.Normal(0, 0.5))
muM = numpyro.sample("muM", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
Rho_BM = numpyro.sample("Rho_BM", dist.LKJ(2, 2))
Sigma_BM = numpyro.sample("Sigma_BM", dist.Exponential(1).expand([2]))
# define B_merge as mix of observed and imputed values
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B_merge = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
# M and B correlation
MB = jnp.stack([M, B_merge], axis=1)
cov = jnp.outer(Sigma_BM, Sigma_BM) * Rho_BM
numpyro.sample("MB", dist.MultivariateNormal(jnp.stack([muM, muB]), cov), obs=MB)
# K as function of B and M
mu = a + bB * B_merge + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_7 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_7.run(random.PRNGKey(0), **dat_list)
post = m15_7.get_samples(group_by_chain=True)
print_summary({k: v for k, v in post.items() if k in ["bM", "bB", "Rho_BM"]})
###Output
mean std median 5.0% 95.0% n_eff r_hat
Rho_BM[0,0] 1.00 0.00 1.00 1.00 1.00 nan nan
Rho_BM[0,1] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,0] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,1] 1.00 0.00 1.00 1.00 1.00 1718.78 1.00
bB 0.58 0.26 0.59 0.14 0.98 704.15 1.01
bM -0.64 0.23 -0.64 -1.02 -0.30 942.12 1.00
###Markdown
Code 15.23
###Code
B_missidx = pd.isna(dat_list["B"]).nonzero()[0]
###Output
_____no_output_____
###Markdown
Code 15.24
###Code
Moralizing_gods = pd.read_csv("../data/Moralizing_gods.csv", sep=";")
Moralizing_gods
###Output
_____no_output_____
###Markdown
Code 15.25
###Code
Moralizing_gods.moralizing_gods.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Code 15.26
###Code
symbol = Moralizing_gods.moralizing_gods.apply(lambda x: "." if x == 1 else "o")
symbol[Moralizing_gods.moralizing_gods.isna()] = "x"
color = Moralizing_gods.moralizing_gods.apply(lambda x: "k" if pd.isna(x) else "b")
for pch in ["o", ".", "x"]:
plt.scatter(
Moralizing_gods.year[symbol == pch],
Moralizing_gods.population[symbol == pch],
marker=pch,
color=color[symbol == pch],
facecolor="none" if pch == "o" else None,
lw=1.5,
alpha=0.7,
)
plt.gca().set(xlabel="Time (year)", ylabel="Population size")
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.27
###Code
dmg = Moralizing_gods
dmg.astype(str).groupby(["moralizing_gods", "writing"]).size().unstack(fill_value=0)
###Output
_____no_output_____
###Markdown
Code 15.28
###Code
dmg = Moralizing_gods
haw = dmg.polity == "Big Island Hawaii"
dmg.loc[haw, ["year", "population", "writing", "moralizing_gods"]].T.round(3)
###Output
_____no_output_____
###Markdown
Code 15.29
###Code
with numpyro.handlers.seed(rng_seed=9):
N_houses = 100
alpha = 5
beta = -3
k = 0.5
r = 0.2
cat = numpyro.sample("cat", dist.Bernoulli(k).expand([N_houses]))
notes = numpyro.sample("notes", dist.Poisson(alpha + beta * cat))
R_C = numpyro.sample("R_C", dist.Bernoulli(r).expand([N_houses]))
cat_obs = jnp.where(R_C == 1, -9, cat)
###Output
_____no_output_____
###Markdown
Code 15.30
###Code
dat = dict(notes=notes, cat=cat_obs.copy(), RC=R_C.copy(), N=N_houses - 1)
def model(N, RC, cat, notes):
# priors
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
# cat NA:
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
# cat known present/absent:
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
m15_8 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_8.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.31
###Code
def model(N, RC, cat, notes, link=False):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
if link:
lpC0 = numpyro.deterministic(
"lpC0", jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes)
)
lpC1 = numpyro.deterministic(
"lpC1", jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes)
)
numpyro.deterministic("PrC1", jnp.exp(lpC1) / (jnp.exp(lpC1) + jnp.exp(lpC0)))
m15_9 = MCMC(NUTS(model), num_warmup=500, num_samples=500, num_chains=4)
m15_9.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.32
###Code
with numpyro.handlers.seed(rng_seed=100):
x = numpyro.sample("x", dist.Normal().expand([10]))
y = numpyro.sample("y", dist.Normal(x))
x = jnp.concatenate([x, jnp.array([jnp.nan])])
y = jnp.concatenate([y, jnp.array([100])])
d = dict(x=x, y=y)
###Output
_____no_output_____
###Markdown
Code 15.33
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
cc = d.dropna(subset=["brain", "body"]).index
B = d.brain[cc]
M = d.body[cc]
B = B.values / max(B)
M = M.values / max(M)
###Output
_____no_output_____
###Markdown
Code 15.34
###Code
Bse = B * 0.1
Mse = M * 0.1
###Output
_____no_output_____
###Markdown
Code 15.35
###Code
dat_list = dict(B=B, M=M)
def model(M, B):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + b * jnp.log(M)
numpyro.sample("B", dist.LogNormal(mu, sigma), obs=B)
m15H4 = MCMC(NUTS(model), num_warmup=500, num_samples=500)
m15H4.run(random.PRNGKey(0), **dat_list)
###Output
sample: 100%|██████████| 1000/1000 [00:06<00:00, 158.50it/s, 7 steps of size 2.37e-01. acc. prob=0.95]
###Markdown
Code 15.36
###Code
start = dict(M_true=dat_list["M"], B_true=dat_list["B"])
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Code 15.37
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
d.isna().sum()
###Output
_____no_output_____
###Markdown
Code 15.38
###Code
cc = d.dropna(subset=["body"]).index
M = d.body[cc]
M = M.values / max(M)
B = d.brain[cc]
B = B.values / B.max(skipna=True)
###Output
_____no_output_____
###Markdown
Code 15.39
###Code
start = dict(B_impute=jnp.repeat(0.5, 56))
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ksachdeva/rethinking-tensorflow-probability/blob/master/notebooks/15_missing_data_and_other_opportunities.ipynb) Chapter 15 - Missing Data and Other Opportunities Imports and utility functions
###Code
# Install packages that are not installed in colab
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
%tensorflow_version 2.X
print("Installing watermark & arviz ...")
!pip install -q watermark
!pip install -q arviz
%load_ext watermark
# Core
import collections
import numpy as np
import arviz as az
import pandas as pd
import xarray as xr
import tensorflow as tf
import tensorflow_probability as tfp
# visualization
import matplotlib.pyplot as plt
# aliases
tfd = tfp.distributions
tfb = tfp.bijectors
Root = tfd.JointDistributionCoroutine.Root
%watermark -p numpy,tensorflow,tensorflow_probability,arviz,scipy,pandas
# config of various plotting libraries
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
###Output
_____no_output_____
###Markdown
Tensorflow MCMC Sampling helpers
###Code
USE_XLA = False #@param
NUMBER_OF_CHAINS = 2 #@param
NUMBER_OF_BURNIN = 500 #@param
NUMBER_OF_SAMPLES = 500 #@param
NUMBER_OF_LEAPFROG_STEPS = 4 #@param
def _trace_to_arviz(trace=None,
sample_stats=None,
observed_data=None,
prior_predictive=None,
posterior_predictive=None,
inplace=True):
if trace is not None and isinstance(trace, dict):
trace = {k: v.numpy()
for k, v in trace.items()}
if sample_stats is not None and isinstance(sample_stats, dict):
sample_stats = {k: v.numpy().T for k, v in sample_stats.items()}
if prior_predictive is not None and isinstance(prior_predictive, dict):
prior_predictive = {k: v[np.newaxis]
for k, v in prior_predictive.items()}
if posterior_predictive is not None and isinstance(posterior_predictive, dict):
if isinstance(trace, az.InferenceData) and inplace == True:
return trace + az.from_dict(posterior_predictive=posterior_predictive)
else:
trace = None
return az.from_dict(
posterior=trace,
sample_stats=sample_stats,
prior_predictive=prior_predictive,
posterior_predictive=posterior_predictive,
observed_data=observed_data,
)
@tf.function(autograph=False, experimental_compile=USE_XLA)
def run_hmc_chain(init_state,
bijectors,
step_size,
target_log_prob_fn,
num_leapfrog_steps=NUMBER_OF_LEAPFROG_STEPS,
num_samples=NUMBER_OF_SAMPLES,
burnin=NUMBER_OF_BURNIN,
):
def _trace_fn_transitioned(_, pkr):
return (
pkr.inner_results.inner_results.log_accept_ratio
)
hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn,
num_leapfrog_steps=num_leapfrog_steps,
step_size=step_size)
inner_kernel = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=hmc_kernel,
bijector=bijectors)
kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=inner_kernel,
target_accept_prob=.8,
num_adaptation_steps=int(0.8*burnin),
log_accept_prob_getter_fn=lambda pkr: pkr.inner_results.log_accept_ratio
)
results, sampler_stat = tfp.mcmc.sample_chain(
num_results=num_samples,
num_burnin_steps=burnin,
current_state=init_state,
kernel=kernel,
trace_fn=_trace_fn_transitioned)
return results, sampler_stat
def sample_posterior(jdc,
observed_data,
params,
init_state=None,
bijectors=None,
step_size = 0.1,
num_chains=NUMBER_OF_CHAINS,
num_samples=NUMBER_OF_SAMPLES,
burnin=NUMBER_OF_BURNIN):
if init_state is None:
init_state = list(jdc.sample(num_chains)[:-1])
if bijectors is None:
bijectors = [tfb.Identity() for i in init_state]
target_log_prob_fn = lambda *x: jdc.log_prob(x + observed_data)
results, sample_stats = run_hmc_chain(init_state,
bijectors,
step_size=step_size,
target_log_prob_fn=target_log_prob_fn,
num_samples=num_samples,
burnin=burnin)
stat_names = ['mean_tree_accept']
sampler_stats = dict(zip(stat_names, [sample_stats]))
transposed_results = []
for r in results:
if len(r.shape) == 2:
transposed_shape = [1,0]
elif len(r.shape) == 3:
transposed_shape = [1,0,2]
else:
transposed_shape = [1,0,2,3]
transposed_results.append(tf.transpose(r, transposed_shape))
posterior = dict(zip(params, transposed_results))
az_trace = _trace_to_arviz(trace=posterior,
sample_stats=sampler_stats)
return posterior, az_trace
###Output
_____no_output_____
###Markdown
Dataset URLs & Utils
###Code
# You could change the base url to a local dir or to remote raw GitHub content
_BASE_URL = "https://raw.githubusercontent.com/rmcelreath/rethinking/master/data"
WAFFLE_DIVORCE_DATASET_PATH = f"{_BASE_URL}/WaffleDivorce.csv"
# A utility method to convert data (columns) from pandas dataframe
# into tensors with appropriate type
def df_to_tensors(name, df, columns, default_type=tf.float32):
""" name : Name of the dataset
df : pandas dataframe
columns : a list of column names that share the same type, or
a dictionary where keys are the column names and values are the tensorflow type (e.g. tf.float32)
"""
if isinstance(columns,dict):
column_names = columns.keys()
fields = [tf.cast(df[k].values, dtype=v) for k,v in columns.items()]
else:
column_names = columns
fields = [tf.cast(df[k].values, dtype=default_type) for k in column_names]
# build the cls
tuple_cls = collections.namedtuple(name, column_names)
# build the obj
return tuple_cls._make(fields)
###Output
_____no_output_____
###Markdown
Introduction Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake():
pancake = tfd.Categorical(logits=np.ones(3)).sample().numpy()
sides = np.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
np.random.shuffle(sides)
return sides
# sim 10,000 pancakes
pancakes = []
for i in range(10_000):
pancakes.append(sim_pancake())
pancakes = np.array(pancakes).T
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = np.sum(up == 1)
num_11 = np.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
15.1 Measurement error Code 15.2 In the waffle dataset, both the divorce rate and the marriage rate are measured with substantial error, and that error is reported in the form of standard errors. The error also varies across states. Below we plot these measurement errors.
###Code
d = pd.read_csv(WAFFLE_DIVORCE_DATASET_PATH, sep=";")
# points
ax = az.plot_pair(d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
scatter_kwargs=dict(ms=15, mfc="none"))
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + np.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
In the plot above, the length of each vertical line shows how uncertain the observed divorce rate is; the short sketch below takes a quick look at the states with the noisiest measurements.
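###Markdown
A hypothetical aside, not in the original notebook: it assumes the WaffleDivorce csv exposes a Location column (as in the rethinking package) next to the Divorce and Divorce SE columns already used above, and simply ranks the states by their reported measurement error.
###Code
# Hypothetical: rank states by the reported standard error of their divorce rate.
# Less populous states should tend to carry the largest standard errors.
d.sort_values("Divorce SE", ascending=False)[["Location", "Divorce", "Divorce SE"]].head()
###Output
_____no_output_____
###Markdown
15.1.1 Error on the outcome Code 15.3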
###Code
d["D_obs"] = d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values
d["D_sd"] = d["Divorce SE"].values / d.Divorce.std()
d["M"] = d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values
d["A"] = d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values
N = d.shape[0]
tdf = df_to_tensors("Waffle", d, ["D_obs", "D_sd", "M", "A"])
def model_15_1(A, M, D_sd, N):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.2, name="alpha"), sample_shape=1))
betaA = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaA"), sample_shape=1))
betaM = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaM"), sample_shape=1))
sigma = yield Root(tfd.Sample(tfd.Exponential(rate=1., name="sigma"), sample_shape=1))
mu = alpha[...,tf.newaxis] + betaA[...,tf.newaxis] * A + betaM[...,tf.newaxis] * M
scale = sigma[...,tf.newaxis]
D_true = yield tfd.Independent(tfd.Normal(loc=mu, scale=scale), reinterpreted_batch_ndims=1)
D_obs = yield tfd.Independent(tfd.Normal(loc=D_true, scale=D_sd), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_1 = model_15_1(tdf.A, tdf.M, tdf.D_sd, N)
NUM_CHAINS_FOR_15_1 = 2
init_state = [
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.ones([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1, N]),
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
tfb.Identity(),
tfb.Exp(),
tfb.Identity()
]
posterior_15_1, trace_15_1 = sample_posterior(jdc_15_1,
observed_data=(tdf.D_obs,),
params=['alpha', 'betaA', 'betaM', 'sigma', 'D_true'],
init_state=init_state,
bijectors=bijectors)
###Output
WARNING:tensorflow:From /Users/ksachdeva/Desktop/Dev/myoss/rethinking-tfp-interim/env/lib/python3.6/site-packages/tensorflow_probability/python/mcmc/kernel.py:104: calling HamiltonianMonteCarlo.__init__ (from tensorflow_probability.python.mcmc.hmc) with step_size_update_fn is deprecated and will be removed after 2019-05-22.
Instructions for updating:
The `step_size_update_fn` argument is deprecated. Use `tfp.mcmc.SimpleStepSizeAdaptation` instead.
WARNING:tensorflow:From /Users/ksachdeva/Desktop/Dev/myoss/rethinking-tfp-interim/env/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:507: calling HamiltonianMonteCarlo.__init__ (from tensorflow_probability.python.mcmc.hmc) with seed is deprecated and will be removed after 2020-09-20.
Instructions for updating:
The `seed` argument is deprecated (but will work until removed). Pass seed to `tfp.mcmc.sample_chain` instead.
###Markdown
Code 15.4
###Code
az.summary(trace_15_1, round_to=2, kind='all', hdi_prob=0.89)
###Output
_____no_output_____
###Markdown
Code 15.5 What happens when there is a measurement error on predictor variables as well?
###Code
d["D_obs"] = d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values
d["D_sd"] = d["Divorce SE"].values / d.Divorce.std()
d["M_obs"] = d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values
d["M_sd"] = d["Marriage SE"].values / d.Marriage.std()
d["A"] = d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values
N = d.shape[0]
tdf = df_to_tensors("Waffle", d, ["D_obs", "D_sd", "M_obs", "M_sd", "A"])
def model_15_2(A, M_sd, D_sd, N):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.2, name="alpha"), sample_shape=1))
betaA = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaA"), sample_shape=1))
betaM = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaM"), sample_shape=1))
sigma = yield Root(tfd.Sample(tfd.Exponential(rate=1., name="sigma"), sample_shape=1))
M_true = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="M_true"), sample_shape=N))
mu = alpha[...,tf.newaxis] + betaA[...,tf.newaxis] * A + betaM[...,tf.newaxis] * M_true
scale = sigma[...,tf.newaxis]
D_true = yield tfd.Independent(tfd.Normal(loc=mu, scale=scale), reinterpreted_batch_ndims=1)
D_obs = yield tfd.Independent(tfd.Normal(loc=D_true, scale=D_sd), reinterpreted_batch_ndims=1)
M_obs = yield tfd.Independent(tfd.Normal(loc=M_true, scale=M_sd, name="M_obs"), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_2 = model_15_2(tdf.A, tdf.M_sd, tdf.D_sd, N)
NUM_CHAINS_FOR_15_2 = 2
init_state = [
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.ones([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2, N]), # M_True
tf.zeros([NUM_CHAINS_FOR_15_2, N]), # D_True
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
tfb.Identity(),
tfb.Exp(),
tfb.Identity(),
tfb.Identity()
]
posterior_15_2, trace_15_2 = sample_posterior(jdc_15_2,
observed_data=(tdf.D_obs, tdf.M_obs),
params=['alpha', 'betaA', 'betaM', 'sigma', 'M_true', 'D_true'],
init_state=init_state,
bijectors=bijectors)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post_D_true = trace_15_2.posterior["D_true"].values[0]
post_M_true = trace_15_2.posterior["M_true"].values[0]
D_est = np.mean(post_D_true, 0)
M_est = np.mean(post_M_true, 0)
plt.plot(d["M_obs"], d["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([d["M_obs"][i], M_est[i]], [d["D_obs"][i], D_est[i]],
"k-", lw=1)
###Output
_____no_output_____
###Markdown
The figure above demonstrates shrinkage of both the divorce rate and the marriage rate. Solid points are the observed values, open points are posterior means, and lines connect the pair of points for the same state. Both variables are shrunk towards the inferred regression relationship. With measurement error, the key insight is that any uncertain piece of data can be replaced by a distribution that reflects that uncertainty; the short sketch below quantifies the shrinkage before moving on.
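###Markdown
A minimal sketch, not part of the original notebook: it reuses d, M_est and D_est from the cells above and measures how far each observed point was pulled toward its posterior mean, so states with larger standardized standard errors should tend to show larger shrinkage distances (the helper names shrink_distance and shrink_df are ad hoc).
###Code
# Hypothetical check of the shrinkage in the figure above: Euclidean distance between
# each observed (M_obs, D_obs) point and its posterior mean (M_est, D_est).
shrink_distance = np.sqrt((d["M_obs"].values - M_est) ** 2 + (d["D_obs"].values - D_est) ** 2)
shrink_df = pd.DataFrame(
    {"D_sd": d["D_sd"].values, "M_sd": d["M_sd"].values, "shrink_distance": shrink_distance}
)
# States measured with more error should, on average, be shrunk further.
shrink_df.sort_values("shrink_distance", ascending=False).head()
###Output
_____no_output_____
###Markdown
Code 15.7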
###Code
# Simulated toy data
N = 500
A = tfd.Normal(loc=0., scale=1.0).sample((N,))
M = tfd.Normal(loc=-A, scale=1.0).sample()
D = tfd.Normal(loc=A, scale=1.0).sample()
A_obs = tfd.Normal(loc=A, scale=1.).sample()
###Output
_____no_output_____
###Markdown
15.2 Missing data 15.2.1 DAG ate my homework Code 15.8
###Code
N = 100
S = tfd.Normal(loc=0., scale=1.).sample((N,))
H = tfd.Binomial(total_count=10, probs=tf.sigmoid(S)).sample()
###Output
_____no_output_____
###Markdown
Code 15.9 Hm = homework missing. The dog's decision whether or not to eat a piece of homework is not influenced by any relevant variable.
###Code
D = tfd.Bernoulli(probs=0.5).sample((N,)).numpy()  # dogs completely random: one draw per student with p=0.5
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
###Markdown
Since the missing values are random, missingness does not necessarily change the overall distribution of homework scores, as the quick check below suggests.
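###Markdown
A small hypothetical check, not in the original notebook: under completely random missingness the observed scores are just a subsample of all scores, so their mean should sit close to the full-sample mean and roughly half of the scores should be missing (H_all is an ad hoc helper name).
###Code
# Hypothetical check of the completely-random-missingness case above.
H_all = np.asarray(H, dtype=np.float64)  # full set of simulated homework scores
print("mean of all homework scores:", H_all.mean())
print("mean of non-missing scores :", np.nanmean(Hm))
print("fraction of missing scores :", np.isnan(Hm).mean())
###Output
_____no_output_____
###Markdown
Code 15.10 Here studying influences whether a dog eats the homework (S -> D): students who study a lot do not play with their dogs, and the dogs take revenge by eating the homework.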
###Code
D = np.where(S > 0, 1, 0)
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
###Markdown
Now every student who studies more than average (0) is missing their homework; the check below shows how this biases the observed scores.
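###Markdown
The same hypothetical check as before, not in the original notebook: because the missing students are exactly the high-S (and therefore high-scoring) ones, the mean of the observed scores should now fall clearly below the full-sample mean.
###Code
# Hypothetical check: missingness now depends on S, which also drives H,
# so the non-missing scores are no longer representative of the full set.
print("mean of all homework scores:", np.asarray(H, dtype=np.float64).mean())
print("mean of non-missing scores :", np.nanmean(Hm))
###Output
_____no_output_____
###Markdown
Code 15.11 The case of a noisy home and its influence on both homework and the dog's behavior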
###Code
# TODO - use seed; have not been able to make it work with tfp
N = 1000
X = tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=(N,)).sample().numpy()
S = tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=(N,)).sample().numpy()
logits = 2 + S - 2 * X
H = tfd.Binomial(total_count=10, logits=logits).sample().numpy()
D = np.where(X > 1, 1, 0)
Hm = np.where(D == 1, np.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
tdf = df_to_tensors("SimulatedHomeWork", pd.DataFrame.from_dict(dict(H=H,S=S)), ["H", "S"])
def model_15_3(S):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="alpha"), sample_shape=1))
betaS = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaS"), sample_shape=1))
logits = tf.squeeze(alpha[...,tf.newaxis] + betaS[...,tf.newaxis] * S)
H = yield tfd.Independent(tfd.Binomial(total_count=10, logits=logits), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_3 = model_15_3(tdf.S)
NUM_CHAINS_FOR_15_3 = 4
alpha_init, betaS_init, _ = jdc_15_3.sample()
init_state = [
tf.tile(alpha_init, (NUM_CHAINS_FOR_15_3,)),
tf.tile(betaS_init, (NUM_CHAINS_FOR_15_3,))
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
]
posterior_15_3, trace_15_3 = sample_posterior(jdc_15_3,
observed_data=(tdf.H,),
params=['alpha', 'betaS'],
init_state=init_state,
bijectors=bijectors)
az.summary(trace_15_3, round_to=2, kind='all', hdi_prob=0.89)
###Output
_____no_output_____
###Markdown
The true coefficient on S should be 1.00. We don't expect to recover it exactly, but the estimate above is way off; the comparison below makes the gap explicit.
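###Markdown
A hypothetical one-liner, not in the original notebook: pull the posterior draws of betaS out of trace_15_3 and compare their mean with the value 1.0 used in the simulation of Code 15.11 (betaS_draws is an ad hoc helper name).
###Code
# Hypothetical comparison of the estimate with the simulated truth (the coefficient on S was 1.0).
betaS_draws = trace_15_3.posterior["betaS"].values
print("posterior mean of betaS:", float(betaS_draws.mean()), "(simulated value: 1.0)")
###Output
_____no_output_____
###Markdown
Code 15.13 We now refit the model using only the homework scores that were not eaten (D == 0).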
###Code
tdf = df_to_tensors("SimulatedHomeWork",
pd.DataFrame.from_dict(dict(H=H[D==0],S=S[D==0])), ["H", "S"])
def model_15_4(S):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="alpha"), sample_shape=1))
betaS = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaS"), sample_shape=1))
logits = tf.squeeze(alpha[...,tf.newaxis] + betaS[...,tf.newaxis] * S)
H = yield tfd.Independent(tfd.Binomial(total_count=10, logits=logits), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_4 = model_15_4(tdf.S)
NUM_CHAINS_FOR_15_4 = 2
alpha_init, betaS_init, _ = jdc_15_4.sample()
init_state = [
tf.tile(alpha_init, (NUM_CHAINS_FOR_15_4,)),
tf.tile(betaS_init, (NUM_CHAINS_FOR_15_4,))
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
]
posterior_15_4, trace_15_4 = sample_posterior(jdc_15_4,
observed_data=(tdf.H,),
params=['alpha', 'betaS'],
init_state=init_state,
bijectors=bijectors)
az.summary(trace_15_4, round_to=2, kind='all', hdi_prob=0.89)
###Output
_____no_output_____
###Markdown
Code 15.14
###Code
D = np.where(np.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = tfd.Normal(loc=0., scale=1.).sample((N,))
H = tfd.Binomial(total_count=10, logits=S).sample().numpy()
D = np.where(H < 5, 1, 0)
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
###Markdown
Chapter 15. Missing Data and Other Opportunities
###Code
import arviz as az
import matplotlib.pyplot as plt
import numpy as onp
import pandas as pd
from jax import ops, vmap
import jax.numpy as np
from jax.random import PRNGKey, shuffle
from jax.scipy.special import expit
import numpyro
from numpyro.diagnostics import print_summary
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
%config InlineBackend.figure_formats = ["svg"]
az.style.use("arviz-darkgrid")
numpyro.set_host_device_count(4)
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake(seed):
pancake = dist.Categorical(logits=np.ones(3)).sample(PRNGKey(2 * seed))
sides = np.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
return shuffle(PRNGKey(2 * seed + 1), sides)
# sim 10,000 pancakes
pancakes = vmap(sim_pancake, out_axes=1)(np.arange(10000))
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = np.sum(up == 1)
num_11 = np.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2
###Code
WaffleDivorce = pd.read_csv("../data/WaffleDivorce.csv", sep=";")
d = WaffleDivorce
# points
ax = az.plot_pair(d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
plot_kwargs=dict(ms=15, mfc="none"))
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + np.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
Code 15.3
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0])
def model(A, M, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + bA * A + bM * M
D_true = numpyro.sample("D_true", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_true, D_sd), obs=D_obs)
m15_1 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_1.run(PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
m15_1.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
D_true[0] 1.16 0.37 1.16 0.61 1.76 2417.37 1.00
D_true[1] 0.70 0.55 0.68 -0.21 1.56 2984.25 1.00
D_true[2] 0.43 0.34 0.43 -0.08 0.95 2992.93 1.00
D_true[3] 1.41 0.48 1.42 0.69 2.18 2892.45 1.00
D_true[4] -0.90 0.13 -0.90 -1.11 -0.70 3769.52 1.00
D_true[5] 0.67 0.40 0.66 0.03 1.29 4076.81 1.00
D_true[6] -1.37 0.36 -1.37 -1.94 -0.80 4089.29 1.00
D_true[7] -0.34 0.49 -0.34 -1.12 0.43 3567.06 1.00
D_true[8] -1.88 0.59 -1.88 -2.76 -0.86 2270.06 1.00
D_true[9] -0.62 0.17 -0.62 -0.90 -0.37 3937.60 1.00
D_true[10] 0.76 0.28 0.76 0.31 1.20 3041.16 1.00
D_true[11] -0.55 0.51 -0.53 -1.38 0.24 2622.84 1.00
D_true[12] 0.17 0.49 0.15 -0.61 0.93 1433.86 1.00
D_true[13] -0.86 0.23 -0.86 -1.23 -0.51 3417.41 1.00
D_true[14] 0.56 0.30 0.56 0.07 1.04 3675.07 1.00
D_true[15] 0.29 0.38 0.29 -0.33 0.89 4028.30 1.00
D_true[16] 0.49 0.42 0.48 -0.17 1.16 3463.14 1.00
D_true[17] 1.25 0.35 1.25 0.72 1.83 2800.35 1.00
D_true[18] 0.43 0.39 0.42 -0.21 1.02 2932.95 1.00
D_true[19] 0.41 0.53 0.41 -0.51 1.18 1678.05 1.00
D_true[20] -0.56 0.31 -0.57 -1.06 -0.06 3315.34 1.00
D_true[21] -1.10 0.26 -1.10 -1.51 -0.68 2935.22 1.00
D_true[22] -0.27 0.27 -0.26 -0.70 0.16 3214.68 1.00
D_true[23] -1.00 0.30 -1.00 -1.45 -0.50 3129.76 1.00
D_true[24] 0.42 0.41 0.42 -0.18 1.11 3345.13 1.00
D_true[25] -0.04 0.30 -0.03 -0.51 0.41 3217.96 1.00
D_true[26] -0.02 0.49 -0.04 -0.79 0.77 3473.62 1.00
D_true[27] -0.16 0.39 -0.15 -0.81 0.42 3649.30 1.00
D_true[28] -0.26 0.52 -0.26 -1.11 0.54 3556.10 1.00
D_true[29] -1.80 0.24 -1.80 -2.17 -1.43 2961.28 1.00
D_true[30] 0.18 0.42 0.17 -0.52 0.83 3703.32 1.00
D_true[31] -1.66 0.17 -1.66 -1.93 -1.40 3376.91 1.00
D_true[32] 0.12 0.25 0.11 -0.28 0.52 3674.55 1.00
D_true[33] -0.04 0.51 -0.04 -0.88 0.73 2283.38 1.00
D_true[34] -0.13 0.23 -0.13 -0.51 0.22 3823.52 1.00
D_true[35] 1.28 0.43 1.27 0.57 1.94 4052.14 1.00
D_true[36] 0.23 0.34 0.23 -0.33 0.76 3819.33 1.00
D_true[37] -1.02 0.21 -1.02 -1.39 -0.71 3187.56 1.00
D_true[38] -0.93 0.52 -0.95 -1.80 -0.14 2337.48 1.00
D_true[39] -0.68 0.33 -0.68 -1.22 -0.19 3912.39 1.00
D_true[40] 0.25 0.56 0.25 -0.66 1.11 2340.90 1.00
D_true[41] 0.73 0.34 0.72 0.20 1.28 3114.26 1.00
D_true[42] 0.19 0.17 0.19 -0.07 0.48 3582.58 1.00
D_true[43] 0.79 0.42 0.81 0.08 1.41 2250.96 1.00
D_true[44] -0.42 0.53 -0.42 -1.29 0.34 3457.95 1.00
D_true[45] -0.39 0.26 -0.39 -0.82 0.02 3644.34 1.00
D_true[46] 0.13 0.30 0.13 -0.39 0.55 3832.73 1.00
D_true[47] 0.57 0.46 0.57 -0.10 1.35 4228.23 1.00
D_true[48] -0.63 0.28 -0.64 -1.06 -0.19 2949.04 1.00
D_true[49] 0.86 0.59 0.87 0.02 1.89 2428.65 1.00
a -0.05 0.10 -0.05 -0.22 0.09 2077.59 1.00
bA -0.61 0.16 -0.61 -0.89 -0.37 1395.64 1.00
bM 0.06 0.17 0.05 -0.20 0.33 1371.75 1.00
sigma 0.59 0.11 0.58 0.42 0.76 591.47 1.00
Number of divergences: 0
###Markdown
Code 15.5
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M_obs=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
M_sd=d["Marriage SE"].values / d.Marriage.std(),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0])
def model(A, M_sd, M_obs, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
M_est = numpyro.sample("M_est", dist.Normal(0, 1), sample_shape=(N,))
numpyro.sample("M_obs", dist.Normal(M_est, M_sd), obs=M_obs)
mu = a + bA * A + bM * M_est
D_est = numpyro.sample("D_est", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_est, D_sd), obs=D_obs)
m15_2 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_2.run(PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ksachdeva/rethinking-tensorflow-probability/blob/master/notebooks/15_missing_data_and_other_opportunities.ipynb) Chapter 15 - Missing Data and Other Opportunities Imports and utility functions
###Code
# Install packages that are not installed in colab
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
%tensorflow_version 2.X
!pip install watermark
!pip install arviz
USE_NIGHTLY_TFP = True # @param
if IN_COLAB and USE_NIGHTLY_TFP:
!pip install --upgrade tf-nightly
!pip install --upgrade tfp-nightly
%load_ext watermark
# Core
import numpy as np
import arviz as az
import pandas as pd
import xarray as xr
import tensorflow as tf
import tensorflow_probability as tfp
# visualization
import matplotlib.pyplot as plt
# aliases
tfd = tfp.distributions
tfb = tfp.bijectors
Root = tfd.JointDistributionCoroutine.Root
%watermark -p numpy,tensorflow,tensorflow_probability,arviz,scipy,pandas
# config of various plotting libraries
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
if not USE_NIGHTLY_TFP:
assert tf.__version__ >= '2.1.0', "Tensorflow version should be at minimum 2.1.0"
assert tfp.__version__ >= '0.9.0', "TFP version should be at minimum 0.9.0"
###Output
_____no_output_____
###Markdown
Tensorflow MCMC Sampling helpers
###Code
USE_XLA = False #@param
NUMBER_OF_CHAINS = 2 #@param
NUMBER_OF_BURNIN = 500 #@param
NUMBER_OF_SAMPLES = 500 #@param
NUMBER_OF_LEAPFROG_STEPS = 4 #@param
def _trace_to_arviz(trace=None,
sample_stats=None,
observed_data=None,
prior_predictive=None,
posterior_predictive=None,
inplace=True):
if trace is not None and isinstance(trace, dict):
trace = {k: v.numpy()
for k, v in trace.items()}
if sample_stats is not None and isinstance(sample_stats, dict):
sample_stats = {k: v.numpy().T for k, v in sample_stats.items()}
if prior_predictive is not None and isinstance(prior_predictive, dict):
prior_predictive = {k: v[np.newaxis]
for k, v in prior_predictive.items()}
if posterior_predictive is not None and isinstance(posterior_predictive, dict):
if isinstance(trace, az.InferenceData) and inplace == True:
return trace + az.from_dict(posterior_predictive=posterior_predictive)
else:
trace = None
return az.from_dict(
posterior=trace,
sample_stats=sample_stats,
prior_predictive=prior_predictive,
posterior_predictive=posterior_predictive,
observed_data=observed_data,
)
@tf.function(autograph=False, experimental_compile=USE_XLA)
def run_hmc_chain(init_state,
bijectors,
step_size,
target_log_prob_fn,
num_leapfrog_steps=NUMBER_OF_LEAPFROG_STEPS,
num_samples=NUMBER_OF_SAMPLES,
burnin=NUMBER_OF_BURNIN,
):
def _trace_fn_transitioned(_, pkr):
return (
pkr.inner_results.inner_results.log_accept_ratio
)
hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn,
num_leapfrog_steps=num_leapfrog_steps,
step_size=step_size)
inner_kernel = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=hmc_kernel,
bijector=bijectors)
kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=inner_kernel,
target_accept_prob=.8,
num_adaptation_steps=int(0.8*burnin),
log_accept_prob_getter_fn=lambda pkr: pkr.inner_results.log_accept_ratio
)
results, sampler_stat = tfp.mcmc.sample_chain(
num_results=num_samples,
num_burnin_steps=burnin,
current_state=init_state,
kernel=kernel,
trace_fn=_trace_fn_transitioned)
return results, sampler_stat
def sample_posterior(jdc,
observed_data,
params,
init_state=None,
bijectors=None,
step_size = 0.1,
num_chains=NUMBER_OF_CHAINS,
num_samples=NUMBER_OF_SAMPLES,
burnin=NUMBER_OF_BURNIN):
if init_state is None:
init_state = list(jdc.sample(num_chains)[:-1])
if bijectors is None:
bijectors = [tfb.Identity() for i in init_state]
target_log_prob_fn = lambda *x: jdc.log_prob(x + observed_data)
results, sample_stats = run_hmc_chain(init_state,
bijectors,
step_size=step_size,
target_log_prob_fn=target_log_prob_fn,
num_samples=num_samples,
burnin=burnin)
stat_names = ['mean_tree_accept']
sampler_stats = dict(zip(stat_names, [sample_stats]))
transposed_results = []
for r in results:
if len(r.shape) == 2:
transposed_shape = [1,0]
elif len(r.shape) == 3:
transposed_shape = [1,0,2]
else:
transposed_shape = [1,0,2,3]
transposed_results.append(tf.transpose(r, transposed_shape))
posterior = dict(zip(params, transposed_results))
az_trace = _trace_to_arviz(trace=posterior,
sample_stats=sampler_stats)
return posterior, az_trace
###Output
_____no_output_____
###Markdown
Dataset URLs
###Code
# You could change the base url to a local dir or to remote raw GitHub content
_BASE_URL = "https://raw.githubusercontent.com/ksachdeva/rethinking-tensorflow-probability/master/data"
WAFFLE_DIVORCE_DATASET_PATH = f"{_BASE_URL}/WaffleDivorce.csv"
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake():
pancake = tfd.Categorical(logits=np.ones(3)).sample().numpy()
sides = np.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
np.random.shuffle(sides)
return sides
# sim 10,000 pancakes
pancakes = []
for i in range(10_000):
pancakes.append(sim_pancake())
pancakes = np.array(pancakes).T
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = np.sum(up == 1)
num_11 = np.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2 In the waffle dataset, both the divorce rate and the marriage rate are measured with substantial error, and that error is reported in the form of standard errors. The error also varies across states. Below we plot these measurement errors.
###Code
d = pd.read_csv(WAFFLE_DIVORCE_DATASET_PATH, sep=";")
# points
ax = az.plot_pair(d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
plot_kwargs=dict(ms=15, mfc="none"))
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + np.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
In the plot above, the length of each vertical line shows how uncertain the observed divorce rate is. Code 15.3
###Code
dat = dict(
D_obs=tf.cast(d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values, dtype=tf.float32),
D_sd=tf.cast(d["Divorce SE"].values / d.Divorce.std(), dtype=tf.float32),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0])
def model_15_1(A, M, D_sd, N):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.2, name="alpha"), sample_shape=1))
betaA = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaA"), sample_shape=1))
betaM = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaM"), sample_shape=1))
sigma = yield Root(tfd.Sample(tfd.Exponential(rate=1., name="sigma"), sample_shape=1))
mu = alpha[...,tf.newaxis] + betaA[...,tf.newaxis] * A + betaM[...,tf.newaxis] * M
scale = sigma[...,tf.newaxis]
D_true = yield tfd.Independent(tfd.Normal(loc=mu, scale=scale), reinterpreted_batch_ndims=1)
D_obs = yield tfd.Independent(tfd.Normal(loc=D_true, scale=D_sd), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_1 = model_15_1(dat["A"], dat["M"], dat["D_sd"], dat["N"])
NUM_CHAINS_FOR_15_1 = 2
init_state = [
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.ones([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1, dat["N"]]),
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
tfb.Identity(),
tfb.Exp(),
tfb.Identity()
]
posterior_15_1, trace_15_1 = sample_posterior(jdc_15_1,
observed_data=(dat["D_obs"],),
params=['alpha', 'betaA', 'betaM', 'sigma', 'D_true'],
init_state=init_state,
bijectors=bijectors)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
az.summary(trace_15_1, round_to=2, kind='all', credible_interval=0.89)
###Output
_____no_output_____
###Markdown
Code 15.5 What happens when there is a measurement error on predictor variables as well?
###Code
dat = dict(
D_obs=tf.cast(d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values, dtype=tf.float32),
D_sd=tf.cast(d["Divorce SE"].values / d.Divorce.std(), dtype=tf.float32),
M_obs=tf.cast(d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values, dtype=tf.float32),
M_sd=tf.cast(d["Marriage SE"].values / d.Marriage.std(), dtype=tf.float32),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0])
def model_15_2(A, M_sd, D_sd, N):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.2, name="alpha"), sample_shape=1))
betaA = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaA"), sample_shape=1))
betaM = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaM"), sample_shape=1))
sigma = yield Root(tfd.Sample(tfd.Exponential(rate=1., name="sigma"), sample_shape=1))
M_true = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="M_true"), sample_shape=N))
mu = alpha[...,tf.newaxis] + betaA[...,tf.newaxis] * A + betaM[...,tf.newaxis] * M_true
scale = sigma[...,tf.newaxis]
D_true = yield tfd.Independent(tfd.Normal(loc=mu, scale=scale), reinterpreted_batch_ndims=1)
D_obs = yield tfd.Independent(tfd.Normal(loc=D_true, scale=D_sd), reinterpreted_batch_ndims=1)
M_obs = yield tfd.Independent(tfd.Normal(loc=M_true, scale=M_sd, name="M_obs"), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_2 = model_15_2(dat["A"], dat["M_sd"], dat["D_sd"], dat["N"])
NUM_CHAINS_FOR_15_2 = 2
init_state = [
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.ones([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2, dat["N"]]), # M_True
tf.zeros([NUM_CHAINS_FOR_15_2, dat["N"]]), # D_True
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
tfb.Identity(),
tfb.Exp(),
tfb.Identity(),
tfb.Identity()
]
posterior_15_2, trace_15_2 = sample_posterior(jdc_15_2,
observed_data=(dat["D_obs"], dat["M_obs"]),
params=['alpha', 'betaA', 'betaM', 'sigma', 'M_true', 'D_true'],
init_state=init_state,
bijectors=bijectors)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post_D_true = trace_15_2.posterior["D_true"].values[0]
post_M_true = trace_15_2.posterior["M_true"].values[0]
D_est = np.mean(post_D_true, 0)
M_est = np.mean(post_M_true, 0)
plt.plot(dat["M_obs"], dat["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dat["M_obs"][i], M_est[i]], [dat["D_obs"][i], D_est[i]],
"k-", lw=1)
###Output
_____no_output_____
###Markdown
The figure above demonstrates shrinkage of both the divorce rate and the marriage rate. Solid points are the observed values, open points are posterior means, and lines connect the pair of points for the same state. Both variables are shrunk towards the inferred regression relationship. With measurement error, the key insight is that any uncertain piece of data can be replaced by a distribution that reflects that uncertainty. Code 15.7
###Code
# Simulated toy data
N = 500
A = tfd.Normal(loc=0., scale=1.0).sample((N,))
M = tfd.Normal(loc=-A, scale=1.0).sample()
D = tfd.Normal(loc=A, scale=1.0).sample()
A_obs = tfd.Normal(loc=A, scale=1.).sample()
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = tfd.Normal(loc=0., scale=1.).sample((N,))
H = tfd.Binomial(total_count=10, probs=tf.sigmoid(S)).sample()
###Output
_____no_output_____
###Markdown
Code 15.9 Hm = homework missing. The dog's decision whether or not to eat a piece of homework is not influenced by any relevant variable.
###Code
D = tfd.Bernoulli(probs=0.5).sample((N,)).numpy()  # dogs completely random: one draw per student with p=0.5
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
###Markdown
Since the missing values are random, missingness does not necessarily change the overall distribution of homework scores. Code 15.10 Here studying influences whether a dog eats the homework (S -> D): students who study a lot do not play with their dogs, and the dogs take revenge by eating the homework.
###Code
D = np.where(S > 0, 1, 0)
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
###Markdown
Now every student who studies more than average (0) is missing their homework. Code 15.11 The case of a noisy home and its influence on both homework and the dog's behavior.
###Code
# TODO - use seed; have not been able to make it work with tfp
N = 1000
X = tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=(N,)).sample().numpy()
S = tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=(N,)).sample().numpy()
logits = 2 + S - 2 * X
H = tfd.Binomial(total_count=10, logits=logits).sample().numpy()
D = np.where(X > 1, 1, 0)
Hm = np.where(D == 1, np.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
dat = dict(H=H, S=S)
def model_15_3(S):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="alpha"), sample_shape=1))
betaS = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaS"), sample_shape=1))
logits = tf.squeeze(alpha[...,tf.newaxis] + betaS[...,tf.newaxis] * S)
H = yield tfd.Independent(tfd.Binomial(total_count=10, logits=logits), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_3 = model_15_3(dat["S"])
NUM_CHAINS_FOR_15_3 = 4
alpha_init, betaS_init, _ = jdc_15_3.sample()
init_state = [
tf.tile(alpha_init, (NUM_CHAINS_FOR_15_3,)),
tf.tile(betaS_init, (NUM_CHAINS_FOR_15_3,))
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
]
posterior_15_3, trace_15_3 = sample_posterior(jdc_15_3,
observed_data=(dat["H"],),
params=['alpha', 'betaS'],
init_state=init_state,
bijectors=bijectors)
az.summary(trace_15_3, round_to=2, kind='all', credible_interval=0.89)
###Output
_____no_output_____
###Markdown
The true coefficient on S should be 1.00. We don't expect to recover it exactly, but the estimate above is way off. Code 15.13 We now refit the model using only the homework scores that were not eaten (D == 0).
###Code
dat = dict(H=H[D==0], S=S[D==0])
def model_15_4(S):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="alpha"), sample_shape=1))
betaS = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaS"), sample_shape=1))
logits = tf.squeeze(alpha[...,tf.newaxis] + betaS[...,tf.newaxis] * S)
H = yield tfd.Independent(tfd.Binomial(total_count=10, logits=logits), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_4 = model_15_4(dat["S"])
NUM_CHAINS_FOR_15_4 = 2
alpha_init, betaS_init, _ = jdc_15_4.sample()
init_state = [
tf.tile(alpha_init, (NUM_CHAINS_FOR_15_4,)),
tf.tile(betaS_init, (NUM_CHAINS_FOR_15_4,))
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
]
posterior_15_4, trace_15_4 = sample_posterior(jdc_15_4,
observed_data=(dat["H"],),
params=['alpha', 'betaS'],
init_state=init_state,
bijectors=bijectors)
az.summary(trace_15_4, round_to=2, kind='all', credible_interval=0.89)
###Output
_____no_output_____
###Markdown
Code 15.14
###Code
D = np.where(np.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = tfd.Normal(loc=0., scale=1.).sample((N,))
H = tfd.Binomial(total_count=10, logits=S).sample().numpy()
D = np.where(H < 5, 1, 0)
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
###Markdown
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ksachdeva/rethinking-tensorflow-probability/blob/master/notebooks/15_missing_data_and_other_opportunities.ipynb) Chapter 15 - Missing Data and Other Opportunities Imports and utility functions
###Code
# Install packages that are not installed in colab
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
%tensorflow_version 2.X
!pip install watermark
!pip install arviz
USE_NIGHTLY_TFP = True # @param
if IN_COLAB and USE_NIGHTLY_TFP:
!pip install --upgrade tf-nightly
!pip install --upgrade tfp-nightly
%load_ext watermark
# Core
import numpy as np
import arviz as az
import pandas as pd
import xarray as xr
import tensorflow as tf
import tensorflow_probability as tfp
# visualization
import matplotlib.pyplot as plt
# aliases
tfd = tfp.distributions
tfb = tfp.bijectors
Root = tfd.JointDistributionCoroutine.Root
%watermark -p numpy,tensorflow,tensorflow_probability,arviz,scipy,pandas
# config of various plotting libraries
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
if not USE_NIGHTLY_TFP:
assert tf.__version__ >= '2.1.0', "Tensorflow version should be at minimum 2.1.0"
assert tfp.__version__ >= '0.9.0', "TFP version should be at minimum 0.9.0"
###Output
_____no_output_____
###Markdown
Tensorflow MCMC Sampling helpers
###Code
USE_XLA = False
NUMBER_OF_CHAINS = 2
NUMBER_OF_BURNIN = 500
NUMBER_OF_SAMPLES = 500
NUMBER_OF_LEAPFROG_STEPS = 4
def _trace_to_arviz(trace=None,
sample_stats=None,
observed_data=None,
prior_predictive=None,
posterior_predictive=None,
inplace=True):
if trace is not None and isinstance(trace, dict):
trace = {k: np.swapaxes(v.numpy(), 1, 0)
for k, v in trace.items()}
if sample_stats is not None and isinstance(sample_stats, dict):
sample_stats = {k: v.numpy().T for k, v in sample_stats.items()}
if prior_predictive is not None and isinstance(prior_predictive, dict):
prior_predictive = {k: v[np.newaxis]
for k, v in prior_predictive.items()}
if posterior_predictive is not None and isinstance(posterior_predictive, dict):
if isinstance(trace, az.InferenceData) and inplace == True:
return trace + az.from_dict(posterior_predictive=posterior_predictive)
else:
trace = None
return az.from_dict(
posterior=trace,
sample_stats=sample_stats,
prior_predictive=prior_predictive,
posterior_predictive=posterior_predictive,
observed_data=observed_data,
)
@tf.function(autograph=False, experimental_compile=USE_XLA)
def run_chain(init_state,
bijectors,
step_size,
target_log_prob_fn,
num_leapfrog_steps=NUMBER_OF_LEAPFROG_STEPS,
num_samples=NUMBER_OF_SAMPLES,
burnin=NUMBER_OF_BURNIN,
):
def _trace_fn_transitioned(_, pkr):
return (
pkr.inner_results.inner_results.log_accept_ratio
)
hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn,
num_leapfrog_steps=num_leapfrog_steps,
step_size=step_size)
inner_kernel = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=hmc_kernel,
bijector=bijectors)
kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=inner_kernel,
target_accept_prob=.8,
num_adaptation_steps=int(0.8*burnin),
log_accept_prob_getter_fn=lambda pkr: pkr.inner_results.log_accept_ratio
)
results, sampler_stat = tfp.mcmc.sample_chain(
num_results=num_samples,
num_burnin_steps=burnin,
current_state=init_state,
kernel=kernel,
trace_fn=_trace_fn_transitioned)
return results, sampler_stat
def sample_posterior(jdc,
observed_data,
params,
num_chains=NUMBER_OF_CHAINS,
init_state=None,
bijectors=None,
num_samples=NUMBER_OF_SAMPLES,
burnin=NUMBER_OF_BURNIN):
if init_state is None:
init_state = list(jdc.sample(NUMBER_OF_CHAINS)[:-1])
if bijectors is None:
bijectors = [tfb.Identity() for i in init_state]
target_log_prob_fn = lambda *x: jdc.log_prob(x + observed_data)
step_size = 0.1
results, sample_stats = run_chain(init_state,
bijectors,
step_size=step_size,
target_log_prob_fn=target_log_prob_fn,
num_samples=num_samples,
burnin=burnin)
stat_names = ['mean_tree_accept']
sampler_stats = dict(zip(stat_names, [sample_stats]))
posterior = dict(zip(params, results))
return _trace_to_arviz(trace=posterior, sample_stats=sampler_stats)
###Output
_____no_output_____
###Markdown
Dataset URLs
###Code
# You could change the base url to a local dir or to remote raw GitHub content
_BASE_URL = "https://raw.githubusercontent.com/ksachdeva/rethinking-tensorflow-probability/master/data"
WAFFLE_DIVORCE_DATASET_PATH = f"{_BASE_URL}/WaffleDivorce.csv"
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake():
pancake = tfd.Categorical(logits=np.ones(3)).sample().numpy()
sides = np.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
np.random.shuffle(sides)
return sides
# sim 10,000 pancakes
pancakes = []
for i in range(10_000):
pancakes.append(sim_pancake())
pancakes = np.array(pancakes).T
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = np.sum(up == 1)
num_11 = np.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2 In the waffle dataset, both the divorce rate and the marriage rate are measured with substantial error, and that error is reported in the form of standard errors. The error also varies across states. Below we plot the measurement errors.
###Code
d = pd.read_csv(WAFFLE_DIVORCE_DATASET_PATH, sep=";")
# points
ax = az.plot_pair(d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
plot_kwargs=dict(ms=15, mfc="none"))
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + np.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
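###Markdown
A quick supplementary look (an addition, not part of the book code): the reported standard errors do indeed vary quite a bit from state to state.
###Code
# spread of the reported measurement errors across the 50 states
d[["Divorce SE", "Marriage SE"]].describe()
###Output
_____no_output_____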
###Markdown
In the plot above, the length of each vertical line shows how uncertain the observed divorce rate is. Code 15.3
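For reference, the generative model implemented by the coroutine below (priors and likelihood read directly from the code) is
$$
\begin{aligned}
D_{\mathrm{obs},i} &\sim \mathrm{Normal}(D_{\mathrm{true},i},\ D_{\mathrm{SE},i}) \\
D_{\mathrm{true},i} &\sim \mathrm{Normal}(\mu_i,\ \sigma) \\
\mu_i &= \alpha + \beta_A A_i + \beta_M M_i \\
\alpha &\sim \mathrm{Normal}(0,\ 0.2) \\
\beta_A,\ \beta_M &\sim \mathrm{Normal}(0,\ 0.5) \\
\sigma &\sim \mathrm{Exponential}(1)
\end{aligned}
$$
where $D_{\mathrm{SE},i}$ is the reported standard error, standardized by the sample standard deviation of the divorce rate.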
###Code
dat = dict(
D_obs=tf.cast(d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values, dtype=tf.float32),
D_sd=tf.cast(d["Divorce SE"].values / d.Divorce.std(), dtype=tf.float32),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0])
def model_15_1(A, M, D_sd, N):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.2, name="alpha"), sample_shape=1))
betaA = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaA"), sample_shape=1))
betaM = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaM"), sample_shape=1))
sigma = yield Root(tfd.Sample(tfd.Exponential(rate=1., name="sigma"), sample_shape=1))
mu = alpha[...,tf.newaxis] + betaA[...,tf.newaxis] * A + betaM[...,tf.newaxis] * M
scale = sigma[...,tf.newaxis]
D_true = yield tfd.Independent(tfd.Normal(loc=mu, scale=scale), reinterpreted_batch_ndims=1)
D_obs = yield tfd.Independent(tfd.Normal(loc=D_true, scale=D_sd), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_1 = model_15_1(dat["A"], dat["M"], dat["D_sd"], dat["N"])
NUM_CHAINS_FOR_15_1 = 2
init_state = [
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1]),
tf.ones([NUM_CHAINS_FOR_15_1]),
tf.zeros([NUM_CHAINS_FOR_15_1, dat["N"]]),
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
tfb.Identity(),
tfb.Exp(),
tfb.Identity()
]
trace_15_1 = sample_posterior(jdc_15_1,
observed_data=(dat["D_obs"],),
params=['alpha', 'betaA', 'betaM', 'sigma', 'D_true'],
init_state=init_state,
bijectors=bijectors)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
az.summary(trace_15_1, round_to=2, kind='all', credible_interval=0.89)
###Output
_____no_output_____
###Markdown
Code 15.5 What happens when there is measurement error on the predictor variables as well?
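The model below extends the previous one with a latent, standardized marriage rate, treating the observed rate as a noisy measurement of it (again read directly from the code that follows):
$$
\begin{aligned}
D_{\mathrm{obs},i} &\sim \mathrm{Normal}(D_{\mathrm{true},i},\ D_{\mathrm{SE},i}) \\
D_{\mathrm{true},i} &\sim \mathrm{Normal}(\mu_i,\ \sigma) \\
\mu_i &= \alpha + \beta_A A_i + \beta_M M_{\mathrm{true},i} \\
M_{\mathrm{obs},i} &\sim \mathrm{Normal}(M_{\mathrm{true},i},\ M_{\mathrm{SE},i}) \\
M_{\mathrm{true},i} &\sim \mathrm{Normal}(0,\ 1)
\end{aligned}
$$
with the same priors on $\alpha$, $\beta_A$, $\beta_M$, and $\sigma$ as before.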
###Code
dat = dict(
D_obs=tf.cast(d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values, dtype=tf.float32),
D_sd=tf.cast(d["Divorce SE"].values / d.Divorce.std(), dtype=tf.float32),
M_obs=tf.cast(d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values, dtype=tf.float32),
M_sd=tf.cast(d["Marriage SE"].values / d.Marriage.std(), dtype=tf.float32),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0])
def model_15_2(A, M_sd, D_sd, N):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.2, name="alpha"), sample_shape=1))
betaA = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaA"), sample_shape=1))
betaM = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaM"), sample_shape=1))
sigma = yield Root(tfd.Sample(tfd.Exponential(rate=1., name="sigma"), sample_shape=1))
M_true = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="M_true"), sample_shape=N))
mu = alpha[...,tf.newaxis] + betaA[...,tf.newaxis] * A + betaM[...,tf.newaxis] * M_true
scale = sigma[...,tf.newaxis]
D_true = yield tfd.Independent(tfd.Normal(loc=mu, scale=scale), reinterpreted_batch_ndims=1)
D_obs = yield tfd.Independent(tfd.Normal(loc=D_true, scale=D_sd), reinterpreted_batch_ndims=1)
M_obs = yield tfd.Independent(tfd.Normal(loc=M_true, scale=M_sd, name="M_obs"), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_2 = model_15_2(dat["A"], dat["M_sd"], dat["D_sd"], dat["N"])
NUM_CHAINS_FOR_15_2 = 2
init_state = [
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2]),
tf.ones([NUM_CHAINS_FOR_15_2]),
tf.zeros([NUM_CHAINS_FOR_15_2, dat["N"]]), # M_True
tf.zeros([NUM_CHAINS_FOR_15_2, dat["N"]]), # D_True
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
tfb.Identity(),
tfb.Exp(),
tfb.Identity(),
tfb.Identity()
]
trace_15_2 = sample_posterior(jdc_15_2,
observed_data=(dat["D_obs"], dat["M_obs"]),
params=['alpha', 'betaA', 'betaM', 'sigma', 'M_true', 'D_true'],
init_state=init_state,
bijectors=bijectors)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post_D_true = trace_15_2.posterior["D_true"].values[0]
post_M_true = trace_15_2.posterior["M_true"].values[0]
D_est = np.mean(post_D_true, 0)
M_est = np.mean(post_M_true, 0)
plt.plot(dat["M_obs"], dat["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dat["M_obs"][i], M_est[i]], [dat["D_obs"][i], D_est[i]],
"k-", lw=1)
###Output
_____no_output_____
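###Markdown
A quick numeric sketch (an addition, not part of the book code): the average distance between each observed value and its posterior mean gives a rough measure of the shrinkage discussed next. It assumes the cells for Code 15.5 and 15.6 above have been run.
###Code
# mean absolute shift from observed (standardized) value to posterior mean
print("divorce rate: ", np.mean(np.abs(dat["D_obs"].numpy() - D_est)))
print("marriage rate:", np.mean(np.abs(dat["M_obs"].numpy() - M_est)))
###Output
_____no_output_____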
###Markdown
The figure above demonstrates shrinkage of both divorce rate and marriage rate. Solid points are the observed values; open points are posterior means; lines connect the pair of points for the same state. Both variables are shrunk towards the inferred regression relationship. The key insight with measurement error is that any uncertain piece of data can be replaced by a distribution that reflects that uncertainty. Code 15.7
###Code
# Simulated toy data
N = 500
A = tfd.Normal(loc=0., scale=1.0).sample((N,))
M = tfd.Normal(loc=-A, scale=1.0).sample()
D = tfd.Normal(loc=A, scale=1.0).sample()
A_obs = tfd.Normal(loc=A, scale=1.).sample()
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = tfd.Normal(loc=0., scale=1.).sample((N,))
H = tfd.Binomial(total_count=10, probs=tf.sigmoid(S)).sample()
###Output
_____no_output_____
###Markdown
Code 15.9 Hm = homework missing. The dog's decision to eat a piece of homework (or not) is not influenced by any relevant variable.
###Code
D = tfd.Bernoulli(probs=0.5).sample((N,)).numpy()  # dogs completely random, one decision per assignment
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
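###Markdown
A small check (an addition, not in the book): compare the mean score over all assignments with the mean over only the assignments the dog did not eat.
###Code
# full-sample mean vs mean over the non-missing entries
print("all homework:     ", np.mean(H.numpy()))
print("non-missing only: ", np.nanmean(Hm))
###Output
_____no_output_____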
###Markdown
Since the missing values are random, missingness does not necessarily change the overall distribution of homework scores. Code 15.10 Here studying influences whether a dog eats the homework (S->D): students who study a lot do not play with their dogs, and the dogs take revenge by eating the homework.
###Code
D = np.where(S > 0, 1, 0)
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
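###Markdown
A quick check of the new missingness mechanism (an addition, not in the book): an assignment should be missing exactly when its student studies more than average.
###Code
missing = np.isnan(Hm)
print("fraction of homework missing:", missing.mean())
print("missing exactly when S > 0:  ", bool(np.all(missing == (S.numpy() > 0))))
###Output
_____no_output_____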
###Markdown
Now every student who studies more than average (0) is missing homework. Code 15.11 The case of a noisy house and its influence on both the homework and the dog's behavior.
###Code
# TODO - use seed; have not been able to make it work with tfp
N = 1000
X = tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=(N,)).sample().numpy()
S = tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=(N,)).sample().numpy()
logits = 2 + S - 2 * X
H = tfd.Binomial(total_count=10, logits=logits).sample().numpy()
D = np.where(X > 1, 1, 0)
Hm = np.where(D == 1, np.nan, H)
###Output
_____no_output_____
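###Markdown
A quick check (an addition, not in the book): with X > 1 triggering the dog, roughly 16% of assignments should go missing, and because X also lowers H, the surviving scores are no longer representative of the full set.
###Code
missing = np.isnan(Hm)
print("fraction of homework missing:", missing.mean())
print("mean H, all students: ", H.mean())
print("mean H, observed only:", H[~missing].mean())
###Output
_____no_output_____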
###Markdown
Code 15.12
###Code
dat = dict(H=H, S=S)
def model_15_3(S):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="alpha"), sample_shape=1))
betaS = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaS"), sample_shape=1))
logits = tf.squeeze(alpha[...,tf.newaxis] + betaS[...,tf.newaxis] * S)
H = yield tfd.Independent(tfd.Binomial(total_count=10, logits=logits), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_3 = model_15_3(dat["S"])
NUM_CHAINS_FOR_15_3 = 4
alpha_init, betaS_init, _ = jdc_15_3.sample()
init_state = [
tf.tile(alpha_init, (NUM_CHAINS_FOR_15_3,)),
tf.tile(betaS_init, (NUM_CHAINS_FOR_15_3,))
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
]
trace_15_3 = sample_posterior(jdc_15_3,
observed_data=(dat["H"],),
params=['alpha', 'betaS'],
init_state=init_state,
bijectors=bijectors)
az.summary(trace_15_3, round_to=2, kind='all', credible_interval=0.89)
###Output
_____no_output_____
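###Markdown
As a quick aside (an addition, not in the book), the posterior mean of betaS can be read off the trace directly for comparison with the value used in the simulation.
###Code
print("posterior mean of betaS:", float(trace_15_3.posterior["betaS"].values.mean()))
###Output
_____no_output_____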
###Markdown
The true coefficient on S should be 1.00. We don't expect to recover it exactly, but the estimate above is way off. Code 15.13 We now refit the model using only the non-missing cases (the homework the dogs did not eat).
###Code
dat = dict(H=H[D==0], S=S[D==0])
def model_15_4(S):
def _generator():
alpha = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=1., name="alpha"), sample_shape=1))
betaS = yield Root(tfd.Sample(tfd.Normal(loc=0., scale=0.5, name="betaS"), sample_shape=1))
logits = tf.squeeze(alpha[...,tf.newaxis] + betaS[...,tf.newaxis] * S)
H = yield tfd.Independent(tfd.Binomial(total_count=10, logits=logits), reinterpreted_batch_ndims=1)
return tfd.JointDistributionCoroutine(_generator, validate_args=False)
jdc_15_4 = model_15_4(dat["S"])
NUM_CHAINS_FOR_15_4 = 2
alpha_init, betaS_init, _ = jdc_15_4.sample()
init_state = [
tf.tile(alpha_init, (NUM_CHAINS_FOR_15_4,)),
tf.tile(betaS_init, (NUM_CHAINS_FOR_15_4,))
]
bijectors = [
tfb.Identity(),
tfb.Identity(),
]
trace_15_4 = sample_posterior(jdc_15_4,
observed_data=(dat["H"],),
params=['alpha', 'betaS'],
init_state=init_state,
bijectors=bijectors)
az.summary(trace_15_4, round_to=2, kind='all', credible_interval=0.89)
###Output
_____no_output_____
###Markdown
Code 15.14
###Code
D = np.where(np.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = tfd.Normal(loc=0., scale=1.).sample((N,))
H = tfd.Binomial(total_count=10, logits=S).sample().numpy()
D = np.where(H < 5, 1, 0)
Hm = np.where(D == 1, np.nan, H)
Hm
###Output
_____no_output_____
###Markdown
Chapter 15. Missing Data and Other Opportunities
###Code
import math
import arviz as az
import matplotlib.pyplot as plt
import pandas as pd
import jax.numpy as jnp
from jax import ops, random, vmap
from jax.scipy.special import expit
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import print_summary
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, init_to_value
%config InlineBackend.figure_formats = ["svg"]
az.style.use("arviz-darkgrid")
numpyro.set_host_device_count(4)
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake(seed):
pancake = dist.Categorical(logits=jnp.ones(3)).sample(random.PRNGKey(2 * seed))
sides = jnp.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
return random.permutation(random.PRNGKey(2 * seed + 1), sides)
# sim 10,000 pancakes
pancakes = vmap(sim_pancake, out_axes=1)(jnp.arange(10000))
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = jnp.sum(up == 1)
num_11 = jnp.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2
###Code
WaffleDivorce = pd.read_csv("../data/WaffleDivorce.csv", sep=";")
d = WaffleDivorce
# points
ax = az.plot_pair(
d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
scatter_kwargs=dict(ms=15, mfc="none"),
)
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + jnp.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
Code 15.3
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + bA * A + bM * M
D_true = numpyro.sample("D_true", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_true, D_sd), obs=D_obs)
m15_1 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_1.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
m15_1.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
D_true[0] 1.17 0.37 1.17 0.57 1.74 2401.51 1.00
D_true[1] 0.68 0.57 0.68 -0.22 1.60 3620.74 1.00
D_true[2] 0.43 0.34 0.42 -0.14 0.96 3700.02 1.00
D_true[3] 1.42 0.45 1.42 0.68 2.11 3091.26 1.00
D_true[4] -0.90 0.13 -0.90 -1.12 -0.70 4414.02 1.00
D_true[5] 0.66 0.39 0.66 0.05 1.29 3429.74 1.00
D_true[6] -1.38 0.34 -1.38 -1.92 -0.85 3758.00 1.00
D_true[7] -0.34 0.49 -0.34 -1.06 0.48 3136.96 1.00
D_true[8] -1.88 0.59 -1.88 -2.77 -0.91 2435.42 1.00
D_true[9] -0.62 0.16 -0.62 -0.88 -0.36 4020.95 1.00
D_true[10] 0.77 0.27 0.78 0.33 1.20 3201.66 1.00
D_true[11] -0.55 0.48 -0.56 -1.33 0.23 2645.62 1.00
D_true[12] 0.17 0.48 0.17 -0.62 0.89 1486.11 1.00
D_true[13] -0.87 0.23 -0.87 -1.24 -0.48 3540.33 1.00
D_true[14] 0.56 0.31 0.55 0.12 1.09 3907.98 1.00
D_true[15] 0.29 0.38 0.29 -0.30 0.91 4415.77 1.00
D_true[16] 0.50 0.42 0.49 -0.14 1.15 4748.56 1.00
D_true[17] 1.25 0.34 1.24 0.73 1.82 3141.66 1.00
D_true[18] 0.43 0.38 0.43 -0.23 0.96 3675.14 1.00
D_true[19] 0.41 0.53 0.40 -0.44 1.23 2010.13 1.00
D_true[20] -0.55 0.32 -0.55 -1.07 -0.06 3516.71 1.00
D_true[21] -1.10 0.26 -1.10 -1.52 -0.71 3084.19 1.00
D_true[22] -0.27 0.26 -0.26 -0.68 0.13 4074.25 1.00
D_true[23] -1.00 0.29 -1.00 -1.45 -0.54 3238.19 1.00
D_true[24] 0.43 0.40 0.42 -0.19 1.07 3781.77 1.00
D_true[25] -0.03 0.31 -0.03 -0.51 0.47 4161.37 1.00
D_true[26] 0.00 0.51 0.02 -0.84 0.80 3814.40 1.00
D_true[27] -0.16 0.40 -0.16 -0.84 0.43 4237.18 1.00
D_true[28] -0.26 0.48 -0.29 -1.04 0.49 3130.68 1.00
D_true[29] -1.81 0.23 -1.81 -2.15 -1.39 3863.86 1.00
D_true[30] 0.18 0.44 0.19 -0.52 0.89 3790.34 1.00
D_true[31] -1.66 0.17 -1.66 -1.92 -1.39 3620.41 1.00
D_true[32] 0.12 0.24 0.12 -0.27 0.49 3476.00 1.00
D_true[33] -0.06 0.52 -0.04 -0.90 0.74 2154.18 1.00
D_true[34] -0.12 0.22 -0.12 -0.45 0.23 4052.89 1.00
D_true[35] 1.28 0.42 1.27 0.55 1.89 3505.41 1.00
D_true[36] 0.23 0.35 0.23 -0.32 0.80 4023.45 1.00
D_true[37] -1.02 0.22 -1.01 -1.36 -0.68 4202.55 1.00
D_true[38] -0.93 0.56 -0.94 -1.93 -0.12 3218.85 1.00
D_true[39] -0.68 0.32 -0.68 -1.22 -0.19 4395.19 1.00
D_true[40] 0.24 0.54 0.24 -0.61 1.10 3664.71 1.00
D_true[41] 0.75 0.33 0.74 0.19 1.27 3079.24 1.00
D_true[42] 0.19 0.18 0.19 -0.09 0.47 3873.93 1.00
D_true[43] 0.80 0.42 0.82 0.11 1.44 2345.56 1.00
D_true[44] -0.40 0.51 -0.41 -1.26 0.36 2814.81 1.00
D_true[45] -0.39 0.24 -0.40 -0.80 -0.02 3852.89 1.00
D_true[46] 0.15 0.31 0.16 -0.35 0.63 3782.98 1.00
D_true[47] 0.57 0.46 0.58 -0.18 1.29 4495.36 1.00
D_true[48] -0.64 0.27 -0.64 -1.05 -0.21 3630.93 1.00
D_true[49] 0.84 0.62 0.85 -0.13 1.83 2526.49 1.00
a -0.05 0.10 -0.05 -0.22 0.09 2354.00 1.00
bA -0.62 0.16 -0.62 -0.88 -0.38 2044.70 1.00
bM 0.05 0.16 0.05 -0.21 0.31 1410.64 1.00
sigma 0.59 0.11 0.58 0.41 0.76 752.39 1.01
Number of divergences: 0
###Markdown
Code 15.5
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M_obs=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
M_sd=d["Marriage SE"].values / d.Marriage.std(),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M_sd, M_obs, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
M_est = numpyro.sample("M_est", dist.Normal(0, 1).expand([N]))
numpyro.sample("M_obs", dist.Normal(M_est, M_sd), obs=M_obs)
mu = a + bA * A + bM * M_est
D_est = numpyro.sample("D_est", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_est, D_sd), obs=D_obs)
m15_2 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_2.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post = m15_2.get_samples()
D_est = jnp.mean(post["D_est"], 0)
M_est = jnp.mean(post["M_est"], 0)
plt.plot(dlist["M_obs"], dlist["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dlist["M_obs"][i], M_est[i]], [dlist["D_obs"][i], D_est[i]], "k-", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.7
###Code
N = 500
A = dist.Normal().sample(random.PRNGKey(0), (N,))
M = dist.Normal(-A).sample(random.PRNGKey(1))
D = dist.Normal(A).sample(random.PRNGKey(2))
A_obs = dist.Normal(A).sample(random.PRNGKey(3))
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, expit(S)).sample(random.PRNGKey(1))
###Output
_____no_output_____
###Markdown
Code 15.9
###Code
D = dist.Bernoulli(0.5).sample(random.PRNGKey(2), (N,))  # dogs completely random, one decision per assignment
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.10
###Code
D = jnp.where(S > 0, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.11
###Code
with numpyro.handlers.seed(rng_seed=501):
N = 1000
X = numpyro.sample("X", dist.Normal().expand([N]))
S = numpyro.sample("S", dist.Normal().expand([N]))
H = numpyro.sample("H", dist.Binomial(10, logits=2 + S - 2 * X))
D = jnp.where(X > 1, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
dat_list = dict(H=H, S=S)
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_3 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_3.run(random.PRNGKey(0), **dat_list)
m15_3.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.32 0.03 1.32 1.28 1.36 1268.10 1.00
bS 0.62 0.03 0.62 0.58 0.66 1078.05 1.00
Number of divergences: 0
###Markdown
Code 15.13
###Code
dat_list0 = dict(H=H[D == 0], S=S[D == 0])
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_4 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_4.run(random.PRNGKey(0), **dat_list0)
m15_4.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.92 0.04 1.92 1.86 1.97 1007.18 1.00
bS 0.72 0.03 0.72 0.67 0.78 784.14 1.00
Number of divergences: 0
###Markdown
Code 15.14
###Code
D = jnp.where(jnp.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, logits=S).sample(random.PRNGKey(1))
D = jnp.where(H < 5, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.16
###Code
milk = pd.read_csv("../data/milk.csv", sep=";")
d = milk
d["neocortex.prop"] = d["neocortex.perc"] / 100
d["logmass"] = d.mass.apply(math.log)
###Output
_____no_output_____
###Markdown
Code 15.17
###Code
dat_list = dict(
K=d["kcal.per.g"].pipe(lambda x: (x - x.mean()) / x.std()).values,
B=d["neocortex.prop"].pipe(lambda x: (x - x.mean()) / x.std()).values,
M=d.logmass.pipe(lambda x: (x - x.mean()) / x.std()).values,
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(jnp.isnan(B).sum())]).mask(False)
)
B = ops.index_update(B, jnp.nonzero(jnp.isnan(B))[0], B_impute)
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_5 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_5.run(random.PRNGKey(0), **dat_list)
###Output
_____no_output_____
###Markdown
Code 15.18
###Code
m15_5.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
B_impute[0] -0.59 0.93 -0.61 -2.04 0.88 1728.72 1.00
B_impute[1] -0.71 0.96 -0.76 -2.36 0.72 1507.54 1.00
B_impute[2] -0.74 0.95 -0.77 -2.37 0.72 1925.34 1.00
B_impute[3] -0.30 0.92 -0.30 -1.77 1.12 2162.72 1.00
B_impute[4] 0.45 0.89 0.43 -0.97 1.83 1891.39 1.00
B_impute[5] -0.18 0.92 -0.19 -1.59 1.34 2179.08 1.00
B_impute[6] 0.17 0.89 0.18 -1.07 1.73 2225.82 1.00
B_impute[7] 0.31 0.86 0.32 -1.10 1.58 2121.79 1.00
B_impute[8] 0.51 0.87 0.54 -0.76 1.98 2257.69 1.00
B_impute[9] -0.44 0.93 -0.45 -1.91 1.06 1837.62 1.00
B_impute[10] -0.28 0.89 -0.30 -1.63 1.19 2123.46 1.00
B_impute[11] 0.14 0.89 0.16 -1.28 1.56 2139.01 1.00
a 0.04 0.17 0.04 -0.23 0.30 1876.58 1.00
bB 0.50 0.24 0.51 0.15 0.89 663.53 1.01
bM -0.55 0.20 -0.55 -0.86 -0.23 765.52 1.01
nu -0.05 0.21 -0.05 -0.36 0.28 1606.00 1.00
sigma 0.84 0.14 0.83 0.62 1.05 869.98 1.01
sigma_B 1.01 0.17 0.99 0.75 1.27 1034.07 1.00
Number of divergences: 0
###Markdown
Code 15.19
###Code
obs_idx = d["neocortex.prop"].notnull().values
dat_list_obs = dict(
K=dat_list["K"][obs_idx], B=dat_list["B"][obs_idx], M=dat_list["M"][obs_idx]
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_6 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_6.run(random.PRNGKey(0), **dat_list_obs)
m15_6.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
a 0.10 0.20 0.09 -0.22 0.43 2589.09 1.00
bB 0.61 0.28 0.62 0.18 1.03 1375.60 1.00
bM -0.64 0.25 -0.65 -1.05 -0.27 1340.98 1.00
nu 0.00 0.23 0.00 -0.37 0.37 2336.09 1.00
sigma 0.87 0.18 0.84 0.62 1.15 1389.21 1.00
sigma_B 1.05 0.19 1.03 0.73 1.31 2136.89 1.00
Number of divergences: 0
###Markdown
Code 15.20
###Code
az.plot_forest(
[az.from_numpyro(m15_5), az.from_numpyro(m15_6)],
model_names=["m15.5", "m15.6"],
var_names=["bB", "bM"],
combined=True,
hdi_prob=0.89,
)
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.21
###Code
post = m15_5.get_samples()
B_impute_mu = jnp.mean(post["B_impute"], 0)
B_impute_ci = jnp.percentile(post["B_impute"], q=(5.5, 94.5), axis=0)
# B vs K
plt.plot(dat_list["B"], dat_list["K"], "o")
plt.gca().set(xlabel="neocortex percent (std)", ylabel="kcal mild (std)")
miss_idx = pd.isna(dat_list["B"]).nonzero()[0]
Ki = dat_list["K"][miss_idx]
plt.plot(B_impute_mu, Ki, "ko", mfc="none")
for i in range(12):
plt.plot(B_impute_ci[:, i], jnp.repeat(Ki[i], 2), "k", lw=1)
plt.show()
# M vs B
plt.plot(dat_list["M"], dat_list["B"], "o")
plt.gca().set(xlabel="log body mass (std)", ylabel="neocortex percent (std)")
Mi = dat_list["M"][miss_idx]
plt.plot(Mi, B_impute_mu, "ko", mfc="none")
for i in range(12):
plt.plot(jnp.repeat(Mi[i], 2), B_impute_ci[:, i], "k", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.22
###Code
def model(B, M, K):
# priors
a = numpyro.sample("a", dist.Normal(0, 0.5))
muB = numpyro.sample("muB", dist.Normal(0, 0.5))
muM = numpyro.sample("muM", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
Rho_BM = numpyro.sample("Rho_BM", dist.LKJ(2, 2))
Sigma_BM = numpyro.sample("Sigma_BM", dist.Exponential(1).expand([2]))
# define B_merge as mix of observed and imputed values
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(jnp.isnan(B).sum())]).mask(False)
)
B_merge = ops.index_update(B, jnp.nonzero(jnp.isnan(B))[0], B_impute)
# M and B correlation
MB = jnp.stack([M, B_merge], axis=1)
cov = jnp.outer(Sigma_BM, Sigma_BM) * Rho_BM
numpyro.sample("MB", dist.MultivariateNormal(jnp.stack([muM, muB]), cov), obs=MB)
# K as function of B and M
mu = a + bB * B_merge + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_7 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_7.run(random.PRNGKey(0), **dat_list)
post = m15_7.get_samples(group_by_chain=True)
print_summary({k: v for k, v in post.items() if k in ["bM", "bB", "Rho_BM"]})
###Output
mean std median 5.0% 95.0% n_eff r_hat
Rho_BM[0,0] 1.00 0.00 1.00 1.00 1.00 nan nan
Rho_BM[0,1] 0.61 0.14 0.63 0.39 0.81 1384.32 1.00
Rho_BM[1,0] 0.61 0.14 0.63 0.39 0.81 1384.32 1.00
Rho_BM[1,1] 1.00 0.00 1.00 1.00 1.00 1599.26 1.00
bB 0.59 0.26 0.61 0.13 0.97 937.14 1.00
bM -0.65 0.22 -0.66 -1.01 -0.28 1130.60 1.00
###Markdown
Code 15.23
###Code
B_missidx = pd.isna(dat_list["B"]).nonzero()[0]
###Output
_____no_output_____
###Markdown
Code 15.24
###Code
Moralizing_gods = pd.read_csv("../data/Moralizing_gods.csv", sep=";")
Moralizing_gods
###Output
_____no_output_____
###Markdown
Code 15.25
###Code
Moralizing_gods.moralizing_gods.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Code 15.26
###Code
symbol = Moralizing_gods.moralizing_gods.apply(lambda x: "." if x == 1 else "o")
symbol[Moralizing_gods.moralizing_gods.isna()] = "x"
color = Moralizing_gods.moralizing_gods.apply(lambda x: "k" if pd.isna(x) else "b")
for pch in ["o", ".", "x"]:
plt.scatter(
Moralizing_gods.year[symbol == pch],
Moralizing_gods.population[symbol == pch],
marker=pch,
color=color[symbol == pch],
facecolor="none" if pch == "o" else None,
lw=1.5,
alpha=0.7,
)
plt.gca().set(xlabel="Time (year)", ylabel="Population size")
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.27
###Code
dmg = Moralizing_gods
dmg.astype(str).groupby(["moralizing_gods", "writing"]).size().unstack(fill_value=0)
###Output
_____no_output_____
###Markdown
Code 15.28
###Code
dmg = Moralizing_gods
haw = dmg.polity == "Big Island Hawaii"
dmg.loc[haw, ["year", "population", "writing", "moralizing_gods"]].T.round(3)
###Output
_____no_output_____
###Markdown
Code 15.29
###Code
with numpyro.handlers.seed(rng_seed=9):
N_houses = 100
alpha = 5
beta = -3
k = 0.5
r = 0.2
cat = numpyro.sample("cat", dist.Bernoulli(k).expand([N_houses]))
notes = numpyro.sample("notes", dist.Poisson(alpha + beta * cat))
R_C = numpyro.sample("R_C", dist.Bernoulli(r).expand([N_houses]))
cat_obs = jnp.where(R_C == 1, -9, cat)
###Output
_____no_output_____
###Markdown
Code 15.30
###Code
dat = dict(notes=notes, cat=cat_obs, RC=R_C, N=N_houses - 1)
def model(N, RC, cat, notes):
# priors
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
# cat NA:
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
# cat known present/absent:
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
m15_8 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_8.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.31
###Code
def model(N, RC, cat, notes, link=False):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
if link:
lpC0 = numpyro.deterministic(
"lpC0", jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes)
)
lpC1 = numpyro.deterministic(
"lpC0", jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes)
)
numpyro.deterministic("PrC1", jnp.exp(lpC1) / (jnp.exp(lpC1) + jnp.exp(lpC0)))
m15_9 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_9.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.32
###Code
with numpyro.handlers.seed(rng_seed=100):
x = numpyro.sample("x", dist.Normal().expand([10]))
y = numpyro.sample("y", dist.Normal(x))
x = jnp.concatenate([x, jnp.array([jnp.nan])])
y = jnp.concatenate([y, jnp.array([100])])
d = dict(x=x, y=y)
###Output
_____no_output_____
###Markdown
Code 15.33
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
cc = d.dropna(subset=["brain", "body"]).index
B = d.brain[cc]
M = d.body[cc]
B = B.values / max(B)
M = M.values / max(M)
###Output
_____no_output_____
###Markdown
Code 15.34
###Code
Bse = B * 0.1
Mse = M * 0.1
###Output
_____no_output_____
###Markdown
Code 15.35
###Code
dat_list = dict(B=B, M=M)
def model(M, B):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + b * jnp.log(M)
numpyro.sample("B", dist.LogNormal(mu, sigma), obs=B)
m15H4 = MCMC(NUTS(model), 500, 500)
m15H4.run(random.PRNGKey(0), **dat_list)
###Output
sample: 100%|██████████| 1000/1000 [00:05<00:00, 197.60it/s, 7 steps of size 2.96e-01. acc. prob=0.93]
###Markdown
Code 15.36
###Code
start = dict(M_true=dat_list["M"], B_true=dat_list["B"])
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Code 15.37
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
d.isna().sum()
###Output
_____no_output_____
###Markdown
Code 15.38
###Code
cc = d.dropna(subset=["body"]).index
M = d.body[cc]
M = M.values / max(M)
B = d.brain[cc]
B = B.values / B.max(skipna=True)
###Output
_____no_output_____
###Markdown
Code 15.39
###Code
start = dict(B_impute=jnp.repeat(0.5, 56))
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Chapter 15. Missing Data and Other Opportunities
###Code
import math
import os
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import jax.numpy as jnp
from jax import ops, random, vmap
from jax.scipy.special import expit
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import print_summary
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, init_to_value
if "SVG" in os.environ:
%config InlineBackend.figure_formats = ["svg"]
az.style.use("arviz-darkgrid")
numpyro.set_platform("cpu")
numpyro.set_host_device_count(4)
###Output
_____no_output_____
###Markdown
Code 15.1
###Code
# simulate a pancake and return randomly ordered sides
def sim_pancake(seed):
pancake = dist.Categorical(logits=jnp.ones(3)).sample(random.PRNGKey(2 * seed))
sides = jnp.array([1, 1, 1, 0, 0, 0]).reshape(3, 2).T[:, pancake]
return random.permutation(random.PRNGKey(2 * seed + 1), sides)
# sim 10,000 pancakes
pancakes = vmap(sim_pancake, out_axes=1)(jnp.arange(10000))
up = pancakes[0]
down = pancakes[1]
# compute proportion 1/1 (BB) out of all 1/1 and 1/0
num_11_10 = jnp.sum(up == 1)
num_11 = jnp.sum((up == 1) & (down == 1))
num_11 / num_11_10
###Output
_____no_output_____
###Markdown
Code 15.2
###Code
WaffleDivorce = pd.read_csv("../data/WaffleDivorce.csv", sep=";")
d = WaffleDivorce
# points
ax = az.plot_pair(
d[["MedianAgeMarriage", "Divorce"]].to_dict(orient="list"),
scatter_kwargs=dict(ms=15, mfc="none"),
)
ax.set(ylim=(4, 15), xlabel="Median age marriage", ylabel="Divorce rate")
# standard errors
for i in range(d.shape[0]):
ci = d.Divorce[i] + jnp.array([-1, 1]) * d["Divorce SE"][i]
x = d.MedianAgeMarriage[i]
plt.plot([x, x], ci, "k")
###Output
_____no_output_____
###Markdown
Code 15.3
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + bA * A + bM * M
D_true = numpyro.sample("D_true", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_true, D_sd), obs=D_obs)
m15_1 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_1.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.4
###Code
m15_1.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
D_true[0] 1.18 0.37 1.18 0.63 1.82 2994.08 1.00
D_true[1] 0.68 0.56 0.68 -0.15 1.63 3886.40 1.00
D_true[2] 0.43 0.35 0.43 -0.10 1.02 3496.83 1.00
D_true[3] 1.41 0.47 1.42 0.65 2.11 3191.36 1.00
D_true[4] -0.90 0.13 -0.90 -1.10 -0.69 4891.79 1.00
D_true[5] 0.65 0.38 0.66 0.07 1.29 3366.77 1.00
D_true[6] -1.37 0.35 -1.37 -1.94 -0.85 3792.56 1.00
D_true[7] -0.35 0.49 -0.33 -1.10 0.44 3352.11 1.00
D_true[8] -1.88 0.60 -1.89 -2.82 -0.90 2520.68 1.00
D_true[9] -0.62 0.16 -0.62 -0.85 -0.32 3877.20 1.00
D_true[10] 0.78 0.28 0.77 0.35 1.22 4248.21 1.00
D_true[11] -0.56 0.49 -0.56 -1.35 0.21 3013.46 1.00
D_true[12] 0.15 0.49 0.17 -0.59 0.97 1573.35 1.00
D_true[13] -0.87 0.23 -0.87 -1.25 -0.50 3840.61 1.00
D_true[14] 0.56 0.31 0.55 0.11 1.09 3881.69 1.00
D_true[15] 0.28 0.38 0.29 -0.35 0.84 3537.87 1.00
D_true[16] 0.50 0.42 0.50 -0.13 1.17 4165.48 1.00
D_true[17] 1.26 0.33 1.26 0.74 1.81 3670.00 1.00
D_true[18] 0.44 0.39 0.43 -0.15 1.07 4188.21 1.00
D_true[19] 0.43 0.52 0.40 -0.40 1.26 2300.86 1.00
D_true[20] -0.55 0.32 -0.56 -1.11 -0.08 4109.26 1.00
D_true[21] -1.09 0.27 -1.09 -1.50 -0.66 3322.54 1.00
D_true[22] -0.26 0.27 -0.27 -0.69 0.16 4368.18 1.00
D_true[23] -1.01 0.28 -1.01 -1.43 -0.55 3460.48 1.00
D_true[24] 0.44 0.41 0.43 -0.20 1.09 3508.90 1.00
D_true[25] -0.03 0.31 -0.02 -0.51 0.49 5159.17 1.00
D_true[26] -0.01 0.51 -0.00 -0.89 0.74 3082.96 1.00
D_true[27] -0.16 0.39 -0.17 -0.79 0.47 4702.21 1.00
D_true[28] -0.26 0.48 -0.28 -1.07 0.49 3698.39 1.00
D_true[29] -1.81 0.24 -1.82 -2.19 -1.40 3468.19 1.00
D_true[30] 0.17 0.44 0.17 -0.50 0.87 4467.52 1.00
D_true[31] -1.66 0.16 -1.66 -1.95 -1.42 3546.08 1.00
D_true[32] 0.12 0.24 0.12 -0.25 0.49 4106.29 1.00
D_true[33] -0.08 0.53 -0.06 -0.87 0.77 2352.92 1.00
D_true[34] -0.12 0.22 -0.12 -0.46 0.23 3803.36 1.00
D_true[35] 1.28 0.43 1.27 0.64 2.01 4563.89 1.00
D_true[36] 0.24 0.36 0.23 -0.31 0.81 4425.27 1.00
D_true[37] -1.02 0.22 -1.02 -1.35 -0.67 4596.58 1.00
D_true[38] -0.93 0.55 -0.94 -1.78 -0.04 3146.40 1.00
D_true[39] -0.68 0.33 -0.68 -1.18 -0.13 4441.57 1.00
D_true[40] 0.24 0.56 0.24 -0.61 1.17 3139.58 1.00
D_true[41] 0.75 0.34 0.75 0.20 1.28 2849.56 1.00
D_true[42] 0.19 0.18 0.19 -0.10 0.46 3765.82 1.00
D_true[43] 0.79 0.42 0.80 0.13 1.46 2594.25 1.00
D_true[44] -0.40 0.51 -0.41 -1.22 0.40 3325.20 1.00
D_true[45] -0.39 0.25 -0.39 -0.78 0.00 3877.89 1.00
D_true[46] 0.15 0.30 0.16 -0.33 0.62 4323.28 1.00
D_true[47] 0.56 0.46 0.56 -0.22 1.24 3581.97 1.00
D_true[48] -0.64 0.27 -0.64 -1.06 -0.22 3759.21 1.00
D_true[49] 0.82 0.62 0.82 -0.14 1.79 3003.50 1.00
a -0.05 0.10 -0.06 -0.21 0.10 2791.80 1.00
bA -0.61 0.16 -0.62 -0.87 -0.37 2064.26 1.00
bM 0.04 0.17 0.04 -0.20 0.33 1884.70 1.00
sigma 0.60 0.11 0.59 0.42 0.76 810.11 1.00
Number of divergences: 0
###Markdown
Code 15.5
###Code
dlist = dict(
D_obs=d.Divorce.pipe(lambda x: (x - x.mean()) / x.std()).values,
D_sd=d["Divorce SE"].values / d.Divorce.std(),
M_obs=d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
M_sd=d["Marriage SE"].values / d.Marriage.std(),
A=d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std()).values,
N=d.shape[0],
)
def model(A, M_sd, M_obs, D_sd, D_obs, N):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
M_est = numpyro.sample("M_est", dist.Normal(0, 1).expand([N]))
numpyro.sample("M_obs", dist.Normal(M_est, M_sd), obs=M_obs)
mu = a + bA * A + bM * M_est
D_est = numpyro.sample("D_est", dist.Normal(mu, sigma))
numpyro.sample("D_obs", dist.Normal(D_est, D_sd), obs=D_obs)
m15_2 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_2.run(random.PRNGKey(0), **dlist)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post = m15_2.get_samples()
D_est = jnp.mean(post["D_est"], 0)
M_est = jnp.mean(post["M_est"], 0)
plt.plot(dlist["M_obs"], dlist["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dlist["M_obs"][i], M_est[i]], [dlist["D_obs"][i], D_est[i]], "k-", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.7
###Code
N = 500
A = dist.Normal().sample(random.PRNGKey(0), (N,))
M = dist.Normal(-A).sample(random.PRNGKey(1))
D = dist.Normal(A).sample(random.PRNGKey(2))
A_obs = dist.Normal(A).sample(random.PRNGKey(3))
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, expit(S)).sample(random.PRNGKey(1))
###Output
_____no_output_____
###Markdown
Code 15.9
###Code
D = dist.Bernoulli(0.5).sample(random.PRNGKey(2), (N,)) # dogs completely random
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.10
###Code
D = jnp.where(S > 0, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.11
###Code
with numpyro.handlers.seed(rng_seed=501):
N = 1000
X = numpyro.sample("X", dist.Normal().expand([N]))
S = numpyro.sample("S", dist.Normal().expand([N]))
H = numpyro.sample("H", dist.Binomial(10, logits=2 + S - 2 * X))
D = jnp.where(X > 1, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
dat_list = dict(H=H, S=S)
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_3 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_3.run(random.PRNGKey(0), **dat_list)
m15_3.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.32 0.03 1.32 1.28 1.36 1473.87 1.00
bS 0.62 0.03 0.62 0.58 0.66 1294.44 1.00
Number of divergences: 0
###Markdown
Code 15.13
###Code
dat_list0 = dict(H=H[D == 0], S=S[D == 0])
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_4 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_4.run(random.PRNGKey(0), **dat_list0)
m15_4.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.92 0.03 1.92 1.86 1.97 1027.57 1.01
bS 0.72 0.03 0.72 0.67 0.78 1039.30 1.00
Number of divergences: 0
###Markdown
Code 15.14
###Code
D = jnp.where(jnp.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = dist.Normal().sample(random.PRNGKey(0), (N,))
H = dist.Binomial(10, logits=S).sample(random.PRNGKey(1))
D = jnp.where(H < 5, 1, 0)
Hm = jnp.where(D == 1, jnp.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.16
###Code
milk = pd.read_csv("../data/milk.csv", sep=";")
d = milk
d["neocortex.prop"] = d["neocortex.perc"] / 100
d["logmass"] = d.mass.apply(math.log)
###Output
_____no_output_____
###Markdown
Code 15.17
###Code
dat_list = dict(
K=d["kcal.per.g"].pipe(lambda x: (x - x.mean()) / x.std()).values,
B=d["neocortex.prop"].pipe(lambda x: (x - x.mean()) / x.std()).values,
M=d.logmass.pipe(lambda x: (x - x.mean()) / x.std()).values,
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_5 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_5.run(random.PRNGKey(0), **dat_list)
###Output
_____no_output_____
###Markdown
Code 15.18
###Code
m15_5.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
B_impute[0] -0.59 0.93 -0.61 -2.06 0.86 1655.83 1.00
B_impute[1] -0.72 0.94 -0.73 -2.31 0.72 1612.19 1.00
B_impute[2] -0.74 0.93 -0.76 -2.18 0.83 1767.06 1.00
B_impute[3] -0.31 0.89 -0.33 -1.68 1.14 2108.70 1.00
B_impute[4] 0.45 0.90 0.44 -0.88 1.91 2078.89 1.00
B_impute[5] -0.18 0.93 -0.19 -1.67 1.21 2398.70 1.00
B_impute[6] 0.18 0.89 0.19 -1.24 1.60 2515.01 1.00
B_impute[7] 0.30 0.87 0.30 -1.03 1.69 2381.34 1.00
B_impute[8] 0.51 0.89 0.53 -0.94 1.82 2092.03 1.00
B_impute[9] -0.44 0.92 -0.44 -1.94 0.94 1808.37 1.00
B_impute[10] -0.26 0.89 -0.28 -1.62 1.19 2006.01 1.00
B_impute[11] 0.16 0.89 0.18 -1.32 1.53 2103.16 1.00
a 0.03 0.17 0.04 -0.26 0.27 2079.00 1.00
bB 0.50 0.24 0.50 0.09 0.85 687.56 1.00
bM -0.55 0.20 -0.55 -0.84 -0.20 871.45 1.00
nu -0.05 0.21 -0.05 -0.37 0.29 1669.64 1.00
sigma 0.84 0.14 0.83 0.62 1.04 930.30 1.00
sigma_B 1.01 0.17 0.99 0.76 1.27 1055.17 1.00
Number of divergences: 0
###Markdown
Code 15.19
###Code
obs_idx = d["neocortex.prop"].notnull().values
dat_list_obs = dict(
K=dat_list["K"][obs_idx], B=dat_list["B"][obs_idx], M=dat_list["M"][obs_idx]
)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_6 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_6.run(random.PRNGKey(0), **dat_list_obs)
m15_6.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
a 0.10 0.19 0.10 -0.19 0.43 2099.18 1.00
bB 0.61 0.28 0.62 0.15 1.02 1260.84 1.00
bM -0.64 0.25 -0.65 -1.05 -0.27 1227.34 1.00
nu -0.01 0.23 0.00 -0.35 0.37 2052.60 1.00
sigma 0.87 0.19 0.83 0.60 1.13 1141.14 1.00
sigma_B 1.05 0.19 1.03 0.74 1.31 1891.93 1.00
Number of divergences: 0
###Markdown
Code 15.20
###Code
az.plot_forest(
[az.from_numpyro(m15_5), az.from_numpyro(m15_6)],
model_names=["m15.5", "m15.6"],
var_names=["bB", "bM"],
combined=True,
hdi_prob=0.89,
)
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.21
###Code
post = m15_5.get_samples()
B_impute_mu = jnp.mean(post["B_impute"], 0)
B_impute_ci = jnp.percentile(post["B_impute"], q=(5.5, 94.5), axis=0)
# B vs K
plt.plot(dat_list["B"], dat_list["K"], "o")
plt.gca().set(xlabel="neocortex percent (std)", ylabel="kcal mild (std)")
miss_idx = pd.isna(dat_list["B"]).nonzero()[0]
Ki = dat_list["K"][miss_idx]
plt.plot(B_impute_mu, Ki, "ko", mfc="none")
for i in range(12):
plt.plot(B_impute_ci[:, i], jnp.repeat(Ki[i], 2), "k", lw=1)
plt.show()
# M vs B
plt.plot(dat_list["M"], dat_list["B"], "o")
plt.gca().set(xlabel="log body mass (std)", ylabel="neocortex percent (std)")
Mi = dat_list["M"][miss_idx]
plt.plot(Mi, B_impute_mu, "ko", mfc="none")
for i in range(12):
plt.plot(jnp.repeat(Mi[i], 2), B_impute_ci[:, i], "k", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.22
###Code
def model(B, M, K):
# priors
a = numpyro.sample("a", dist.Normal(0, 0.5))
muB = numpyro.sample("muB", dist.Normal(0, 0.5))
muM = numpyro.sample("muM", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
Rho_BM = numpyro.sample("Rho_BM", dist.LKJ(2, 2))
Sigma_BM = numpyro.sample("Sigma_BM", dist.Exponential(1).expand([2]))
# define B_merge as mix of observed and imputed values
B_impute = numpyro.sample(
"B_impute", dist.Normal(0, 1).expand([int(np.isnan(B).sum())]).mask(False)
)
B_merge = ops.index_update(B, np.nonzero(np.isnan(B))[0], B_impute)
# M and B correlation
MB = jnp.stack([M, B_merge], axis=1)
cov = jnp.outer(Sigma_BM, Sigma_BM) * Rho_BM
numpyro.sample("MB", dist.MultivariateNormal(jnp.stack([muM, muB]), cov), obs=MB)
# K as function of B and M
mu = a + bB * B_merge + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_7 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_7.run(random.PRNGKey(0), **dat_list)
post = m15_7.get_samples(group_by_chain=True)
print_summary({k: v for k, v in post.items() if k in ["bM", "bB", "Rho_BM"]})
###Output
mean std median 5.0% 95.0% n_eff r_hat
Rho_BM[0,0] 1.00 0.00 1.00 1.00 1.00 nan nan
Rho_BM[0,1] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,0] 0.60 0.14 0.62 0.39 0.80 1416.93 1.00
Rho_BM[1,1] 1.00 0.00 1.00 1.00 1.00 1718.78 1.00
bB 0.58 0.26 0.59 0.14 0.98 704.15 1.01
bM -0.64 0.23 -0.64 -1.02 -0.30 942.12 1.00
###Markdown
Code 15.23
###Code
B_missidx = pd.isna(dat_list["B"]).nonzero()[0]
###Output
_____no_output_____
###Markdown
Code 15.24
###Code
Moralizing_gods = pd.read_csv("../data/Moralizing_gods.csv", sep=";")
Moralizing_gods
###Output
_____no_output_____
###Markdown
Code 15.25
###Code
Moralizing_gods.moralizing_gods.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Code 15.26
###Code
symbol = Moralizing_gods.moralizing_gods.apply(lambda x: "." if x == 1 else "o")
symbol[Moralizing_gods.moralizing_gods.isna()] = "x"
color = Moralizing_gods.moralizing_gods.apply(lambda x: "k" if pd.isna(x) else "b")
for pch in ["o", ".", "x"]:
plt.scatter(
Moralizing_gods.year[symbol == pch],
Moralizing_gods.population[symbol == pch],
marker=pch,
color=color[symbol == pch],
facecolor="none" if pch == "o" else None,
lw=1.5,
alpha=0.7,
)
plt.gca().set(xlabel="Time (year)", ylabel="Population size")
plt.show()
###Output
_____no_output_____
###Markdown
Code 15.27
###Code
dmg = Moralizing_gods
dmg.astype(str).groupby(["moralizing_gods", "writing"]).size().unstack(fill_value=0)
###Output
_____no_output_____
###Markdown
Code 15.28
###Code
dmg = Moralizing_gods
haw = dmg.polity == "Big Island Hawaii"
dmg.loc[haw, ["year", "population", "writing", "moralizing_gods"]].T.round(3)
###Output
_____no_output_____
###Markdown
Code 15.29
###Code
with numpyro.handlers.seed(rng_seed=9):
N_houses = 100
alpha = 5
beta = -3
k = 0.5
r = 0.2
cat = numpyro.sample("cat", dist.Bernoulli(k).expand([N_houses]))
notes = numpyro.sample("notes", dist.Poisson(alpha + beta * cat))
R_C = numpyro.sample("R_C", dist.Bernoulli(r).expand([N_houses]))
cat_obs = jnp.where(R_C == 1, -9, cat)
###Output
_____no_output_____
###Markdown
Code 15.30
###Code
dat = dict(notes=notes, cat=cat_obs.copy(), RC=R_C.copy(), N=N_houses - 1)
def model(N, RC, cat, notes):
# priors
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
# cat NA:
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
# cat known present/absent:
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
m15_8 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_8.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.31
###Code
def model(N, RC, cat, notes, link=False):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
custom_logprob = jnp.logaddexp(
jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes[RC == 1]),
jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes[RC == 1]),
)
numpyro.factor("notes|RC==1", custom_logprob)
lambda_ = jnp.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
if link:
lpC0 = numpyro.deterministic(
"lpC0", jnp.log(1 - k) + dist.Poisson(jnp.exp(a)).log_prob(notes)
)
lpC1 = numpyro.deterministic(
"lpC1", jnp.log(k) + dist.Poisson(jnp.exp(a + b)).log_prob(notes)
)
numpyro.deterministic("PrC1", jnp.exp(lpC1) / (jnp.exp(lpC1) + jnp.exp(lpC0)))
m15_9 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_9.run(random.PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.32
###Code
with numpyro.handlers.seed(rng_seed=100):
x = numpyro.sample("x", dist.Normal().expand([10]))
y = numpyro.sample("y", dist.Normal(x))
x = jnp.concatenate([x, jnp.array([jnp.nan])])
y = jnp.concatenate([y, jnp.array([100])])
d = dict(x=x, y=y)
###Output
_____no_output_____
###Markdown
Code 15.33
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
cc = d.dropna(subset=["brain", "body"]).index
B = d.brain[cc]
M = d.body[cc]
B = B.values / max(B)
M = M.values / max(M)
###Output
_____no_output_____
###Markdown
Code 15.34
###Code
Bse = B * 0.1
Mse = M * 0.1
###Output
_____no_output_____
###Markdown
Code 15.35
###Code
dat_list = dict(B=B, M=M)
def model(M, B):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = a + b * jnp.log(M)
numpyro.sample("B", dist.LogNormal(mu, sigma), obs=B)
m15H4 = MCMC(NUTS(model), 500, 500)
m15H4.run(random.PRNGKey(0), **dat_list)
###Output
sample: 100%|██████████| 1000/1000 [00:06<00:00, 158.50it/s, 7 steps of size 2.37e-01. acc. prob=0.95]
###Markdown
Code 15.36
###Code
start = dict(M_true=dat_list["M"], B_true=dat_list["B"])
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Code 15.37
###Code
Primates301 = pd.read_csv("../data/Primates301.csv", sep=";")
d = Primates301
d.isna().sum()
###Output
_____no_output_____
###Markdown
Code 15.38
###Code
cc = d.dropna(subset=["body"]).index
M = d.body[cc]
M = M.values / max(M)
B = d.brain[cc]
B = B.values / B.max(skipna=True)
###Output
_____no_output_____
###Markdown
Code 15.39
###Code
start = dict(B_impute=jnp.repeat(0.5, 56))
init_strategy = init_to_value(values=start)
###Output
_____no_output_____
###Markdown
Code 15.6
###Code
post = m15_2.get_samples()
D_est = np.mean(post["D_est"], 0)
M_est = np.mean(post["M_est"], 0)
plt.plot(dlist["M_obs"], dlist["D_obs"], "bo", alpha=0.5)
plt.gca().set(xlabel="marriage rate (std)", ylabel="divorce rate (std)")
plt.plot(M_est, D_est, "ko", mfc="none")
for i in range(d.shape[0]):
plt.plot([dlist["M_obs"][i], M_est[i]], [dlist["D_obs"][i], D_est[i]],
"k-", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.7
###Code
N = 500
A = dist.Normal().sample(PRNGKey(0), (N,))
M = dist.Normal(-A).sample(PRNGKey(1))
D = dist.Normal(A).sample(PRNGKey(2))
A_obs = dist.Normal(A).sample(PRNGKey(3))
###Output
_____no_output_____
###Markdown
Code 15.8
###Code
N = 100
S = dist.Normal().sample(PRNGKey(0), (N,))
H = dist.Binomial(10, expit(S)).sample(PRNGKey(1))
###Output
_____no_output_____
###Markdown
Code 15.9
###Code
D = dist.Bernoulli(0.5).sample(PRNGKey(2)) # dogs completely random
Hm = np.where(D == 1, np.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.10
###Code
D = np.where(S > 0, 1, 0)
Hm = np.where(D == 1, np.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.11
###Code
with numpyro.handlers.seed(rng_seed=501):
N = 1000
X = numpyro.sample("X", dist.Normal(), sample_shape=(N,))
S = numpyro.sample("S", dist.Normal(), sample_shape=(N,))
H = numpyro.sample("H", dist.Binomial(10, logits=2 + S - 2 * X))
D = np.where(X > 1, 1, 0)
Hm = np.where(D == 1, np.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.12
###Code
dat_list = dict(H=H, S=S)
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_3 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_3.run(PRNGKey(0), **dat_list)
m15_3.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.29 0.02 1.29 1.25 1.33 1528.55 1.00
bS 0.58 0.03 0.58 0.54 0.62 1271.14 1.00
Number of divergences: 0
###Markdown
Code 15.13
###Code
dat_list0 = dict(H=H[D == 0], S=S[D == 0])
def model(S, H):
a = numpyro.sample("a", dist.Normal(0, 1))
bS = numpyro.sample("bS", dist.Normal(0, 0.5))
logit_p = a + bS * S
numpyro.sample("H", dist.Binomial(10, logits=logit_p), obs=H)
m15_4 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_4.run(PRNGKey(0), **dat_list0)
m15_4.print_summary()
###Output
mean std median 5.0% 95.0% n_eff r_hat
a 1.86 0.03 1.86 1.80 1.91 997.61 1.00
bS 0.65 0.03 0.65 0.59 0.70 1034.76 1.00
Number of divergences: 0
###Markdown
Code 15.14
###Code
D = np.where(np.abs(X) < 1, 1, 0)
###Output
_____no_output_____
###Markdown
Code 15.15
###Code
N = 100
S = dist.Normal().sample(PRNGKey(0), (N,))
H = dist.Binomial(10, logits=S).sample(PRNGKey(1))
D = np.where(H < 5, 1, 0)
Hm = np.where(D == 1, np.nan, H)
###Output
_____no_output_____
###Markdown
Code 15.16
###Code
milk = pd.read_csv("../data/milk.csv", sep=";")
d = milk
d["neocortex.prop"] = d["neocortex.perc"] / 100
d["logmass"] = d.mass.pipe(onp.log)
###Output
_____no_output_____
###Markdown
Code 15.17
###Code
dat_list = dict(
K=d["kcal.per.g"].pipe(lambda x: (x - x.mean()) / x.std()).values,
B=d["neocortex.prop"].pipe(lambda x: (x - x.mean()) / x.std()).values,
M=d.logmass.pipe(lambda x: (x - x.mean()) / x.std()).values)
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
B_impute = numpyro.param("B_impute", np.zeros(onp.isnan(B).sum()))
B = ops.index_update(B, onp.nonzero(onp.isnan(B))[0], B_impute)
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_3 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_3.run(PRNGKey(0), **dat_list)
###Output
_____no_output_____
###Markdown
Code 15.18
###Code
m15_3.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
B_impute[0] -0.54 0.94 -0.54 -1.99 0.98 2001.19 1.00
B_impute[1] -0.67 0.93 -0.68 -2.14 0.69 1698.09 1.00
B_impute[2] -0.71 0.93 -0.72 -2.13 0.80 1911.13 1.00
B_impute[3] -0.28 0.89 -0.27 -1.64 1.18 1900.20 1.00
B_impute[4] 0.46 0.95 0.48 -0.95 2.04 2230.10 1.00
B_impute[5] -0.20 0.87 -0.19 -1.59 1.16 2312.67 1.00
B_impute[6] 0.21 0.87 0.22 -1.05 1.64 2164.04 1.00
B_impute[7] 0.28 0.89 0.28 -1.03 1.73 1809.97 1.00
B_impute[8] 0.47 0.93 0.53 -1.04 1.89 1602.35 1.00
B_impute[9] -0.44 0.95 -0.46 -1.97 1.03 1723.26 1.00
B_impute[10] -0.29 0.87 -0.30 -1.60 1.10 2063.17 1.00
B_impute[11] 0.13 0.90 0.15 -1.36 1.44 1966.08 1.00
a 0.03 0.16 0.03 -0.22 0.30 1827.52 1.00
bB 0.49 0.23 0.50 0.12 0.84 779.06 1.01
bM -0.54 0.20 -0.54 -0.88 -0.24 1005.91 1.00
nu -0.05 0.21 -0.05 -0.39 0.30 1640.34 1.00
sigma 0.84 0.14 0.83 0.63 1.06 1041.27 1.00
sigma_B 1.01 0.17 0.99 0.75 1.26 1286.48 1.00
Number of divergences: 0
###Markdown
Code 15.19
###Code
obs_idx = ~onp.isnan(d["neocortex.prop"].values)
dat_list_obs = dict(K=dat_list["K"][obs_idx],
B=dat_list["B"][obs_idx],
M=dat_list["M"][obs_idx])
def model(B, M, K):
a = numpyro.sample("a", dist.Normal(0, 0.5))
nu = numpyro.sample("nu", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma_B = numpyro.sample("sigma_B", dist.Exponential(1))
sigma = numpyro.sample("sigma", dist.Exponential(1))
numpyro.sample("B", dist.Normal(nu, sigma_B), obs=B)
mu = a + bB * B + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_4 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_4.run(PRNGKey(0), **dat_list_obs)
m15_4.print_summary(0.89)
###Output
mean std median 5.5% 94.5% n_eff r_hat
a 0.10 0.20 0.10 -0.21 0.40 2319.53 1.00
bB 0.59 0.28 0.61 0.19 1.05 1127.87 1.00
bM -0.63 0.25 -0.64 -1.03 -0.25 1125.74 1.00
nu 0.00 0.23 -0.00 -0.38 0.37 2095.51 1.00
sigma 0.88 0.18 0.85 0.59 1.13 1575.27 1.00
sigma_B 1.04 0.19 1.01 0.75 1.30 1983.85 1.00
Number of divergences: 0
###Markdown
Code 15.20
###Code
az.plot_forest([az.from_numpyro(m15_3), az.from_numpyro(m15_4)],
model_names=["m15.3", "m15.4"], var_names=["bB", "bM"],
combined=True, credible_interval=0.89);
###Output
_____no_output_____
###Markdown
Code 15.21
###Code
post = m15_3.get_samples()
B_impute_mu = np.mean(post["B_impute"], 0)
B_impute_ci = np.percentile(post["B_impute"], q=(5.5, 94.5), axis=0)
# B vs K
plt.plot(dat_list["B"], dat_list["K"], "o")
plt.gca().set(xlabel="neocortex percent (std)", ylabel="kcal milk (std)")
miss_idx = onp.nonzero(onp.isnan(dat_list["B"]))[0]
Ki = dat_list["K"][miss_idx]
plt.plot(B_impute_mu, Ki, "ko", mfc="none")
for i in range(12):
plt.plot(B_impute_ci[:, i], np.repeat(Ki[i], 2), "k", lw=1)
plt.show()
# M vs B
plt.plot(dat_list["M"], dat_list["B"], "o")
plt.gca().set(xlabel="log body mass (std)", ylabel="neocortex percent (std)")
Mi = dat_list["M"][miss_idx]
plt.plot(Mi, B_impute_mu, "ko", mfc="none")
for i in range(12):
plt.plot(np.repeat(Mi[i], 2), B_impute_ci[:, i], "k", lw=1)
###Output
_____no_output_____
###Markdown
Code 15.22
###Code
def model(B, M, K):
# priors
a = numpyro.sample("a", dist.Normal(0, 0.5))
muB = numpyro.sample("muB", dist.Normal(0, 0.5))
muM = numpyro.sample("muM", dist.Normal(0, 0.5))
bB = numpyro.sample("bB", dist.Normal(0, 0.5))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
Rho_BM = numpyro.sample("Rho_BM", dist.LKJ(2, 2))
Sigma_BM = numpyro.sample("Sigma_BM", dist.Exponential(1), sample_shape=(2,))
# define B_merge as mix of observed and imputed values
B_impute = numpyro.param("B_impute", np.zeros(onp.isnan(B).sum()))
B_merge = ops.index_update(B, onp.nonzero(onp.isnan(B))[0], B_impute)
# M and B correlation
MB = np.stack([M, B_merge], axis=1)
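# covariance of (M, B): outer(Sigma_BM, Sigma_BM) * Rho_BM is elementwise
# sigma_i * sigma_j * rho_ij, i.e. diag(Sigma) @ Rho @ diag(Sigma)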
cov = np.outer(Sigma_BM, Sigma_BM) * Rho_BM
numpyro.sample("MB", dist.MultivariateNormal(np.stack([muM, muB]), cov),
obs=MB)
# K as function of B and M
mu = a + bB * B_merge + bM * M
numpyro.sample("K", dist.Normal(mu, sigma), obs=K)
m15_5 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_5.run(PRNGKey(0), **dat_list)
post = m15_5.get_samples(group_by_chain=True)
print_summary({k: v for k, v in post.items() if k in ["bM", "bB", "Rho_BM"]})
###Output
mean std median 5.0% 95.0% n_eff r_hat
Rho_BM[0,0] 1.00 0.00 1.00 1.00 1.00 nan nan
Rho_BM[0,1] 0.60 0.13 0.62 0.41 0.82 1500.42 1.00
Rho_BM[1,0] 0.60 0.13 0.62 0.41 0.82 1500.42 1.00
Rho_BM[1,1] 1.00 0.00 1.00 1.00 1.00 1440.31 1.00
bB 0.59 0.26 0.60 0.18 1.02 1083.74 1.00
bM -0.65 0.22 -0.66 -1.03 -0.32 1202.18 1.00
###Markdown
Code 15.23
###Code
B_missidx = onp.nonzero(onp.isnan(dat_list["B"]))[0]
###Output
_____no_output_____
###Markdown
Code 15.24
###Code
Moralizing_gods = pd.read_csv("../data/Moralizing_gods.csv", sep=";")
Moralizing_gods
###Output
_____no_output_____
###Markdown
Code 15.25
###Code
Moralizing_gods.moralizing_gods.value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Code 15.26
###Code
symbol = onp.where(Moralizing_gods.moralizing_gods.values == 1, "b", "w")
isnan = onp.isnan(Moralizing_gods.moralizing_gods.values)
symbol = onp.where(isnan, "k", symbol)
plt.scatter(Moralizing_gods.year[isnan], Moralizing_gods.population[isnan],
marker="x", color="k", facecolor=symbol[isnan], lw=1)
plt.scatter(Moralizing_gods.year[~isnan], Moralizing_gods.population[~isnan],
marker="o", color="b", facecolor=symbol[~isnan], alpha=0.8)
plt.gca().set(xlabel="Time (year)", ylabel="Population size");
###Output
_____no_output_____
###Markdown
Code 15.27
###Code
dmg = Moralizing_gods
dmg.astype(str).groupby(["moralizing_gods", "writing"]).size().unstack(
fill_value=0)
###Output
_____no_output_____
###Markdown
Code 15.28
###Code
dmg = Moralizing_gods
haw = dmg.polity == "Big Island Hawaii"
dmg.loc[haw, ["year", "population", "writing", "moralizing_gods"]].T.round(3)
###Output
_____no_output_____
###Markdown
Code 15.29
###Code
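# the seed handler supplies a PRNG key so numpyro.sample can draw values here,
# outside of an inference run (a plain forward simulation of the generative model)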
with numpyro.handlers.seed(rng_seed=9):
N_houses = 100
alpha = 5
beta = -3
k = 0.5
r = 0.2
cat = numpyro.sample("cat", dist.Bernoulli(k), sample_shape=(N_houses,))
notes = numpyro.sample("notes", dist.Poisson(alpha + beta * cat))
R_C = numpyro.sample("R_C", dist.Bernoulli(r), sample_shape=(N_houses,))
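# wherever the missingness indicator R_C is 1 the cat state is unrecorded,
# so store the placeholder value -9 (standing in for NA) in cat_obs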
cat_obs = np.where(R_C == 1, -9, cat)
###Output
_____no_output_____
###Markdown
Code 15.30
###Code
dat = dict(notes=notes, cat=cat_obs, RC=R_C, N=N_houses - 1)
def model(N, RC, cat, notes):
# priors
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
# cat NA:
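# the cat state is unknown for these houses, so marginalize it out:
# log p(notes) = logaddexp( log(k)   + Poisson(exp(a + b)).log_prob(notes),
#                           log(1-k) + Poisson(exp(a)).log_prob(notes) )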
custom_logprob = np.logaddexp(
np.log(k) + dist.Poisson(np.exp(a + b)).log_prob(notes[RC == 1]),
np.log(1 - k) + dist.Poisson(np.exp(a)).log_prob(notes[RC == 1]))
numpyro.sample("notes|RC==1", dist.Delta(log_density=custom_logprob), obs=0.)
# cat known present/absent:
lambda_ = np.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
m15_6 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_6.run(PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.31
###Code
def model(N, RC, cat, notes, link=False):
a = numpyro.sample("a", dist.Normal(0, 1))
b = numpyro.sample("b", dist.Normal(0, 0.5))
# sneaking cat model
k = numpyro.sample("k", dist.Beta(2, 2))
numpyro.sample("cat|RC==0", dist.Bernoulli(k), obs=cat[RC == 0])
# singing bird model
custom_logprob = np.logaddexp(
np.log(k) + dist.Poisson(np.exp(a + b)).log_prob(notes[RC == 1]),
np.log(1 - k) + dist.Poisson(np.exp(a)).log_prob(notes[RC == 1]))
numpyro.sample("notes|RC==1", dist.Delta(log_density=custom_logprob), obs=0.)
lambda_ = np.exp(a + b * cat[RC == 0])
numpyro.sample("notes|RC==0", dist.Poisson(lambda_), obs=notes[RC == 0])
if link:
lpC0 = np.log(1 - k) + dist.Poisson(np.exp(a)).log_prob(notes)
lpC1 = np.log(k) + dist.Poisson(np.exp(a + b)).log_prob(notes)
PrC1 = np.exp(lpC1) / (np.exp(lpC1) + np.exp(lpC0))
numpyro.sample("lpC0", dist.Delta(lpC0), obs=lpC0)
numpyro.sample("lpC0", dist.Delta(lpC1), obs=lpC1)
numpyro.sample("lpC0", dist.Delta(PrC1), obs=PrC1)
m15_7 = MCMC(NUTS(model), 500, 500, num_chains=4)
m15_7.run(PRNGKey(0), **dat)
###Output
_____no_output_____
###Markdown
Code 15.32
###Code
with numpyro.handlers.seed(rng_seed=100):
x = numpyro.sample("x", dist.Normal(), sample_shape=(10,))
y = numpyro.sample("y", dist.Normal(x))
x = np.concatenate([x, np.array([np.nan])])
y = np.concatenate([y, np.array([100])])
d = dict(x=x, y=y)
###Output
_____no_output_____ |
Twitter/Twitter_Get_tweets_stats_from_profile.ipynb | ###Markdown
Twitter - Get tweets stats from profile **Tags:** twitter tweets scrap snippet **Author:** [Tannia Dubon](https://www.linkedin.com/in/tanniadubon/) Input Import library
###Code
import os
import re
import pandas as pd
#install developer snscrape package via command line
os.system("pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git")
###Output
_____no_output_____
###Markdown
Variables
###Code
#criteria for searching by username
username = "JupyterNaas"
tweet_count = 500
###Output
_____no_output_____
###Markdown
Model Scrape and save results in JSON
###Code
#search by username using command line
os.system("snscrape --jsonl --max-results {} twitter-search from:{} > user-tweets.json".format(tweet_count, username))
###Output
_____no_output_____
###Markdown
Read JSON
###Code
# Reads the json generated from the CLI command above and creates a pandas dataframe
df = pd.read_json('user-tweets.json', lines=True, convert_dates=True, keep_default_dates=True)
df
###Output
_____no_output_____
###Markdown
Clean dataframe to keep only the necessary columns: URL, TITLE, CONTENT, HASHTAGS, DATE, LIKES, RETWEETS
###Code
#copy dataframe
df1 = df.copy()
#keep only the columns needed
df1 = df1[['url','content','hashtags','date','likeCount','retweetCount']]
#convert columns to upper case to follow naas df convention
df1.columns = df1.columns.str.upper()
#convert time to ISO format to follow naas date convention
df1.DATE = pd.to_datetime(df1.DATE).dt.strftime("%Y-%m-%d")
#clean HASHTAGS column to provide searchable items in columns
df1.HASHTAGS = df1.HASHTAGS.fillna("[]")
df1.HASHTAGS = df1.apply(lambda row: ", ".join(list(row.HASHTAGS)) if row.HASHTAGS != '[]' else "", axis=1)
#display results
df1
###Output
_____no_output_____
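###Markdown
Quick engagement stats (optional) A minimal sketch of the kind of profile-level stats the cleaned dataframe supports; it only uses the LIKECOUNT and RETWEETCOUNT columns produced in the cell above.
###Code
# overall engagement summary for the scraped tweets
stats = df1[["LIKECOUNT", "RETWEETCOUNT"]].agg(["count", "sum", "mean", "max"])
stats
###Output
_____no_output_____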
###Markdown
Output Save dataframe to csv
###Code
df1.to_csv("tweets_from_URL.csv", index=False)
###Output
_____no_output_____
###Markdown
Twitter - Get tweets stats from profile **Tags:** twitter tweets scrap Input Import library
###Code
import os
import re
import pandas as pd
#install developer snscrape package via command line
os.system("pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git")
###Output
_____no_output_____
###Markdown
Variables
###Code
#criteria for searching by username
username = "JupyterNaas"
tweet_count = 500
###Output
_____no_output_____
###Markdown
Model Scrape and save results in JSON
###Code
#search by username using command line
os.system("snscrape --jsonl --max-results {} twitter-search from:{} > user-tweets.json".format(tweet_count, username))
###Output
_____no_output_____
###Markdown
Read JSON
###Code
# Reads the json generated from the CLI command above and creates a pandas dataframe
df = pd.read_json('user-tweets.json', lines=True, convert_dates=True, keep_default_dates=True)
df
###Output
_____no_output_____
###Markdown
Clean dataframe to keep only the necessary columns: URL, TITLE, CONTENT, HASHTAGS, DATE, LIKES, RETWEETS
###Code
#copy dataframe
df1 = df.copy()
#keep only the columns needed
df1 = df1[['url','content','hashtags','date','likeCount','retweetCount']]
#convert columns to upper case to follow naas df convention
df1.columns = df1.columns.str.upper()
#convert time to ISO format to follow naas date convention
df1.DATE = pd.to_datetime(df1.DATE).dt.strftime("%Y-%m-%d")
#clean HASHTAGS column to provide searchable items in columns
df1.HASHTAGS = df1.HASHTAGS.fillna("[]")
df1.HASHTAGS = df1.apply(lambda row: ", ".join(list(row.HASHTAGS)) if row.HASHTAGS != '[]' else "", axis=1)
#display results
df1
###Output
_____no_output_____
###Markdown
Output Save dataframe to csv
###Code
df1.to_csv("tweets_from_URL.csv", index=False)
###Output
_____no_output_____
###Markdown
Twitter - Get tweets stats from profile **Tags:** twitter tweets scrap snippet content dataframe **Author:** [Tannia Dubon](https://www.linkedin.com/in/tanniadubon/) Input Import library
###Code
import os
import re
import pandas as pd
#install developer snscrape package via command line
os.system("pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git")
###Output
_____no_output_____
###Markdown
Variables
###Code
#criteria for searching by username
username = "JupyterNaas"
tweet_count = 500
###Output
_____no_output_____
###Markdown
Model Scrape and save results in JSON
###Code
#search by username using command line
os.system("snscrape --jsonl --max-results {} twitter-search from:{} > user-tweets.json".format(tweet_count, username))
###Output
_____no_output_____
###Markdown
Read JSON
###Code
# Reads the json generated from the CLI command above and creates a pandas dataframe
df = pd.read_json('user-tweets.json', lines=True, convert_dates=True, keep_default_dates=True)
df
###Output
_____no_output_____
###Markdown
Clean dataframe to keep only the necessary columns: URL, TITLE, CONTENT, HASHTAGS, DATE, LIKES, RETWEETS
###Code
#copy dataframe
df1 = df.copy()
#keep only the columns needed
df1 = df1[['url','content','hashtags','date','likeCount','retweetCount']]
#convert columns to upper case to follow naas df convention
df1.columns = df1.columns.str.upper()
#convert time to ISO format to follow naas date convention
df1.DATE = pd.to_datetime(df1.DATE).dt.strftime("%Y-%m-%d")
#clean HASHTAGS column to provide searchable items in columns
df1.HASHTAGS = df1.HASHTAGS.fillna("[]")
df1.HASHTAGS = df1.apply(lambda row: ", ".join(list(row.HASHTAGS)) if row.HASHTAGS != '[]' else "", axis=1)
#display results
df1
###Output
_____no_output_____
###Markdown
Output Save dataframe to csv
###Code
df1.to_csv("tweets_from_URL.csv", index=False)
###Output
_____no_output_____
###Markdown
Twitter - Get tweets stats from profile **Tags:** twitter tweets scrap snippet content dataframe **Author:** [Tannia Dubon](https://www.linkedin.com/in/tanniadubon/) Input Import library
###Code
import os
import re
import pandas as pd
#install developer snscrape package via command line
os.system("pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git")
###Output
_____no_output_____
###Markdown
Variables
###Code
#criteria for searching by username
username = "JupyterNaas"
tweet_count = 500
###Output
_____no_output_____
###Markdown
Model Scrape and save results in JSON
###Code
#search by username using command line
os.system("snscrape --jsonl --max-results {} twitter-search from:{} > user-tweets.json".format(tweet_count, username))
###Output
_____no_output_____
###Markdown
Read JSON
###Code
# Reads the json generated from the CLI command above and creates a pandas dataframe
df = pd.read_json('user-tweets.json', lines=True, convert_dates=True, keep_default_dates=True)
df
###Output
_____no_output_____
###Markdown
Clean dataframe to keep only the necessary columns: URL, TITLE, CONTENT, HASHTAGS, DATE, LIKES, RETWEETS
###Code
#copy dataframe
df1 = df.copy()
#keep only the columns needed
df1 = df1[['url','content','hashtags','date','likeCount','retweetCount']]
#convert columns to upper case to follow naas df convention
df1.columns = df1.columns.str.upper()
#convert time to ISO format to follow naas date convention
df1.DATE = pd.to_datetime(df1.DATE).dt.strftime("%Y-%m-%d")
#clean HASHTAGS column to provide searchable items in columns
df1.HASHTAGS = df1.HASHTAGS.fillna("[]")
df1.HASHTAGS = df1.apply(lambda row: ", ".join(list(row.HASHTAGS)) if row.HASHTAGS != '[]' else "", axis=1)
#display results
df1
###Output
_____no_output_____
###Markdown
Output Save dataframe to csv
###Code
df1.to_csv("tweets_from_URL.csv", index=False)
###Output
_____no_output_____
###Markdown
Twitter - Get tweets stats from profile **Tags:** twitter tweets scrap Input Import library
###Code
import os
import re
import pandas as pd
#install developer snscrape package via command line
os.system("pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git")
###Output
_____no_output_____
###Markdown
Variables
###Code
#criteria for searching by username
username = "JupyterNaas"
tweet_count = 500
###Output
_____no_output_____
###Markdown
Model Scrape and save results in JSON
###Code
#search by username using command line
os.system("snscrape --jsonl --max-results {} twitter-search from:{} > user-tweets.json".format(tweet_count, username))
###Output
_____no_output_____
###Markdown
Read JSON
###Code
# Reads the json generated from the CLI command above and creates a pandas dataframe
df = pd.read_json('user-tweets.json', lines=True, convert_dates=True, keep_default_dates=True)
df
###Output
_____no_output_____
###Markdown
Clean dataframe to keep only the necessary columns: URL, TITLE, CONTENT, HASHTAGS, DATE, LIKES, RETWEETS
###Code
#copy dataframe
df1 = df.copy()
#keep only the columns needed
df1 = df1[['url','content','hashtags','date','likeCount','retweetCount']]
#convert columns to upper case to follow naas df convention
df1.columns = df1.columns.str.upper()
#convert time to ISO format to follow naas date convention
df1.DATE = pd.to_datetime(df1.DATE).dt.strftime("%Y-%m-%d")
#clean HASHTAGS column to provide searchable items in columns
df1.HASHTAGS = df1.HASHTAGS.fillna("[]")
df1.HASHTAGS = df1.apply(lambda row: ", ".join(list(row.HASHTAGS)) if row.HASHTAGS != '[]' else "", axis=1)
#display results
df1
###Output
_____no_output_____
###Markdown
Output Save dataframe to csv
###Code
df1.to_csv("tweets_from_URL.csv", index=False)
###Output
_____no_output_____ |
Missions_to_Mars/.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb | ###Markdown
Mission to Mars
###Code
# Dependencies
from bs4 import BeautifulSoup
from splinter import Browser
import pandas as pd
import requests
import time
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
# Grab News URL and visit the page
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
# Start BeautifulSoup and print the scoop
nasa_news = BeautifulSoup(browser.html, 'html.parser')
print(nasa_news)
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
# URL of page to be scraped
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
# # Retrieve page with the requests module
response = requests.get(url)
# Create BeautifulSoup object; parse with 'lxml'
soup = BeautifulSoup(response.text, 'lxml')
# Extract title text
news_title = soup.title.text
print(f"The latest news from August 24th, 2020 from NASA Mars News is : {news_title}")
# Print first paragraph texts
news_p = soup.find_all('p')[1].text
print(f"The first paragraph from the news is: {news_p}")
# Print title and paragraph
print(news_title)
print(news_p)
###Output
NASA Engineers Checking InSight's Weather Sensors – NASA’s Mars Exploration Program
An electronics issue is suspected to be preventing the sensors from sharing their data about Mars weather with the spacecraft.
###Markdown
JPL Mars Space Images - Featured Image
###Code
# #executable path to driver (for Mac use)
# executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
# browser = Browser('chrome', **executable_path, headless=False)
# URL of page to be scraped
image_url_featured= 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(image_url_featured)
# HTML Object
html_image = browser.html
# Parse HTML with Beautiful Soup
soup = BeautifulSoup(html_image, 'html.parser')
# Retrieve background-image url from style tag
featured_image_url = soup.find('article')['style'].replace('background-image: url(','').replace(');', '')[1:-1]
# Website Url
main_url = 'https://www.jpl.nasa.gov'
# Concatenate website url with scrapped route
featured_image_url = main_url + featured_image_url
# Display full link to featured image
featured_image_url
###Output
_____no_output_____
###Markdown
Mars Facts Visit the Mars Facts webpage [here](https://space-facts.com/mars/) and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.
###Code
# URL of page to be scraped
space_facts_url= 'https://space-facts.com/mars/'
browser.visit(space_facts_url)
# Parse HTML with Beautiful Soup
space_facts_url = browser.html
soup = BeautifulSoup(space_facts_url, 'html.parser')
###Output
_____no_output_____
###Markdown
* Use Pandas to convert the data to a HTML table string.
###Code
tables = pd.read_html(space_facts_url)
tables
#we need only the first table
space_facts_df = tables[0]
space_facts_df.rows = ['Equatorial Diameter', 'Polar Diameter', 'Mass', 'Moons',
'Orbit Distance', 'Orbit Period', 'Surface Temperature:', 'First Record',
'Recorded By']
space_facts_df
facts_df = space_facts_df.rename(columns={
0: "Description",
1: "Value"})
facts_df
#remove the index
import pandas as pd
facts_df.set_index("Description", inplace=True)
facts_df
###Output
_____no_output_____
###Markdown
Use Pandas to convert the data to a HTML table string.
###Code
html_table = facts_df.to_html()
html_table
#save the table we just scraped in a html file
facts_df.to_html('table.html')
#oped the table to make sure it works
!open table.html
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
# URL of page to be scraped
hemisphere_url= 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(hemisphere_url)
# Parse HTML with Beautiful Soup
hemisphere_url = browser.html
soup = BeautifulSoup(hemisphere_url, 'html.parser')
# Retreive all items that contain mars hemispheres information
items = soup.find_all('div', class_='item')
# Create empty list for hemisphere urls
hemisphere_image_urls = []
# Store the main_ul
hemispheres_main_url = 'https://astrogeology.usgs.gov'
# Loop through the items previously stored
for i in items:
# Store title
title = i.find('h3').text
# Store link that leads to full image website
partial_img_url = i.find('a', class_='itemLink product-item')['href']
# Visit the link that contains the full image website
browser.visit(hemispheres_main_url + partial_img_url)
# HTML Object of individual hemisphere information website
partial_img_html = browser.html
# Parse HTML with Beautiful Soup for every individual hemisphere information website
soup = BeautifulSoup( partial_img_html, 'html.parser')
# Retrieve full image source
img_url = hemispheres_main_url + soup.find('img', class_='wide-image')['src']
# Append the retreived information into a list of dictionaries
hemisphere_image_urls.append({"title" : title, "img_url" : img_url})
# Display hemisphere_image_urls
hemisphere_image_urls
###Output
_____no_output_____
###Markdown
Mac Users
###Code
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
!which chromedriver
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
###Output
_____no_output_____
###Markdown
Windows Users
###Code
# executable_path = {'executable_path': 'chromedriver.exe'}
# browser = Browser('chrome', **executable_path, headless=False)
url = 'http://books.toscrape.com/'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
sidebar = soup.find('ul', class_='nav-list')
categories = sidebar.find_all('li')
category_list = []
url_list = []
book_url_list = []
for category in categories:
title = category.text.strip()
category_list.append(title)
book_url = category.find('a')['href']
url_list.append(book_url)
book_url_list = ['http://books.toscrape.com/' + url for url in url_list]
titles_and_urls = zip(category_list, book_url_list)
from splinter.exceptions import ElementDoesNotExist  # exception class used in the except clause below
try:
for title_url in titles_and_urls:
browser.click_link_by_partial_text('next')
except ElementDoesNotExist:
print("Scraping Complete")
book_url_list
###Output
_____no_output_____
###Markdown
Mac Users
###Code
executable_path = {"executable_path": "/usr/local/bin/chromedriver"}
browser=Browser("chrome", **executable_path, headless=False)
###Output
_____no_output_____
###Markdown
Windows Users
###Code
# executable_path = {'executable_path': 'chromedriver.exe'}
# browser = Browser('chrome', **executable_path, headless=False)
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
# Visit Nasa news url through splinter module
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
# HTML Object
html = browser.html
# Parse HTML with Beautiful Soup
soup = BeautifulSoup(html, 'html.parser')
# Retrieve the latest element that contains news title and news_paragraph
news_title = soup.find("div", class_='list_text').find("div", class_='content_title').text
news_p = soup.find('div', class_='article_teaser_body').text
# Display scrapped data
print(news_title)
print(news_p)
###Output
NASA, ULA Launch Mars 2020 Perseverance Rover Mission to Red Planet
The agency's Mars 2020 mission is on its way. It will land at Jezero Crater in about seven months, on Feb. 18, 2021.
###Markdown
JPL Mars Space Images - Featured Image
###Code
# Visit Mars Space Images through splinter module
image_url_featured = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(image_url_featured)
# HTML Object
html_image = browser.html
# Parse HTML with Beautiful Soup
soup = BeautifulSoup(html_image, 'html.parser')
# Retrieve background-image url from style tag
featured_image_url = soup.find('article')['style'].replace('background-image: url(','').replace(');', '')[1:-1]
# Website Url
main_url = 'https://www.jpl.nasa.gov'
# Concatenate website url with scrapped route
featured_image_url = main_url + featured_image_url
# Display full link to featured image
featured_image_url
###Output
_____no_output_____
###Markdown
Mars Weather
###Code
url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(url)
time.sleep(1)
# Parse HTML with Beautiful Soup
html = browser.html
soup = BeautifulSoup(html, "html.parser")
mars_weather = soup.find_all(class_='css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0')[30].text
print(mars_weather)
###Output
InSight sol 596 (2020-07-30) low -90.7ºC (-131.3ºF) high -15.4ºC (4.3ºF)
winds from the WNW at 7.7 m/s (17.3 mph) gusting to 19.9 m/s (44.6 mph)
pressure at 7.90 hPa
###Markdown
Mars Facts
###Code
# Visit Mars facts url
facts_url = 'http://space-facts.com/mars/'
# Use Panda's `read_html` to parse the url
mars_facts = pd.read_html(facts_url)
# Find the mars facts DataFrame in the list of DataFrames as assign it to `mars_df`
mars_df = mars_facts[0]
# Assign the columns `['Description', 'Value']`
mars_df.columns = ['Description','Value']
# Set the index to the `Description` column without row indexing
mars_df.set_index('Description', inplace=True)
# Save html code to folder Assets
mars_df.to_html()
data = mars_df.to_dict(orient='records') # Here's our added param..
mars_df.html = mars_df.to_html()
# Display mars_df
mars_df
mars_df.html
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
# Visit hemispheres website through splinter module
hemispheres_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(hemispheres_url)
# HTML Object
html_hemispheres = browser.html
# Parse HTML with Beautiful Soup
soup = BeautifulSoup(html_hemispheres, 'html.parser')
# Retreive all items that contain mars hemispheres information
items = soup.find_all('div', class_='item')
# Create empty list for hemisphere urls
hemisphere_image_urls = []
# Store the main_ul
hemispheres_main_url = 'https://astrogeology.usgs.gov'
# Loop through the items previously stored
for i in items:
# Store title
title = i.find('h3').text
# Store link that leads to full image website
partial_img_url = i.find('a', class_='itemLink product-item')['href']
# Visit the link that contains the full image website
browser.visit(hemispheres_main_url + partial_img_url)
# HTML Object of individual hemisphere information website
partial_img_html = browser.html
# Parse HTML with Beautiful Soup for every individual hemisphere information website
soup = BeautifulSoup( partial_img_html, 'html.parser')
# Retrieve full image source
img_url = hemispheres_main_url + soup.find('img', class_='wide-image')['src']
# Append the retreived information into a list of dictionaries
hemisphere_image_urls.append({"title" : title, "img_url" : img_url})
# Display hemisphere_image_urls
hemisphere_image_urls
###Output
_____no_output_____
###Markdown
Scraping with Pandas
###Code
# !pip install splinter
# !pip install webdriver_manager
import pandas as pd
import os
from bs4 import BeautifulSoup as bs
from urllib.request import urlopen
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
url = 'https://mars.nasa.gov/news/'
html = urlopen(url)
soup = bs(html, 'lxml')
type(soup)
title = soup.find_all("div", {"class": "content_title"})[0].text
news_title = title.strip('\n')
news_title
news = soup.find_all("div", {"class": "rollover_description_inner"})[0].text
news_para = news.strip('\n')
news_para
def init():
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
return browser
###Output
_____no_output_____
###Markdown
Splinter to image: https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html
###Code
browser=init()
url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html'
browser.visit(url)
print(url)
html = browser.html
# Parse HTML with Beautiful Soup
soup = bs(html, 'html.parser')
# Retrieve all elements that contain book information
imgLinkString=soup.find_all("a",{"class": "showimg fancybox-thumbs"})#[0].href
for a in imgLinkString:
imgLink=a['href']
featured_image_url="https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/"+imgLink
featured_image_url
###Output
_____no_output_____
###Markdown
Pulling in table from: https://space-facts.com/mars/
###Code
url = 'https://space-facts.com/mars/'
# browser.visit(url)
print(url)
tables = pd.read_html(url)
tables
df = tables[0]
html_table = df.to_html()
html_table=html_table.replace('\n', '')
df.to_html('MarsTable.html')
html_table
# !open MarsTable.html
# df.head(10)
###Output
_____no_output_____
###Markdown
dictionary of urls from: https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars
###Code
browser=init()
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
image_urls=[]
for i in range(0,4):
browser=init()
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
browser.find_by_css("a.product-item h3")[i].click()
html = browser.html
soup = bs(html, "html.parser")
# print(soup)
title = soup.find('h2').text
# title
image_url = soup.find_all('a','target'=='_blank')[4]["href"]
# image_url
image_url = {
"title": title,
"img_url": image_url}
image_urls.append(image_url)
image_urls
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
# URL
news_url = 'https://redplanetscience.com/'
request = requests.get(news_url)
browser.visit(news_url)
# HTML parser
html = browser.html
soup = bs(html, "html.parser")
# Scrape title and text from most recent article
news_title = soup.find_all('div', class_='content_title')[0].text
news_text = soup.find_all('div', class_ ='article_teaser_body')[0].text
print(news_title)
print("----------------------------------------------------------------")
print(news_text)
###Output
Screening Soon: 'The Pathfinders' Trains Lens on Mars
----------------------------------------------------------------
With the Mars 2020 mission ramping up, the documentary — the first of four about past JPL missions to the Red Planet to be shown at Caltech — tells a gripping backstory.
###Markdown
JPL Mars Space Images - Featured Image
###Code
# URL
image_url = 'https://spaceimages-mars.com/'
request = requests.get(image_url)
browser.visit(image_url)
# HTML parser
html = browser.html
soup = bs(html, "html.parser")
# Scrape image url
image_relative_path = soup.find_all('img', class_='headerimage fade-in')[0]["src"]
featured_image_url = f"{image_url}{image_relative_path}"
print(featured_image_url)
###Output
https://spaceimages-mars.com/image/featured/mars1.jpg
###Markdown
Mars Facts
###Code
# URL
fact_url = 'https://galaxyfacts-mars.com/'
request = requests.get(fact_url)
browser.visit(fact_url)
html = browser.html
soup = bs(html, "html.parser")
# Scrape table data
mars_data= pd.read_html(fact_url)
mars_data
# Create a data frame
mars_df= mars_data[1]
mars_df= mars_df.rename(columns={0:"Element",1:"Value"})
mars_df.set_index("Element",inplace=True)
mars_df
# Convert to html string
mars_html = mars_df.to_html()
print(mars_html)
###Output
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Value</th>
</tr>
<tr>
<th>Element</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Equatorial Diameter:</th>
<td>6,792 km</td>
</tr>
<tr>
<th>Polar Diameter:</th>
<td>6,752 km</td>
</tr>
<tr>
<th>Mass:</th>
<td>6.39 × 10^23 kg (0.11 Earths)</td>
</tr>
<tr>
<th>Moons:</th>
<td>2 ( Phobos & Deimos )</td>
</tr>
<tr>
<th>Orbit Distance:</th>
<td>227,943,824 km (1.38 AU)</td>
</tr>
<tr>
<th>Orbit Period:</th>
<td>687 days (1.9 years)</td>
</tr>
<tr>
<th>Surface Temperature:</th>
<td>-87 to -5 °C</td>
</tr>
<tr>
<th>First Record:</th>
<td>2nd millennium BC</td>
</tr>
<tr>
<th>Recorded By:</th>
<td>Egyptian astronomers</td>
</tr>
</tbody>
</table>
###Markdown
Mars Hemispheres
###Code
# URL
hemis_url = 'https://marshemispheres.com/'
request = requests.get(hemis_url)
browser.visit(hemis_url)
html = browser.html
soup = bs(html, "html.parser")
# Scrape
all_hemis = soup.find_all('div',class_='item')
# Iterate through each hemisphere and scrape data
for hemi in all_hemis:
# Title
hemisphere= hemi.find('div', class_='description')
title = hemisphere.h3.text
# Go to each hemisphere link
hemi_link = hemisphere.a['href']
img_link = (hemis_url + hemi_link)
request = requests.get(img_link)
browser.visit(img_link)
html = browser.html
soup = bs(html, "html.parser")
image_full_link = soup.find('li').a['href']
image_https = f"{hemis_url}{image_full_link}"
# Create dictionary
img_dict = {
'title': title,
'img_url': image_https
}
print(title)
print('---------------------------------')
print(image_https)
print('---------------------------------')
# Dictionary with all of the info
mars_dictionary = {
"news_title": news_title,
"news_text": news_text,
"featured_image_url": featured_image_url,
"mars_html": str(mars_html),
"hemisphere_images": img_dict
}
mars_dictionary
browser.quit()
###Output
_____no_output_____
###Markdown
Mission to Mars
###Code
import numpy as np
import pandas as pd
from splinter import Browser
from bs4 import BeautifulSoup as bs
import time as tm
import requests
import re
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
import os
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://redplanetscience.com'
browser.visit(url)
tm.sleep(3)
html = browser.html
soup = bs(html, 'html.parser')
print(soup.prettify())
# results are returned as an iterable list
results = soup.find_all('div', class_="slide")
results
titles = soup.find_all('div', class_='content_title')
titles
latest_title = titles[0].text.strip()
print(f'''The Latest Title on NASA Mars News Site:
{latest_title}''')
para = soup.find_all('div', class_='article_teaser_body')
para
latest_para = para[0].text.strip()
print(f'''The Latest paragraph on NASA Mars News Site:
{latest_para}''')
browser.quit
###Output
_____no_output_____
###Markdown
JPL Mars Space Images
###Code
# url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
url2 = 'https://spaceimages-mars.com/'
browser.visit(url2)
tm.sleep(3)
html = browser.html
soup = bs(html, 'html.parser')
all_images = soup.find_all('img')
all_images
featured_url = soup.find("img", class_ = "headerimage fade-in")['src']
print(featured_url)
featured_image_url = url2 +'/'+ featured_url
featured_image_url
browser.quit
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
url3 = 'https://galaxyfacts-mars.com/'
browser.visit(url3)
tm.sleep(3)
html = browser.html
soup = bs(html, 'html.parser')
print(soup.prettify())
tables = pd.read_html(url3)
tables
comp_table = pd.DataFrame(tables[0])
comp_table
comp_table = comp_table.drop([0])
comp_table
comp_table.columns = ['','Mars', 'Earth']
comp_table = comp_table.set_index('')
comp_table
comp_table_html = comp_table.to_html()
print(comp_table_html)
###Output
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th></th>
<th>Mars</th>
<th>Earth</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>Diameter:</td>
<td>6,779 km</td>
<td>12,742 km</td>
</tr>
<tr>
<th>2</th>
<td>Mass:</td>
<td>6.39 × 10^23 kg</td>
<td>5.97 × 10^24 kg</td>
</tr>
<tr>
<th>3</th>
<td>Moons:</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>Distance from Sun:</td>
<td>227,943,824 km</td>
<td>149,598,262 km</td>
</tr>
<tr>
<th>5</th>
<td>Length of Year:</td>
<td>687 Earth days</td>
<td>365.24 days</td>
</tr>
<tr>
<th>6</th>
<td>Temperature:</td>
<td>-87 to -5 °C</td>
<td>-88 to 58°C</td>
</tr>
</tbody>
</table>
###Markdown
Mars Hemispheres
###Code
url4 = 'https://marshemispheres.com/'
browser.visit(url4)
html = browser.html
soup = bs(html, 'html.parser')
print(soup.prettify())
items = soup.find_all('div', class_='item')
titles = []
for i in items:
titles.append(i.find('h3').text.strip())
titles
stopwords = ['thumbnail']
for word in list(titles): # iterating on a copy since removing will mess things up
if word in stopwords:
titles.remove(word)
titles
items2 = soup.find_all('a', class_='itemLink product-item')
items2
srcs_all = []
for i in items2:
srcs_all.append(url4 + i['href'])
srcs_all
srcs_unique = np.unique(srcs_all)
srcs_unique
srcs_unique_list = srcs_unique.tolist()
del srcs_unique_list[0]
srcs_unique_list
titles = []
for i in items:
titles.append(i.find('h3').text.strip())
titles
srcs = []  # collect the full-resolution image urls
for i in srcs_unique_list:
browser.visit(i)
tm.sleep(1)
html = browser.html
soup = bs(html, 'html.parser')
high_def = url4 + soup.find('img', class_='wide-image')['src']
srcs.append(high_def)
srcs
all_thumbs_mars = soup.find_all("img", class_ = "thumb")
all_thumbs_mars
all_thumbs_mars[1]['src']
titles
srcs
Hemi_Dict = []
for i in range(len(titles)):
Hemi_Dict.append({'title':titles[i],'img_url':srcs[i]})
Hemi_Dict
for i in range(len(Hemi_Dict)):
print(Hemi_Dict[i]['title'])
print(Hemi_Dict[i]['img_url'] + '\n')
###Output
Cerberus Hemisphere Enhanced thumbnail
https://marshemispheres.com/images/39d3266553462198bd2fbc4d18fbed17_cerberus_enhanced.tif_thumb.png
Schiaparelli Hemisphere Enhanced thumbnail
https://marshemispheres.com/images/08eac6e22c07fb1fe72223a79252de20_schiaparelli_enhanced.tif_thumb.png
Syrtis Major Hemisphere Enhanced thumbnail
https://marshemispheres.com/images/55a0a1e2796313fdeafb17c35925e8ac_syrtis_major_enhanced.tif_thumb.png
Valles Marineris Hemisphere Enhanced thumbnail
https://marshemispheres.com/images/4e59980c1c57f89c680c0e1ccabbeff1_valles_marineris_enhanced.tif_thumb.png
###Markdown
Mars Facts
* Visit the Mars Facts webpage [here](https://galaxyfacts-mars.com) and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.
* Use Pandas to convert the data to a HTML table string.
###Code
pd.read_html('https://galaxyfacts-mars.com/')[0].to_html()
###Output
_____no_output_____
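###Markdown
A slightly fuller sketch of the same step: keep the Mars / Earth comparison table, give it readable column names, and hold on to the HTML string. The column labels below are an assumption about the layout of the first table on the page.
###Code
# read the first table from the facts page and tidy it up
facts_df = pd.read_html('https://galaxyfacts-mars.com/')[0]
facts_df.columns = ['Description', 'Mars', 'Earth']  # assumed column layout
facts_df = facts_df.set_index('Description')
facts_html = facts_df.to_html()  # HTML string that can be embedded in a web page
facts_html
###Output
_____no_output_____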
###Markdown
Mars Hemispheres
* Visit the astrogeology site [here](https://marshemispheres.com/) to obtain high resolution images for each of Mar's hemispheres.
* You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.
* Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys `img_url` and `title`.
* Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
###Code
browser.visit('https://marshemispheres.com/')
hemispheres = []
for i in range(4):
hemisphere = {}
hemisphere['title'] = browser.find_by_css('a.itemLink h3')[i].text
browser.find_by_css('a.itemLink h3')[i].click()
hemisphere['img_url'] = browser.find_by_text('Sample')['href']
browser.back()
hemispheres.append(hemisphere)
browser.quit()
hemispheres
###Output
_____no_output_____
###Markdown
Step 1 - Scraping
Complete your initial scraping using Jupyter Notebook, BeautifulSoup, Pandas, and Requests/Splinter.
* Create a Jupyter Notebook file called `mission_to_mars.ipynb` and use this to complete all of your scraping and analysis tasks. The following outlines what you need to scrape.
NASA Mars News
* Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
Example:
news_title = "NASA's Next Mars Mission to Investigate Interior of Red Planet"
news_p = "Preparation of NASA's next spacecraft to Mars, InSight, has ramped up this summer, on course for launch next May from Vandenberg Air Force Base in central California -- the first interplanetary launch in history from America's West Coast."
###Code
# Dependencies (You may need to !pip install splinter or BeautifulSoup)
from splinter import Browser
from bs4 import BeautifulSoup
import pandas as pd
import re
!pip install splinter
!pip install bs4
# Mac Users
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
!which chromedriver
# Link to chromedriver
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
# URL for NASA Mars news
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
#Scrape the NASA Mars News Site
#Assign the text to variables that you can reference later.
html = browser.html
soup1 = BeautifulSoup(html, 'html.parser')
#Find the Latest news Title
article = soup1.find("div", class_='list_text')
news_title = article.find("div", class_="content_title").text
print(news_title)
#Find the Latest news paragraph
news_p =soup1.body.find("div", class_="article_teaser_body").text
print(news_p)
###Output
6 Things to Know About NASA's Ingenuity Mars Helicopter
The first helicopter attempting to fly on another planet is a marvel of engineering. Get up to speed with these key facts about its plans.
###Markdown
JPL Mars Space Images - Featured Image
* Visit the url for JPL Featured Space Image https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars.
* Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called `featured_image_url`.
* Make sure to find the image url to the full size `.jpg` image.
* Make sure to save a complete url string for this image.
###Code
# Mac Users
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
#!which chromedriver
# Link to chromedriver
# executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
# browser = Browser('chrome', **executable_path, headless=False)
# Visit the following URL for JPL Featured Space Image
url2 = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url2)
# Click the Full Image button.
browser.click_link_by_id('full_image')
# Click on "More Info" to get the full image
browser.click_link_by_partial_text('more info')
# DEPRECATION CODE SUGGESTION
# Click on "More Info" to get the full image
browser.links.find_by_partial_text('more info')
# Create the soup item
image_html = browser.html
soup2 = BeautifulSoup(image_html, 'html.parser')
# Inspect Console
# The large image is inside element with class = "lede"
result2= soup2.find(class_="lede")
result2
# Note: The href is within the 'a' element.
# Add/Concatenate the base url portion to get the full url to --> goto full size image
featured_image_url=result2.a
featured_image_url = 'https://www.jpl.nasa.gov' + featured_image_url['href']
featured_image_url
###Output
_____no_output_____
###Markdown
Mars Weather
* Visit the Mars Weather twitter account here and scrape the latest Mars weather tweet from the page. Save the tweet text for the weather report as a variable called mars_weather.
* Note: Be sure you are not signed in to twitter, or scraping may become more difficult.
* Note: Twitter frequently changes how information is presented on their website. If you are having difficulty getting the correct html tag data, consider researching Regular Expression Patterns and how they can be used in combination with the .find() method.
Example:
`mars_weather = 'Sol 1801 (Aug 30, 2017), Sunny, high -21C/-5F, low -80C/-112F, pressure at 8.82 hPa, daylight 06:09-17:55'`
Mars Facts
* Visit the Mars Facts webpage here and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.
* Use Pandas to convert the data to a HTML table string.
Mars Hemispheres
* Visit the USGS Astrogeology site here to obtain high resolution images for each of Mar's hemispheres.
* You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.
* Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title.
* Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
Example:
`hemisphere_image_urls = [ {"title": "Valles Marineris Hemisphere", "img_url": "..."}, {"title": "Cerberus Hemisphere", "img_url": "..."}, {"title": "Schiaparelli Hemisphere", "img_url": "..."}, {"title": "Syrtis Major Hemisphere", "img_url": "..."},]`
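Below is a minimal, hedged sketch of the Mars weather scrape described above, reusing the `browser` and `BeautifulSoup` objects already set up in this notebook; Twitter's markup changes often, so the span-scanning selector is an assumption rather than a guaranteed recipe.
###Code
# sketch: pull the most recent InSight weather tweet from the Mars Weather account
weather_url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(weather_url)
weather_soup = BeautifulSoup(browser.html, 'html.parser')
mars_weather = None
for span in weather_soup.find_all('span'):
    # weather tweets start with "InSight sol ..."; keep the first one found
    if 'InSight' in span.get_text():
        mars_weather = span.get_text()
        break
mars_weather
###Output
_____no_output_____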
###Code
# Start here...
# Make sure to find the image url to the full size .jpg image
full_image = browser.find_by_id("full_image")
full_image.click()
# Navigate to the "more info" page
browser.is_element_present_by_text("more info", wait_time=1)
more_info = browser.find_link_by_partial_text("more info")
more_info.click()
# Create image object
soup_image = BeautifulSoup(browser.html, "html.parser")
# Get image source info
img_url = soup_image.select_one("figure.lede a img").get("src")
img_url
# Design an XPATH selector to grab the "Hustle and Bustle at Center of Milky Way" featured image
#xpath = '//td//a[@class="image"]/img'
# Use splinter to Click the "Hustle and Bustle at Center of Milky Way" image
# to bring up the full resolution image
# results = browser.find_by_xpath(xpath)
# img = results[0]
# img.click()
# Scrape the browser into soup and use soup to find the full resolution image of mars
# Save the image url to a variable called `img_url`
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
featured_image_url = soup.find("img", class_="jpg")["src"]
featured_image_url
# Use the requests library to download and save the image from the `img_url` above
import requests
import shutil
response = requests.get(img_url, stream=True)
with open('img.png', 'wb') as out_file:
shutil.copyfileobj(response.raw, out_file)
# Display the image with IPython.display
from IPython.display import Image
Image(url='img.png')
# Click the 'Next' button on each page
try:
browser.click_link_by_partial_text('next')
except:
print("Scraping Complete")
# classroom = db.classroom.find()
# classroom
# saved_students = []
# for student in classroom:
# saved_students.append(student)
# print(student)
# url: https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars
###Output
_____no_output_____
###Markdown
Mission to Mars
###Code
#Dependencies
import pandas as pd
from bs4 import BeautifulSoup as bs
import requests
from splinter import Browser
from splinter.exceptions import ElementDoesNotExist
###Output
_____no_output_____
###Markdown
Mars News
###Code
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
# url for site to be scraped
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
#Mars latest news title and paragraph text
html = browser.html
soup = bs(html, 'html.parser')
#browser click link for first article
browser.links.find_by_partial_text("Join NASA")
html = browser.html
soup = bs(html, 'html.parser')
results = soup.find_all('div', class_="grid_layout")
# print(results)
for result in results:
title = result.find('h1', class_="article_title")
# print(title)
title_text = title.text
print(title_text)
paragraph = result.find('p')
# print(paragraph)
p_text = paragraph.text
print(p_text)
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image
###Code
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
# url for site to be scraped
url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url)
#browser click full image button
browser.click_link_by_text('FULL IMAGE')
html = browser.html
soup = bs(html, 'html.parser')
browser.click_link_by_text('more info')
results = soup.find('div', class_="grid_layout")
print(results)
full_image_url = soup.find('a', class_='main_image')
print(full_image_url)
###Output
None
###Markdown
Mars Weather
###Code
# Mars Weather
# url to be scraped
url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
# find the latest tweet
results = soup.find('div', class_="css-1dbjc4n")
print(results)
for result in results:
latest_tweet = result.find('div', class_="css-901oao r-hkyrab r-1qd0xha r-a023e6 r-16dba41 r-ad9z0x r-bcqeeo r-bnwqim r-qvutc0")
print(latest_tweet)
# print the text from the tweet
mars_weather = latest_tweet.text
print(mars_weather)
###Output
InSight sol 578 (2020-07-12) low -88.3ºC (-126.9ºF) high -5.2ºC (22.7ºF)
winds from the WNW at 5.0 m/s (11.1 mph) gusting to 13.4 m/s (30.1 mph)
pressure at 7.70 hPa
###Markdown
Mars Facts
###Code
# url to be scraped
url = 'https://space-facts.com/mars/'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
# main = soup.find('table', id='tablepress-p-mars-no-2')
# main
mars_facts = pd.read_html(url)
mars_facts
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
# Mars Hemispheres
# url to be scraped
hemispheres_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(hemispheres_url)
html = browser.html
soup = bs(html, 'html.parser')
# You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image
results = soup.find('div', class_="collapsible results")
print(results)
# generate list of mars hemisphere image urls
url_list = []
for result in results.find_all('a', href=True):
if result.text:
full_image_url = (f"{hemispheres_url}result.text")
url_list.append(full_image_url)
image_title = results.find('h3')
print(url_list)
# Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere
#name. Use a Python dictionary to store the data using the keys img_url and title.
###Output
['https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Marsresult.text', 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Marsresult.text', 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Marsresult.text', 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Marsresult.text']
###Markdown
web-scraping-challenge
###Code
# import dependencies
from splinter import Browser
from bs4 import BeautifulSoup as bs
import requests
import os
executable_path = {'executable_path': 'C:\\Users\\home\\Downloads\\tmp\\chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
news_title = soup.find_all("div", class_="content_title")
#news_title
news_title[1].text.strip()
news_p = soup.find("div", class_="article_teaser_body").text
news_p
# closing the browser after scraping
browser.quit()
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image
###Code
executable_path = {'executable_path': 'C:\\Users\\home\\Downloads\\tmp\\chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
featured_image_url = soup.find_all("article", class_="carousel_item")
featured_image_url = soup.find("a", class_="button fancybox")
featured_image_url = 'https://www.jpl.nasa.gov' + featured_image_url['data-fancybox-href']
featured_image_url
# closing the browser after scraping
browser.quit()
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
executable_path = {'executable_path': 'C:\\Users\\home\\Downloads\\tmp\\chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://space-facts.com/mars/'
browser.visit(url)
import pandas as pd
facts = pd.read_html(url)
#facts
facts_df = facts[0]
facts_df.columns = ['descr', 'fact']
facts_df
html_table = facts_df.to_html()
html_table
#facts_df.to_html('mars_facts.html')
# closing the browser after scraping
browser.quit()
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
executable_path = {'executable_path': 'C:\\Users\\home\\Downloads\\tmp\\chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
products = browser.find_by_tag('h3')
# storing <h3> tag into a list
prod_lst = []
for t in products:
prod_lst.append(t.value)
# pre-define url
url = 'https://astrogeology.usgs.gov'
results = []
# looping through list to acquire img_url
for prod in prod_lst:
# define an empty dictionary to save each title & url pair
tmp_dict = {}
# click through products list to scrap page into soup
browser.click_link_by_partial_text(prod)
html = browser.html
soup = bs(html, 'html.parser')
# accessing & parsing url
img_url = soup.find("div", class_="page-background")
img_url = img_url['style'].split("'")[1]
img_url = f"{url}{img_url}"
# assigning key, value pair into dictionary
tmp_dict['title'] = prod
tmp_dict['img_url'] = img_url
# storing title & url to the list
results.append(tmp_dict)
# going back to the products page
browser.back()
# close the browser after scraping
browser.quit()
###Output
_____no_output_____
###Markdown
dependencies
###Code
import requests
from bs4 import BeautifulSoup
from splinter import Browser
import pandas as pd
import pymongo
###Output
_____no_output_____
###Markdown
NASA Mars News
* Scrape the [NASA Mars News Site](https://mars.nasa.gov/news/) and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
###Code
executable_path = {'executable_path': 'chromedriver.exe'}
def NasaMarsNews():
url = 'https://mars.nasa.gov/news/?page=1&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest'
browser = Browser('chrome', **executable_path, headless=True)
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
title = soup.find('div', class_='bottom_gradient').text
news = soup.find('div', class_='article_teaser_body').text
browser.quit()
return (title, news)
###Output
_____no_output_____
###Markdown
check if the NasaMarsNews is scraping correctly
###Code
NasaMarsNews()
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image Visit the url for JPL Featured Space Image [here](https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars).
###Code
#src="www.jpl.nasa.gov/spaceimages/images/largesize/PIA24012_hires.jpg"
#url = 'https://photojournal.jpl.nasa.gov/jpegMod/PIA23896_modest.jpg'
def MarsSpaceImages ():
url = 'https://www.jpl.nasa.gov/spaceimages/details.php?id=PIA23896'
browser = Browser('chrome', **executable_path, headless=False)
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
relative_path = soup.find('figure', class_='lede').a['href']
featured_image_url = 'https://www.jpl.nasa.gov' + relative_path
return(featured_image_url)
###Output
_____no_output_____
###Markdown
check if the function is scraping correctly
###Code
MarsSpaceImages()
###Output
_____no_output_____
###Markdown
Mars Weather Visit the Mars Weather twitter account [here](https://twitter.com/marswxreport?lang=en) and scrape the latest Mars weather tweet from the page. Save the tweet text for the weather report as a variable called `mars_weather`.
###Code
def MarsWeather():
url = 'https://twitter.com/marswxreport?lang=en'
browser = Browser('chrome', **executable_path, headless=True)
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
mars_weather_all = soup.find_all('span')  # find_all returns an indexable list of <span> tags
print(mars_weather_all);
for i in range(len(mars_weather_all)):
print(i);
if("InSight" in mars_weather_all[i].text):
mars_weather = mars_weather_all[i].text
break
return(mars_weather)
###Output
_____no_output_____
###Markdown
check if the function is scraping correctly
###Code
MarsWeather()
from selenium import webdriver
def MarsWeather():
url = 'https://twitter.com/marswxreport?lang=en'
driver = webdriver.Chrome()
driver.get(url)
html1 = driver.page_source
driver.close()
soup = BeautifulSoup(html1, 'html.parser')
mars_weather = soup.find_all('div', class_="css-901oao r-hkyrab r-1qd0xha r-a023e6 r-16dba41 r-ad9z0x r-bcqeeo r-bnwqim r-qvutc0")
return mars_weather[0].text
###Output
_____no_output_____
###Markdown
check if the function is scraping correctly
###Code
MarsWeather()
###Output
_____no_output_____
###Markdown
Mars Facts Visit the Mars Facts webpage [here](https://space-facts.com/mars/) and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.
###Code
def MarsFacts():
url = 'https://space-facts.com/mars/'
tables = pd.read_html(url)
df = tables[0]
    df.columns = ['Characteristic Feature', 'Value']
    df.set_index('Characteristic Feature', inplace=True)
df.index.names = ['']
return(df)
MarsFacts()
def MarsFacts():
url="https://space-facts.com/mars/"
browser=Browser("chrome", **executable_path, headless=False)
browser.visit(url)
fact_table=(pd.read_html(browser.html))
fact_table=fact_table[0]
    fact_table.columns=['Characteristic Feature', 'Value']
fact_table=fact_table.to_html(classes='marsinformation')
fact_table=fact_table.replace('\n', ' ')
browser.quit()
return (fact_table)
MarsFacts()
###Output
_____no_output_____
###Markdown
Mars Hemispheres Visit the USGS Astrogeology site [here](https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars) to obtain high resolution images for each of Mar's hemispheres. Cerberus Hemisphere
###Code
#finding all of the URL
url_list = []
base ='https://astrogeology.usgs.gov'
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
results = soup.find_all('div', class_="item")
for result in results:
relative_url = result.find('a')["href"]
final_url = base + relative_url
url_list.append(final_url)
url_list
def Mars_Hemispheres(url_list):
base ='https://astrogeology.usgs.gov'
browser = Browser('chrome', **executable_path, headless=False)
hemisphere_image_urls = []
for url in url_list:
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
title = soup.title.text
title = title.split('|')
title = title[0]
relative_path = soup.find('img', class_="wide-image")['src']
image_url = base + relative_path
image_dic = {"title":title,
"img_url":image_url}
hemisphere_image_urls.append(image_dic)
    browser.quit()
    return hemisphere_image_urls
Mars_Hemispheres(url_list)
###Output
_____no_output_____
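###Markdown
Each of these helper functions opens its own Chrome instance, so an exception raised by a missing selector can still leave the window running. One hedged alternative, assuming the installed splinter version supports the context-manager protocol, is sketched below; the title parsing mirrors the function above.
###Code
def hemisphere_titles(url_list):
    # the with-block closes Chrome even if a selector lookup raises
    titles = []
    with Browser('chrome', **executable_path, headless=True) as browser:
        for url in url_list:
            browser.visit(url)
            soup = BeautifulSoup(browser.html, 'html.parser')
            titles.append(soup.title.text.split('|')[0].strip())
    return titles
###Output
_____no_output_____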
###Markdown
NASA Mars News
###Code
html = browser.html
soup = BeautifulSoup(html, "html.parser")
elem_slide = soup.select_one('div.list_text')
news_title = elem_slide.find('div', class_='content_title').get_text()
news_title
news_p = elem_slide.find('div', class_='article_teaser_body').get_text()
news_p
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image
###Code
url = "https://spaceimages-mars.com"
browser.visit(url)
full_image = browser.find_by_tag('button')[1]
full_image.click()
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
url_img = soup.find("img", class_="fancybox-image")["src"]
url_img
featured_image_url = f'https://spaceimages-mars.com/{url_img}'
featured_image_url
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
url = "https://galaxyfacts-mars.com"
mars_tables = pd.read_html(url)
mars_facts = mars_tables[0]
mars_facts.head()
mars_facts.columns=['Description', 'Mars', 'Earth']
mars_facts.set_index('Description', inplace=True)
mars_facts
mars_facts.to_html()
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
url = 'https://marshemispheres.com'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html,'html.parser')
hemispheres = soup.find_all("div", class_="item")
hemisphere_image_urls = []
for x in hemispheres:
title = x.find("h3").text
hemispheres_img = x.find("a", class_="itemLink product-item")["href"]
browser.visit(url + "/" + hemispheres_img)
html = browser.html
web_info = BeautifulSoup(html, "html.parser")
    img_url = url + "/" + web_info.find("img", class_="wide-image")["src"]
hemisphere_image_urls.append({"title" : title, "img_url" : img_url})
print("")
print(title)
print(img_url)
###Output
Cerberus Hemisphere Enhanced
https://marshemispheres.comimages/f5e372a36edfa389625da6d0cc25d905_cerberus_enhanced.tif_full.jpg
Schiaparelli Hemisphere Enhanced
https://marshemispheres.comimages/3778f7b43bbbc89d6e3cfabb3613ba93_schiaparelli_enhanced.tif_full.jpg
Syrtis Major Hemisphere Enhanced
https://marshemispheres.comimages/555e6403a6ddd7ba16ddb0e471cadcf7_syrtis_major_enhanced.tif_full.jpg
Valles Marineris Hemisphere Enhanced
https://marshemispheres.comimages/b3c7c6c9138f57b4756be9b9c43e3a48_valles_marineris_enhanced.tif_full.jpg
###Markdown
Step 1 - Scraping NASA Mars News
###Code
# URL of page to be scraped
url = 'https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest'
# Retrieve page with the requests module
response = requests.get(url)
response
# Create BeautifulSoup object; parse with 'html.parser'
soup = BeautifulSoup(response.text, 'html.parser')
# Examine the results, then determine element that contains sought info
#print(soup.prettify())
# results are returned as an iterable list
results = soup.find_all("div", class_="slide")
#print(results)
# Loop through returned results
article_list = []
for result in results:
# Error handling
try:
# Identify and return title of article
time.sleep(3)
news_title = result.find('div', class_="content_title")
title = news_title.find('a').text
# Identify and return description of article
news_p = result.find('div', class_="rollover_description")
description = news_p.find('div', class_="rollover_description_inner").text
        # Print results only if both the title and description are available
if (news_title and news_p):
print('-------------')
print(f'Article Title: {title}')
article_list.append(result)
print(f'Article Description: {description}')
#description_list.append(result)
except AttributeError as e:
print(e)
#Print the latest news title and description
print(f'Latest News Title: {title}')
print(f'Description of Latest News Title: {description}')
###Output
Latest News Title:
NASA's Curiosity Mars Rover Finds a Clay Cache
Description of Latest News Title:
The rover recently drilled two samples, and both showed the highest levels of clay ever found during the mission.
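###Markdown
One small addition worth noting for the requests-based approach above (a sketch, not part of the original run): checking the HTTP status before parsing avoids silently scraping an error page.
###Code
# Fail fast on a 4xx/5xx response before handing the text to BeautifulSoup.
resp = requests.get('https://mars.nasa.gov/news/')
resp.raise_for_status()
checked_soup = BeautifulSoup(resp.text, 'html.parser')
print(resp.status_code)
###Output
_____no_output_____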
###Markdown
JPL Mars Space Images - Featured Image
###Code
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
!which chromedriver
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url2 = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url2)
# HTML object
html = browser.html
# Parse HTML with Beautiful Soup
soup = BeautifulSoup(html, 'html.parser')
images = soup.find_all('article', class_='carousel_item')
#images
#Loop through images to get partial url
for image in images:
image_url = image.find('a')['data-fancybox-href']
#print(image_url)
#create variable for featured image url
main_url = 'https://www.jpl.nasa.gov/'
featured_image_url = main_url + image_url
print(f"Click this link to see the current featured image: {featured_image_url}")
###Output
Click this link to see the current featured image: https://www.jpl.nasa.gov//spaceimages/images/mediumsize/PIA18328_ip.jpg
###Markdown
Mars Weather
###Code
#Set-up twitter connections
#twitter_url = 'https://twitter.com/marswxreport?lang=en'
#browser.visit(twitter_url)
# HTML object
#html = browser.html
# Parse HTML with Beautiful Soup
#soup = BeautifulSoup(html, 'html.parser')
# Find the div that contain tweets
#recent_tweets = soup.find('div', class_="css-901oao r-jwli3a r-1qd0xha r-a023e6 r-16dba41 r-ad9z0x r-bcqeeo r-bnwqim r-qvutc0").text
#print(recent_tweets)
#Create a variable for most recent Mars weather data:
#mars_weather = recent_tweets
#print(f"The most recent tweet regaring Mars weather: {mars_weather}")
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
#Create link for pandas URL
pandas_url = 'https://space-facts.com/mars/'
tables = pd.read_html(pandas_url)
#tables
df = tables[0]
df.columns = ['Topic', 'Data']
df.set_index('Topic', inplace=True)
df
#Create html table
html_table = df.to_html()
html_table = html_table.replace('\n', '')
df.to_html('table.html')
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
#Create url to hemispheres page
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
url_start = 'https://astrogeology.usgs.gov'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
#soup
#Create a loop to get the link to each page that will be looped through to collect title and image url
img_urls = []
hems = soup.find_all('div', class_="item")
for item in hems:
link = item.find('a')
href = link['href']
url = url_start + href
print(url)
img_urls.append(url)
#print(img_urls)
#executable_path = {'executable_path': 'chromedriver.exe'}
#browser = Browser('chrome', **executable_path, headless=True)
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
# find all div classes 'item'
hemispheres = soup.find_all('div', class_="item")
# get list of URLs for each hemisphere
img_url_text = []
for item in hemispheres:
# Use Beautiful Soup's find() method to navigate and retrieve attributes
link = item.find('a')
href = link['href']
#print('-----------')
#print(header.text)
url = ('https://astrogeology.usgs.gov' + href)
#print(url)
#dict = {'title':header.text}
#titles.append(dict)
img_url_text.append(url)
hemisphere_image_urls = []
#title_list = []
for url in img_url_text:
time.sleep(2)
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
titles = soup.find('h2',class_="title")
images = soup.find("img", class_="wide-image")["src"]
browser.links.find_by_text('Sample')
image = browser.windows[0].next.url
full_img_url = 'https://astrogeology.usgs.gov' + images
urls = {
'title':titles.text,
'img_url':full_img_url
}
#title_list.append(titles)
hemisphere_image_urls.append(urls)
print(titles.text)
print(full_img_url)
print('-----------')
print(hemisphere_image_urls)
###Output
[{'title': 'Cerberus Hemisphere Enhanced', 'img_url': '/cache/images/cfa62af2557222a02478f1fcd781d445_cerberus_enhanced.tif_full.jpg'}, {'title': 'Schiaparelli Hemisphere Enhanced', 'img_url': '/cache/images/3cdd1cbf5e0813bba925c9030d13b62e_schiaparelli_enhanced.tif_full.jpg'}, {'title': 'Syrtis Major Hemisphere Enhanced', 'img_url': '/cache/images/ae209b4e408bb6c3e67b6af38168cf28_syrtis_major_enhanced.tif_full.jpg'}, {'title': 'Valles Marineris Hemisphere Enhanced', 'img_url': '/cache/images/7cf2da4bf549ed01c17f206327be4db7_valles_marineris_enhanced.tif_full.jpg'}]
###Markdown
NASA Mars News* Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
###Code
executable_path = {'executable_path': '/Users/ekaster/Documents/DATABOOTCAMP-MATERIAL/chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
mars_news_url = 'https://mars.nasa.gov/news/'
browser.visit(mars_news_url)
response = requests.get(mars_news_url)
news_soup = bs(response.text, 'html.parser')
news_title = news_soup.find("div", class_="content_title").text
news_p = news_soup.find("div", class_="rollover_description_inner").text
print(news_title)
print(news_p)
###Output
NASA to Broadcast Mars 2020 Perseverance Launch, Prelaunch Activities
Starting July 27, news activities will cover everything from mission engineering and science to returning samples from Mars to, of course, the launch itself.
###Markdown
JPL Mars Space Images - Featured Image* Visit the url for JPL Featured Space Image here.* Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called featured_image_url.* Make sure to find the image url to the full size .jpg image.* Make sure to save a complete url string for this image.
###Code
executable_path = {'executable_path' : '/Users/ekaster/Documents/DATABOOTCAMP-MATERIAL/chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
mars_img_url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(mars_img_url)
browser.click_link_by_partial_text('FULL IMAGE')
browser.click_link_by_partial_text('more info')
browser_soup = bs(browser.html, 'html.parser')
get_img_url = browser_soup.find('img', class_='main_image')
img_src_url = get_img_url.get('src')
featured_image_url = "https://www.jpl.nasa.gov" + img_src_url
print(featured_image_url)
###Output
https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA17470_hires.jpg
###Markdown
Mars Weather* Visit the Mars Weather twitter account here and scrape the latest Mars weather tweet from the page. Save the tweet text for the weather report as a variable called mars_weather.* Note: Be sure you are not signed in to twitter, or scraping may become more difficult.* Note: Twitter frequently changes how information is presented on their website. If you are having difficulty getting the correct html tag data, consider researching Regular Expression Patterns and how they can be used in combination with the .find() method.
###Code
executable_path = {'executable_path' : '/Users/ekaster/Documents/DATABOOTCAMP-MATERIAL/chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
mars_twitter_url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(mars_twitter_url)
twitter_soup = bs(browser.html, 'html.parser')
print(twitter_soup.prettify())
latest_tweets = twitter_soup.find('div', class_='css-901oao r-hkyrab r-1qd0xha r-a023e6 r-16dba41 r-ad9z0x r-bcqeeo r-bnwqim r-qvutc0').text
print(latest_tweets)
###Output
InSight sol 597 (2020-08-01) low -91.0ºC (-131.8ºF) high -16.9ºC (1.6ºF)
winds from the WNW at 8.0 m/s (17.9 mph) gusting to 20.2 m/s (45.1 mph)
pressure at 7.90 hPa
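###Markdown
Following the regular-expression hint in the Mars Weather instructions above, here is a hedged sketch: a compiled pattern passed to `find()` keeps working even if Twitter renames its CSS classes. It assumes the weather tweet still begins with "InSight sol".
###Code
import re

# match the tweet by its text instead of by a brittle CSS class name
pattern = re.compile(r'InSight sol')
weather_soup = bs(browser.html, 'html.parser')
tweet_span = weather_soup.find('span', text=pattern)
mars_weather = tweet_span.get_text() if tweet_span else None
print(mars_weather)
###Output
_____no_output_____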
###Markdown
Mars Facts* Visit the Mars Facts webpage here and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.* Use Pandas to convert the data to a HTML table string.
###Code
request_mars_space_facts = requests.get("https://space-facts.com/mars/")
mars_space_table_read = pd.read_html(request_mars_space_facts.text)
mars_space_table_read
mars_df = mars_space_table_read[0]
mars_df.columns = ['Description','Value']
mars_df.set_index(['Description'], inplace=True)
mars_df
mars_data_html = mars_df.to_html()
mars_data_html
mars_data_html.replace('\n', '')
###Output
_____no_output_____
###Markdown
Mars Hemispheres* Visit the USGS Astrogeology site here to obtain high resolution images for each of Mar's hemispheres.* You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.* Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title.* Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
###Code
executable_path = {'executable_path': '/Users/ekaster/Documents/DATABOOTCAMP-MATERIAL/chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
mars_hemi_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(mars_hemi_url)
mars_hemi_html = browser.html
mars_hemi_soup = bs(mars_hemi_html, 'html.parser')
items = mars_hemi_soup.find_all('div', class_='item')
mars_hemi_img_url = []
mars_hemi_main_url = 'https://astrogeology.usgs.gov'
for i in items:
title = i.find('h3').text
partial_img_url = i.find('a', class_='itemLink product-item')['href']
browser.visit(mars_hemi_main_url + partial_img_url)
partial_img_html = browser.html
soup = bs( partial_img_html, 'html.parser')
img_url = mars_hemi_main_url + soup.find('img', class_='wide-image')['src']
mars_hemi_img_url.append({"title" : title, "img_url" : img_url})
mars_hemi_img_url
###Output
_____no_output_____
###Markdown
Mission to MarsWeb application that scrapes various websites for data related to the Mission to Mars
###Code
import pandas as pd
from splinter import Browser
from bs4 import BeautifulSoup
import requests
import os
#NASA Mars News
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
#Scrape the Nasa News
url_nasa_news = "https://mars.nasa.gov/news/"
browser.visit(url_nasa_news)
#Save News title and paragraph
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
#first class with content_title was giving "Mars now"
news_title = soup.find_all("div", class_= "content_title")[1].text
news_p = soup.find("div", class_= "article_teaser_body").text
print(news_title)
print(news_p)
#JPL Mars Space Images - Featured Image
url_mars_image = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url_mars_image)
browser.click_link_by_partial_text('FULL IMAGE')
browser.click_link_by_partial_text('more info')
html=browser.html
soup = BeautifulSoup(html, 'html.parser')
image_page = soup.find("figure", class_="lede")
image_url = image_page.find("a")["href"]
featured_image_url =f"https://www.jpl.nasa.gov{image_url}"
print(featured_image_url)
# Mars Weather
url_mars_weather = "https://twitter.com/marswxreport?lang=en"
response = requests.get(url_mars_weather)
soup = BeautifulSoup(response.text, 'html.parser')
#print(soup)
mars_weather = soup.find("p", class_= "TweetTextSize TweetTextSize--normal js-tweet-text tweet-text").text
print(mars_weather)
# Mars Facts
url_mars_facts = "https://space-facts.com/mars/"
mars_facts = pd.read_html(url_mars_facts)
mars_facts_df = pd.DataFrame(mars_facts[0])
mars_facts_df = mars_facts_df.rename(columns={0:"Mars Facts", 1:"Value"})
#store html table
html_table = mars_facts_df.to_html()
html_table = html_table.replace("\n", "")
print(html_table)
#save df as html file
mars_facts_df.to_html("mars_facts.html")
# Mars Hemispheres
mars_hemispheres_url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
hemispheres_response = requests.get(mars_hemispheres_url)
soup = BeautifulSoup(hemispheres_response.text, 'html.parser')
#image data
image_data = soup.find_all("div", class_="item")
#hemisphere dictionary
hemisphere_image_urls = []
for image in image_data:
image_title = image.find("h3").text
image_url = image.find("a", class_="itemLink product-item")['href']
full_image_url = f'https://astrogeology.usgs.gov{image_url}'
# find image src
image_response = requests.get(full_image_url)
image_soup = BeautifulSoup(image_response.text, 'html.parser')
image_src = image_soup.find('img', class_="wide-image")["src"]
img_src_url = f'https://astrogeology.usgs.gov{image_src}'
#add image title and source url to hemisphere dictionary
hemisphere_image_urls.append({"title":image_title, "img_url": img_src_url })
print(hemisphere_image_urls)
###Output
[{'title': 'Cerberus Hemisphere Enhanced', 'img_url': 'https://astrogeology.usgs.gov/cache/images/cfa62af2557222a02478f1fcd781d445_cerberus_enhanced.tif_full.jpg'}, {'title': 'Schiaparelli Hemisphere Enhanced', 'img_url': 'https://astrogeology.usgs.gov/cache/images/3cdd1cbf5e0813bba925c9030d13b62e_schiaparelli_enhanced.tif_full.jpg'}, {'title': 'Syrtis Major Hemisphere Enhanced', 'img_url': 'https://astrogeology.usgs.gov/cache/images/ae209b4e408bb6c3e67b6af38168cf28_syrtis_major_enhanced.tif_full.jpg'}, {'title': 'Valles Marineris Hemisphere Enhanced', 'img_url': 'https://astrogeology.usgs.gov/cache/images/7cf2da4bf549ed01c17f206327be4db7_valles_marineris_enhanced.tif_full.jpg'}]
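###Markdown
As an optional aside (not part of the original script), the scraped list can be written to disk so the web application can be developed without re-hitting the sites; the filename below is just an example.
###Code
import json

# persist the hemisphere results collected above as JSON
with open('hemisphere_image_urls.json', 'w') as fp:
    json.dump(hemisphere_image_urls, fp, indent=2)
print('saved', len(hemisphere_image_urls), 'hemisphere records')
###Output
_____no_output_____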
###Markdown
NASA Mars News
###Code
mars={}
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html,'html.parser')
resultNasaMars = soup.findAll("div",class_="content_title")
nasaTitle = resultNasaMars[1].a.text
result = soup.find("div" ,class_="article_teaser_body")
nasaPara = result.text
mars["news_title"] = nasaTitle
mars["news_p"] = nasaPara
mars
###Output
_____no_output_____
###Markdown
JPL Mars Space Images
###Code
url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url)
browser.find_by_id("full_image").click()
browser.find_link_by_partial_text("more info").click()
html = browser.html
soup = BeautifulSoup(html,'html.parser')
resultJPLimage = soup.find("figure",class_="lede")
resultJPLimage.a.img["src"]
imgJPL = 'https://www.jpl.nasa.gov/' + resultJPLimage.a.img["src"]
mars['featured_image_url'] = imgJPL
mars
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
mars_df = pd.read_html('https://space-facts.com/mars/')[0]
mars_df.columns = ["Description","Value"]
mars_df.set_index("Description", inplace = True)
mars["facts"] = mars_df.to_html()
mars
###Output
_____no_output_____
###Markdown
NASA Mars News JPL Mars Space Images - Featured Image
###Code
imageurl = 'https://spaceimages-mars.com/'
browser.visit(imageurl)
figure=browser.find_by_tag('button')[1]
figure.click()
html = browser.html
imagesoup = bs(html, 'html.parser')
image = imagesoup.find("img", class_= "fancybox-image").get('src')
print (image)
feature_image_url=f"https://spaceimages-mars.com/{image}"
print(feature_image_url)
mars_facts='https://galaxyfacts-mars.com/'
mars_fact_table=pd.read_html(mars_facts)
df = mars_fact_table[0]
df.columns = ['Mars-Earth Comparison', 'Mars', 'Earth']
html_table = df.to_html()
html_table
astrogeology_url="https://marshemispheres.com/"
browser.visit(astrogeology_url)
html = browser.html
soup = bs(html, 'html.parser')
main_url = soup.find_all('div', class_='item')
titles=[]
hemisphere_img_urls=[]
for img in main_url:
title = img.find('h3').text
url = img.find('a')['href']
hem_img_url= astrogeology_url+url
browser.visit(hem_img_url)
html = browser.html
soup = bs(html, 'html.parser')
hemisphere_img_original= soup.find('div',class_='downloads')
hemisphere_img_url=hemisphere_img_original.find('a')['href']
print(hemisphere_img_url)
img_data=dict({'title':title, 'img_url':hemisphere_img_url})
hemisphere_img_urls.append(img_data)
hemisphere_img_urls
###Output
_____no_output_____
###Markdown
News - Mars Exploration Program
###Code
import pandas as pd
from bs4 import BeautifulSoup
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
import time
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://redplanetscience.com/"
browser.visit(url)
time.sleep(2)
url
mars_pg = browser.html
soup = BeautifulSoup(mars_pg, 'html.parser')
soup
road_trip=soup.find_all("div",class_ = "content_title")[0].text
road_trip
road_trip_log=soup.find_all("div",class_ = "article_teaser_body")[0].text
road_trip_log
###Output
_____no_output_____
###Markdown
JPL MARS SPACE IMAGES
###Code
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://spaceimages-mars.com/"
browser.visit(url)
time.sleep(2)
url
selfie=browser.links.find_by_partial_text("FULL IMAGE")
selfie.click()
images=browser.html
soup_pic = BeautifulSoup(images, 'html.parser')
#get image source
image_url=soup_pic.find("img", class_="headerimage fade-in")["src"]
image_url
#get image url
featured_image_url=f"https://spaceimages-mars.com/{image_url}"
print(featured_image_url)
###Output
https://spaceimages-mars.com/image/featured/mars3.jpg
###Markdown
Mars facts
###Code
mars_facts=pd.read_html("https://galaxyfacts-mars.com/")[0]
print(mars_facts)
mars_facts.columns=["Info","Mars","Earth"]
mars_facts.set_index("Info",inplace=True)
mars_facts=mars_facts.iloc[1:7].to_html()
mars_facts
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://marshemispheres.com/"
browser.visit(url)
time.sleep(2)
hemispheres = []
items = browser.find_by_tag('h3')
#for the length of all items in loop
items_length = len(items)
for item in range(items_length):
diff_hemispheres = {}
browser.find_by_tag("h3")[item].click()
images=browser.html
hemisphere_pic = BeautifulSoup(images, 'html.parser')
title=hemisphere_pic.find("h2", class_="title").text
diff_hemispheres["title"]=title
figure=hemisphere_pic.find("img", class_="thumb")["src"]
diff_hemispheres["image_url"]=url + figure
hemispheres.append(diff_hemispheres)
browser.back()
hemispheres
browser.quit()
###Output
_____no_output_____
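###Markdown
The loop above stores the thumbnail (`img.thumb`) rather than the full-resolution jpeg. A hedged variant is sketched below, borrowing the "Sample" link pattern used in the other notebooks in this file; it assumes the page still exposes a Sample anchor for each hemisphere.
###Code
def full_resolution_hemispheres(url="https://marshemispheres.com/"):
    executable_path = {'executable_path': ChromeDriverManager().install()}
    browser = Browser('chrome', **executable_path, headless=True)
    browser.visit(url)
    results = []
    for i in range(4):
        # re-find the h3 elements on every pass to avoid stale references
        browser.find_by_tag("h3")[i].click()
        sample = browser.links.find_by_text('Sample').first
        title = browser.find_by_css("h2.title").text
        results.append({"title": title, "image_url": sample['href']})
        browser.back()
    browser.quit()
    return results
###Output
_____no_output_____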
###Markdown
Step 1 - ScrapingScrape the [NASA Mars News Site](https://mars.nasa.gov/news/) and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
###Code
url = "https://mars.nasa.gov/news/"
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
# retrieve the latest news title
news_title = soup.find('div',class_='content_title').text.strip()
print(news_title)
# retrieve tthe latest news paragraph
news_p = soup.find('div', class_='article_teaser_body').text.strip()
print(news_p)
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image* Visit the url for JPL Featured Space Image [here](https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars).* Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called `featured_image_url`.* Make sure to find the image url to the full size `.jpg` image.* Make sure to save a complete url string for this image.
###Code
jpl_url="https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(jpl_url)
# HTML object
html=browser.html
# Parse HTML
soup=bs(html,"html.parser")
# Retrieve image url
image=soup.find('div', class_="sm:object-cover object-cover")
image_url = image.find('img')['src']
featured_image_url=image_url
print(featured_image_url)
###Output
https://d2pn8kiwq2w21t.cloudfront.net/images/jpegPIA24508.2e16d0ba.fill-400x400-c50.jpg
###Markdown
Mars Facts* Visit the Mars Facts webpage [here](https://space-facts.com/mars/) and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.* Use Pandas to convert the data to a HTML table string.
###Code
# scrape the html and retrieve all tables
fact_url = 'https://space-facts.com/mars/'
tables=pd.read_html(fact_url)
print(tables)
mars_fact=tables[0]
mars_fact=mars_fact.rename(columns={0:"Description",1:"Value"})
mars_fact.set_index("Description",inplace=True)
mars_fact
html_table=mars_fact.to_html()
print(html_table)
###Output
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Value</th>
</tr>
<tr>
<th>Description</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Equatorial Diameter:</th>
<td>6,792 km</td>
</tr>
<tr>
<th>Polar Diameter:</th>
<td>6,752 km</td>
</tr>
<tr>
<th>Mass:</th>
<td>6.39 × 10^23 kg (0.11 Earths)</td>
</tr>
<tr>
<th>Moons:</th>
<td>2 (Phobos & Deimos)</td>
</tr>
<tr>
<th>Orbit Distance:</th>
<td>227,943,824 km (1.38 AU)</td>
</tr>
<tr>
<th>Orbit Period:</th>
<td>687 days (1.9 years)</td>
</tr>
<tr>
<th>Surface Temperature:</th>
<td>-87 to -5 °C</td>
</tr>
<tr>
<th>First Record:</th>
<td>2nd millennium BC</td>
</tr>
<tr>
<th>Recorded By:</th>
<td>Egyptian astronomers</td>
</tr>
</tbody>
</table>
###Markdown
Mars Hemispheres* Visit the USGS Astrogeology site [here](https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars) to obtain high resolution images for each of Mar's hemispheres.* You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.* Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys `img_url` and `title`.* Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
###Code
hemi_url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(hemi_url)
html=browser.html
soup=bs(html,'html.parser')
mars_hem = soup.find('div', class_='collapsible results')
images = mars_hem.find_all('div', class_='item')
hemisphere_image_urls=[]
for img in images:
result = img.find('div', class_='description')
title = result.h3.text
img_url = img.find('a')['href']
browser.visit("https://astrogeology.usgs.gov"+ img_url)
html=browser.html
soup=bs(html,'html.parser')
downloads = soup.find('div', class_='downloads')
img_src = downloads.find("a")["href"]
if (title and img_src):
print('-'*70)
print(title)
print(img_src)
hemisphere_image_urls.append({
'title': title,
'img_url': img_src
})
pprint.pprint(hemisphere_image_urls)
###Output
[{'img_url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg',
'title': 'Cerberus Hemisphere Enhanced'},
{'img_url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/schiaparelli_enhanced.tif/full.jpg',
'title': 'Schiaparelli Hemisphere Enhanced'},
{'img_url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/syrtis_major_enhanced.tif/full.jpg',
'title': 'Syrtis Major Hemisphere Enhanced'},
{'img_url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/valles_marineris_enhanced.tif/full.jpg',
'title': 'Valles Marineris Hemisphere Enhanced'}]
###Markdown
Web Scraping
###Code
# setup splinter
executable_path = {"executable_path": ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless = False)
url = "https://redplanetscience.com/"
browser.visit(url)
html = browser.html
soup = bs(html, "html.parser")
#soup
#latest News Title
news_title = soup.find('div', class_="content_title")
print(f"{news_title.text}")
#Paragraph Text
news_p = soup.find("div", class_="article_teaser_body")
print(f"{news_p.text}")
###Output
SuperCam is a rock-vaporizing instrument that will help scientists hunt for Mars fossils.
###Markdown
JPL Mars Space Images - Featured Image
###Code
executable_path = {"executable_path": "/Users/nallu/.wdm/drivers/chromedriver/win32/92.0.4515.107/chromedriver.exe"}
browser = Browser("chrome", **executable_path, headless=False)
url = "https://spaceimages-mars.com"
browser.visit(url)
html = browser.html
soup = bs(html, "html.parser")
print(soup.prettify())
#split = soup.find("div", class_="floating_text_area")
#split
img_url = soup.find("a",class_='showimg fancybox-thumbs')['href']
#img_url
featured_image_url = f"https://spaceimages-mars.com/{img_url}"
print(featured_image_url)
###Output
https://spaceimages-mars.com/image/featured/mars2.jpg
###Markdown
Mars Facts
###Code
#using pandas to scrape html
mars_facts = pd.read_html("https://galaxyfacts-mars.com")[0]
print(mars_facts)
mars_facts_df = mars_facts[[0,1]]
mars_facts_df
mars_facts_df.columns = ["Label", "Value"]
mars_facts_df = mars_facts_df.set_index("Label")
mars_facts_df
mars_facts_df.to_html()
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
executable_path = {"executable_path": "/Users/nallu/.wdm/drivers/chromedriver/win32/92.0.4515.107/chromedriver.exe"}
browser = Browser("chrome", **executable_path, headless=False)
url = "https://marshemispheres.com/"
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
print(soup.prettify())
items = soup.find_all('div', class_='item')
items
hemi_main_url = 'https://marshemispheres.com/'
hemi_img_urls = []
for item in items:
title = item.find('h3').text
image_url = item.find('a', class_='itemLink product-item')['href']
browser.visit(hemi_main_url + image_url)
image_html = browser.html
soup = bs(image_html, 'html.parser')
image_url = hemi_main_url + soup.find('img', class_='wide-image')['src']
hemi_img_urls.append({"Title" : title, "Image_URL" : image_url})
hemi_img_urls
browser.quit()
###Output
_____no_output_____
###Markdown
NASA Mars NewsScrape the Mars News Site https://redplanetscience.com/ and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
###Code
# Dependencies
from splinter import Browser
from bs4 import BeautifulSoup
import pandas as pd
# Create Splinter browser
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
###Output
_____no_output_____
###Markdown
collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
###Code
# Access and visit the NASA Mars News Site URL
news_url = 'https://redplanetscience.com/'
browser.visit(news_url)
html = browser.html
# Parse HTML with BeautifulSoup
soup = BeautifulSoup(html,'html.parser')
# Retrieve all elements that contain news title
latest_news = soup.find_all('div', class_="list_text")
# Get the latest news
news = latest_news[0]
# Use BeautifulSoup' find() method to navigate and retrieve attributes
news_title = news.find('div', class_="content_title").text
news_p = news.find('div', class_="article_teaser_body").text
# display information
print('------------')
print(news_title)
print(news_p)
###Output
------------
How NASA's Mars Helicopter Will Reach the Red Planet's Surface
The small craft will seek to prove that powered, controlled flight is possible on another planet. But just getting it onto the surface of Mars will take a whole lot of ingenuity.
###Markdown
JPL Mars Space Images - Featured ImageVisit the url for the Featured Space Image site here https://spaceimages-mars.com/.Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called featured_image_url.Make sure to find the image url to the full size .jpg image.Make sure to save a complete url string for this image.
###Code
# Access and visit the JPL Mars Space Images URL
featured_space_url = 'https://spaceimages-mars.com'
browser.visit(featured_space_url)
img_html = browser.html
# Parse HTML with BeautifulSoup
soup = BeautifulSoup(img_html, 'html.parser')
img_url_rel = soup.find('img', class_='thumbimg').get('src')
featured_img_url = f"{featured_space_url}/{img_url_rel}"
featured_img_url
###Output
_____no_output_____
###Markdown
Mars FactsVisit the Mars Facts webpage https://galaxyfacts-mars.com/ and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.Use Pandas to convert the data to a HTML table string.
###Code
# Access and visit the Mars facts webpage
mars_facts_url = 'https://galaxyfacts-mars.com/'
# Get any tabular data from the webpage
facts_tables = pd.read_html(mars_facts_url)
facts_tables
# Datatype of facts_tables
type(facts_tables)
# Slice off the dataframe that we want using normal indexing
facts_df = facts_tables[0]
facts_df.columns = ['Description', 'Mars', 'Earth']
# Preview the Dataframe
facts_df
# Convert the Dataframe to HTML
html_table = facts_df.to_html()
# Preview of the html_table
html_table
# Strip unwanted newlines to clean up the table
html_table.replace('\n', '')
###Output
_____no_output_____
###Markdown
Mars HemispheresVisit the astrogeology site here https://marshemispheres.com/ to obtain high resolution images for each of Mar's hemispheres.You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title.Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
###Code
url = 'https://marshemispheres.com/'
browser.visit(url)
# Create a list to hold the images and titles.
hemisphere_image_urls = []
# Get a list of all of the hemispheres
links = browser.find_by_css('a.product-item img')
# Next, loop through those links, click the link, find the sample anchor, return the href
for i in range(len(links)):
hemisphere = {}
# We have to find the elements on each loop to avoid a stale element exception
browser.find_by_css('a.product-item img')[i].click()
# Next, we find the Sample image anchor tag and extract the href
sample_elem = browser.links.find_by_text('Sample').first
hemisphere['img_url'] = sample_elem['href']
# Get Hemisphere title
hemisphere['title'] = browser.find_by_css('h2.title').text
# Append hemisphere object to list
hemisphere_image_urls.append(hemisphere)
# Finally, we navigate backwards
browser.back()
hemisphere_image_urls
# close the browser
browser.quit()
###Output
_____no_output_____
###Markdown
Visit the NASA mars news site
###Code
news_url = "https://mars.nasa.gov/news/"
browser.visit(news_url)
html = browser.html
soup = bs(html, "html.parser")
article = soup.find("div", class_='list_text')
news_title = article.find("div", class_="content_title")
news_p = article.find("div", class_ ="article_teaser_body")
print(news_title.text)
print(news_p.text)
###Output
The MarCO Mission Comes to an End
The pair of briefcase-sized satellites made history when they sailed past Mars in 2019.
###Markdown
JPL Space Images Featured Image
###Code
url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
image = soup.find('img', class_='thumb')['src']
featured_image_url = "https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA16225_hires.jpg"
print(featured_image_url)
###Output
https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA16225_hires.jpg
###Markdown
Mars Weather
###Code
url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(url)
time.sleep(5)
html = browser.html
weather_soup = bs(html, 'html.parser')
# First, find a tweet with the data-name `Mars Weather`
mars_weather_tweet = weather_soup.find('div', attrs={"class": "tweet", "data-name": "Mars Weather"})
#search in p tags or span tags containing the tweet text
try:
mars_weather = mars_weather_tweet.find("p", "tweet-text").get_text()
mars_weather
except AttributeError:
pattern = re.compile(r'sol')
mars_weather = weather_soup.find('span', text=pattern).text
mars_weather
mars_weather
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
hemisphere_image_urls = []
# First, get a list of all of the hemispheres
links = browser.find_by_css("a.product-item h3")
# Next, loop through those links, click the link, find the sample anchor, return the href
for i in range(len(links)):
hemisphere = {}
# We have to find the elements on each loop to avoid a stale element exception
browser.find_by_css("a.product-item h3")[i].click()
# Next, we find the Sample image anchor tag and extract the href
sample_elem = browser.find_link_by_text('Sample').first
hemisphere['img_url'] = sample_elem['href']
# Get Hemisphere title
hemisphere['title'] = browser.find_by_css("h2.title").text
# Append hemisphere object to list
hemisphere_image_urls.append(hemisphere)
# Finally, we navigate backwards
browser.back()
hemisphere_image_urls
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
import pandas as pd
df = pd.read_html('https://space-facts.com/mars/')[0]
df.columns=['description', 'value']
df.set_index('description', inplace=True)
df
df.to_html()
browser.quit()
###Output
_____no_output_____
###Markdown
Mac users
###Code
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
!which chromedriver
###Output
which: no chromedriver in (/c/Users/Hermela/anaconda3/Scripts/condabin:/c/Users/Hermela/bin:/mingw64/bin:/usr/local/bin:/usr/bin:/usr/bin:/mingw64/bin:/usr/bin:/c/Users/Hermela/bin:/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/iCLS:/c/Program Files/Intel/Intel(R) Management Engine Components/iCLS:/c/WINDOWS/system32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/WINDOWS/System32/WindowsPowerShell/v1.0:/c/WINDOWS/System32/OpenSSH:/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/DAL:/c/Program Files/Intel/Intel(R) Management Engine Components/DAL:/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/IPT:/c/Program Files/Intel/Intel(R) Management Engine Components/IPT:/cmd:/c/Program Files/MongoDB/Server/4.4/bin:/c/Users/Hermela/anaconda3:/c/Users/Hermela/anaconda3/Library/mingw-w64/bin:/c/Users/Hermela/anaconda3/Library/usr/bin:/c/Users/Hermela/anaconda3/Library/bin:/c/Users/Hermela/anaconda3/Scripts:/c/Users/Hermela/AppData/Local/Microsoft/WindowsApps:/c/Users/Hermela/AppData/Local/GitHubDesktop/bin:/c/Users/Hermela/AppData/Local/Programs/Microsoft VS Code/bin:/usr/bin/vendor_perl:/usr/bin/core_perl)
###Markdown
Windows Users
###Code
executable_path = {'executable_path':"C:\\Users\\Hermela\\Documents\\bootcamp\\web-scraping-challenge\\app\\chromedriver.exe"}
browser = Browser('chrome', **executable_path)
###Output
_____no_output_____
###Markdown
NASA Mars News Site
###Code
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
slide_element = soup.select_one("ul.item_list li.slide")
slide_element.find("div", class_="content_title")
news_title = slide_element.find("div", class_="content_title").get_text()
print(news_title)
news_para = slide_element.find('div', class_="article_teaser_body").get_text()
news_para
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image
###Code
executable_path = {'executable_path':"C:\\Users\\Hermela\\Documents\\bootcamp\\web-scraping-challenge\\app\\chromedriver.exe"}
browser = Browser('chrome', **executable_path)
# Visit URL
url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url)
full_image = browser.find_by_id("full_image")
full_image.click()
# Find the more info button and click that
browser.is_element_present_by_text('more info', wait_time=1)
more_info = browser.find_link_by_partial_text('more info')
more_info.click()
html = browser.html
img_soup = BeautifulSoup(html, 'html.parser')
img_url_rel = img_soup.select_one('figure.lede a img').get("src")
img_url_rel
# Use the base url to create an absolute url
img_url = f'https://www.jpl.nasa.gov{img_url_rel}'
img_url
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
# Visit URL
url = 'https://space-facts.com/mars/'
browser.visit(url)
tables = pd.read_html(url)
tables
###Output
_____no_output_____
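###Markdown
A short follow-up sketch (column names assumed from the other notebooks in this file): pick the Mars facts table out of the list that `read_html` returns and convert it back to an HTML string for the web page.
###Code
# the first table on the page holds the Mars facts
facts_df = tables[0]
facts_df.columns = ['Description', 'Value']
facts_df = facts_df.set_index('Description')

# bootstrap-friendly classes can be attached when exporting
facts_html = facts_df.to_html(classes='table table-striped')
print(facts_html[:200])
###Output
_____no_output_____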
###Markdown
Mars Hemispheres
###Code
# Visit URL
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
hemisphere_image_urls = []
# List of all the hemispheres
links = browser.find_by_css("a.product-item h3")
# loop through hemispheres
for item in range(len(links)):
hemisphere = {}
# Find Element on Each Loop to Avoid a Stale Element Exception
browser.find_by_css("a.product-item h3")[item].click()
# Find Sample Image Anchor Tag & Extract <href>
sample_element = browser.find_link_by_text("Sample").first
hemisphere["img_url"] = sample_element["href"]
# Get hemisphere title
hemisphere["title"] = browser.find_by_css("h2.title").text
# Append hemisphere Object to List
hemisphere_image_urls.append(hemisphere)
# Navigate backwards
browser.back()
hemisphere_image_urls
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
# Visit the NASA Mars News Site and scrape the latest News Title and Paragraph Text
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest'
browser.visit(url)
time.sleep(1)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
news_article = soup.find("div", class_='list_text')
print(news_article)
news_title = news_article.find("div", class_="content_title").text
news_p = news_article.find('div',class_="article_teaser_body").text
print(news_title)
print(news_p)
###Output
NASA Readies Perseverance Mars Rover's Earthly Twin
Did you know NASA's next Mars rover has a nearly identical sibling on Earth for testing? Even better, it's about to roll for the first time through a replica Martian landscape.
###Markdown
JPL Mars Space Images - Featured Image
###Code
#Visit the url for JPL and scrape the Featured Space Image
url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url)
time.sleep(1)
html = browser.html
# Find 'FULL IMAGE' button and have splinter click it
full_img_button = browser.links.find_by_partial_text('FULL IMAGE')
full_img_button.click()
time.sleep(1)
# Find the 'more info' button and have splinter click it
more_info_element = browser.links.find_by_partial_text('more info')
more_info_element.click()
time.sleep(1)
# Get the result html with beautiful soup
html = browser.html
soup = BeautifulSoup(html, "html.parser")
# Use beautiful soup to find full size jpeg image
figure_element = soup.find('figure', class_='lede')
full_image_url = figure_element.a['href']
print(full_image_url)
featured_image_url = f'https://www.jpl.nasa.gov{full_image_url}'
print(featured_image_url)
###Output
https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA19980_hires.jpg
###Markdown
Mars Weather
###Code
url = "https://twitter.com/marswxreport?lang=en"
html = browser.html
browser.visit(url)
soup = BeautifulSoup(html,"html.parser")
time.sleep(1)
tweet = soup.find_all('div', class_='css-1dbjc4n')[0]
#tweet.find('span',class_='css-901oao')
print(tweet.text)
###Output
Don’t miss what’s happeningPeople on Twitter are the first to know.Log inSign up216TweetSee new TweetsConversationMars Weather@MarsWxReportInSight sol 610 (2020-08-14) low -93.8ºC (-136.8ºF) high -16.8ºC (1.7ºF)
winds from the W at 6.5 m/s (14.5 mph) gusting to 16.6 m/s (37.0 mph)
pressure at 7.90 hPa5:51 PM · Aug 14, 2020·Daily Weather Report2 Retweets16 Likes
###Markdown
Mars Facts
###Code
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://space-facts.com/mars/"
browser.visit(url)
time.sleep(1)
html = browser.html
soup = BeautifulSoup(html,"html.parser")
df = pd.read_html(url)
df[0]
mars_facts_df = df[0]
mars_facts_df.columns = ['Description','Values']
mars_facts_df = mars_facts_df.set_index('Description',drop=True)
html = mars_facts_df.to_html()
print(html)
###Output
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Values</th>
</tr>
<tr>
<th>Description</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Equatorial Diameter:</th>
<td>6,792 km</td>
</tr>
<tr>
<th>Polar Diameter:</th>
<td>6,752 km</td>
</tr>
<tr>
<th>Mass:</th>
<td>6.39 × 10^23 kg (0.11 Earths)</td>
</tr>
<tr>
<th>Moons:</th>
<td>2 (Phobos & Deimos)</td>
</tr>
<tr>
<th>Orbit Distance:</th>
<td>227,943,824 km (1.38 AU)</td>
</tr>
<tr>
<th>Orbit Period:</th>
<td>687 days (1.9 years)</td>
</tr>
<tr>
<th>Surface Temperature:</th>
<td>-87 to -5 °C</td>
</tr>
<tr>
<th>First Record:</th>
<td>2nd millennium BC</td>
</tr>
<tr>
<th>Recorded By:</th>
<td>Egyptian astronomers</td>
</tr>
</tbody>
</table>
###Markdown
Mars Hemispheres
###Code
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(url)
time.sleep(1)
html = browser.html
soup = BeautifulSoup(html,"html.parser")
hemispheres = soup.find_all('h3')
#print(hemispheres)
hemisphere_dict ={}
hemisphere_image_urls = []
for hemisphere in hemispheres:
title = hemisphere.get_text()
#print(title)
browser.click_link_by_partial_text(title)
img_url = browser.links.find_by_partial_text('Sample')['href']
hemisphere_dict = {'title:': title,'url': img_url}
hemisphere_image_urls.append(hemisphere_dict)
browser.visit(url)
print(hemisphere_image_urls)
###Output
_____no_output_____
###Markdown
**Create Database in MongoDB**![title](Images/mongo.png) **Connect to Mongo DB Mars DB**
###Code
conn = 'mongodb://localhost:27017'
client = pymongo.MongoClient(conn)
# Define database and collection
db = client.mars
collection = db.items
###Output
_____no_output_____
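###Markdown
A hedged sketch of how the scraped values can land in the `items` collection defined above; the field values here are placeholders, not real scrape output.
###Code
# upsert one document so the collection always holds the latest scrape
mars_data = {
    'news_title': 'placeholder title',
    'news_p': 'placeholder paragraph',
    'featured_image_url': 'https://example.com/placeholder.jpg',
}
collection.update_one({}, {'$set': mars_data}, upsert=True)
print(collection.find_one())
###Output
_____no_output_____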
###Markdown
**Get executable_path**
###Code
!which chromedriver
###Output
/usr/local/bin/chromedriver
###Markdown
**Step 1 - Scraping** **NASA Mars News**Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
###Code
def latest_nasa_news():
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest"
browser.visit(url)
    #need a timer to ensure the page has loaded before scraping?
time.sleep(5)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
news_date = soup.find('div', class_='list_date').text
news_title = soup.find('div', class_='content_title').text
news_p = soup.find('div', class_='article_teaser_body').text
print(news_date)
print(news_title)
print(news_p)
#how to print multiple variables?
latest_nasa_news()
###Output
November 27, 2019
NASA's Briefcase-Size MarCO Satellite Picks Up Honors
The twin spacecraft, the first of their kind to fly into deep space, earn a Laureate from Aviation Week & Space Technology.
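###Markdown
To answer the "need a timer?" question in the cell above: instead of a fixed `time.sleep(5)`, splinter can poll for the element that actually matters. A sketch, assuming the news titles still live under `div.content_title`:
###Code
def latest_nasa_news_waited():
    executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
    browser = Browser('chrome', **executable_path, headless=True)
    browser.visit("https://mars.nasa.gov/news/")
    # poll the DOM for up to 10 seconds instead of sleeping unconditionally
    if browser.is_element_present_by_css('div.content_title', wait_time=10):
        soup = BeautifulSoup(browser.html, 'html.parser')
        title = soup.find('div', class_='content_title').text
    else:
        title = None
    browser.quit()
    return title
###Output
_____no_output_____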
###Markdown
**JPL Mars Space Images - Featured Image**Latest Mars image
###Code
def latest_mars_image():
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url_mars_image = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url_mars_image)
    #need a timer to ensure the page has loaded before scraping?
time.sleep(5)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
image = soup.find('img', class_='thumb')
#image output <img alt="Indus Vallis" class="thumb" src="/spaceimages/images/wallpaper/PIA23573-640x350.jpg" title="Indus Vallis"/>
#how to save image url and path to diplay in webpage?
#need to call image?
latest_mars_image()
###Output
_____no_output_____
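###Markdown
One way to answer the questions left in the cell above (a sketch; the `thumb` class name and site root are carried over from that cell): read the `src` attribute off the scraped `<img>`, join it to the site root, and return the absolute URL so it can be rendered later.
###Code
from urllib.parse import urljoin

def latest_mars_image_url():
    executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
    browser = Browser('chrome', **executable_path, headless=True)
    browser.visit("https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars")
    time.sleep(5)
    soup = BeautifulSoup(browser.html, 'html.parser')
    thumb = soup.find('img', class_='thumb')
    browser.quit()
    # e.g. src="/spaceimages/images/wallpaper/PIA23573-640x350.jpg"
    return urljoin("https://www.jpl.nasa.gov/", thumb['src']) if thumb else None
###Output
_____no_output_____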
###Markdown
**Twitter Latest Mars Weather**
###Code
def latest_mars_weather():
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url_mars_weather = "https://twitter.com/marswxreport?lang=en"
browser.visit(url_mars_weather)
    #need a timer to ensure the page has loaded before scraping?
time.sleep(5)
soup = BeautifulSoup(browser.html, 'html.parser')
latest_weather = soup.find('p', class_='TweetTextSize').text
print('Current Weather on Mars')
print(latest_weather)
#how to print multiple variables?
latest_mars_weather()
import requests
import lxml.html as lh
import pandas as pd
def mars_facts():
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url_mars_facts = "http://space-facts.com/mars/"
browser.visit(url_mars_facts)
    #need a timer to ensure the page has loaded before scraping?
time.sleep(5)
    soup = BeautifulSoup(browser.html, 'html.parser')
mars_facts_table = soup.find("table", {"class": "tablepress tablepress-id-p-mars"})
df_mars_facts = pd.read_html(str(mars_facts_table))
print(df_mars_facts)
mars_facts()
latest_weather = soup.find('td', class_='column-2')
for weather in latest_weather:
print('----------------------------------')
print(weather)
###Output
----------------------------------
6,792 km
----------------------------------
<br/>
###Markdown
**Mars Hemispheres**Visit the USGS Astrogeology site here to obtain high resolution images for each of Mar's hemispheres.You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title.Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
###Code
def mars_image():
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(url)
    #need a pause to ensure the page has loaded before scraping?
soup = BeautifulSoup(browser.html, 'html.parser')
div = soup.find('div', class_='results').findAll('div', class_='description')
print(div)
mars_image()
###Output
[<div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/cerberus_enhanced"><h3>Cerberus Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 21 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Cerberus hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. This mosaic is composed of 104 Viking Orbiter images acquired…</p></div>, <div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/schiaparelli_enhanced"><h3>Schiaparelli Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 35 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Schiaparelli hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. The images were acquired in 1980 during early northern…</p></div>, <div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/syrtis_major_enhanced"><h3>Syrtis Major Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 25 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Syrtis Major hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. This mosaic is composed of about 100 red and violet…</p></div>, <div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/valles_marineris_enhanced"><h3>Valles Marineris Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 27 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Valles Marineris hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. The distance is 2500 kilometers from the surface of…</p></div>]
###Markdown
NASA Mars News - Title and Paragraph ScrapingScrape the Mars News Site and collect the latest News Titles and Paragraph Text (under the News Title). Assign the text to variables that you can reference later
###Code
# Dependencies
import os
import pandas as pd
from splinter import Browser
from bs4 import BeautifulSoup as bs
import requests
# Set up splinter for later
executable_path = {'executable_path': '/Users/cheyennemartin/Downloads/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
# URL to obtain title and paragraphs from Mars News
url = 'https://redplanetscience.com/'
browser.visit(url)
html = browser.html
# Create a Beautiful Soup object; parse with 'html.parser'
soup = bs(html, 'html.parser')
# Print formatted version of the soup
print(soup.prettify())
# Define results of titles and use soup.find to scrape the titles from the div class = content_title
# Use .text to clean up the title names and print tresults
tresults = soup.find('div', class_='content_title').text
print(tresults)
# Find classification for the paragraph description following the title
# Use .text to clean up the paragraphs and print presults
presults = soup.find('div', class_='article_teaser_body').text
print(presults)
###Output
In time-lapse video, taken at JPL, captures the first time NASA's Mars 2020 rover carries its full weight on its legs and wheels.
###Markdown
JPL Mars Space Images - Featured Image1. Visit the url for the Featured Space Image Site2. Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called 'featured_image_url'3. Make sure to save a complete url string for this image
###Code
# Set the url to the Featured Space Image site
url = 'https://spaceimages-mars.com'
# Visit the url
browser.visit(url)
# Find the 'FULL IMAGE' text and .click() on it to open the img source
browser.links.find_by_partial_text('FULL IMAGE').click()
html = browser.html
soup = bs(html, 'html.parser')
# Print formatted version
print(soup.prettify())
# Pull the img src by using a .get() function
img_url = soup.find('img', class_="fancybox-image").get("src")
featured_image_url = url + img_url
featured_image_url
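# A more defensive join (sketch, my addition, not part of the original solution):
# urljoin() handles src values with or without a leading slash, which plain
# string concatenation does not.
from urllib.parse import urljoin
featured_image_url_safe = urljoin(url, img_url)
featured_image_url_safe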
###Output
_____no_output_____
###Markdown
Mars Facts1. Visit the Mars Facts webpage and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.2. Use Pandas to convert the data to a HTML table string
###Code
# Set the url to the Galaxy Facts - Mars site
url = "https://galaxyfacts-mars.com"
# Read the html with a Data Frame
df_facts = pd.read_html(url)
# There are only two tables; pull the second one, which is index [1]
df_facts[1]
# Create a Mars Data Frame
mars_df = df_facts[1]
mars_df
# Rename the column titles
mars_df.columns = ['Description', 'Values']
mars_df
# Set the index to be on the Description
mars_df = mars_df.set_index('Description')
mars_df
# Convert the Data Frame to an html table string
html_table = mars_df.to_html()
html_table
# Save the html table
mars_df.to_html('mars_table.html')
###Output
_____no_output_____
###Markdown
Mars Hemisphere1. Visit the astrogeology site to obtain high resolution images for each of Mar's hemispheres.2. You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.3. Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys `img_url` and `title`4. Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere
###Code
import time
from webdriver_manager.chrome import ChromeDriverManager
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://marshemispheres.com/index.html"
browser.visit(url)
# Set nextpage_urls and imgtitles to empty lists
nextpage_urls = []
imgtitles = []
base_url = 'https://marshemispheres.com/'
# Parse HTML
html = browser.html
soup = bs(html, "html.parser")
# Find all of the elements that contain hemisphere photo info
divs = soup.find_all('div', class_='description')
# Set counter to zero
counter = 0
for div in divs:
link = div.find('a')
href = link['href']
img_title = div.a.find('h3')
imgtitles.append(img_title)
next_page = base_url + href
nextpage_urls.append(next_page)
counter = counter+1
if (counter == 4):
break
# Print the empty lists to get the HTML urls and imgtitles
print(nextpage_urls)
print(imgtitles)
# Create loop for photo on the next page with an empty list
images = []
for nextpage_url in nextpage_urls:
url = nextpage_url
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
link2 = soup.find('img', class_='wide-image')
forfinal = link2['src']
full_img = base_url + forfinal
images.append(full_img)
nextpage_urls = []
images
# Create loop for hemisphere photos with empty list
hemis_photo_urls = []
cerberus = {'title':imgtitles[0], 'img_url':images[0]}
schiaparelli = {'title':imgtitles[1], 'img_url':images[1]}
syrtis = {'title':imgtitles[2], 'img_url':images[2]}
valles = {'title':imgtitles[3], 'img_url':images[3]}
hemis_photo_urls = [cerberus, schiaparelli, syrtis, valles]
print(hemis_photo_urls)
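# Equivalent construction (sketch, my addition): zip() pairs each title with its
# image url, so the code no longer depends on there being exactly four hemispheres.
hemis_photo_urls_alt = [{'title': t, 'img_url': u} for t, u in zip(imgtitles, images)]
print(hemis_photo_urls_alt)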
# Quit the browser
browser.quit()
###Output
_____no_output_____
###Markdown
Scraping
###Code
#Path to chromedriver and setting up browser
executable_path = {'executable_path': 'C:/Users/osafi/Desktop/BOOT CAMP/12 WEB SCRAPING/Web_Scrapping_Challenge_OS/Missions_to_Mars/chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
###Output
_____no_output_____
###Markdown
Nasa Mars News
###Code
#Visit Mars News Website
url = "https://mars.nasa.gov/news/"
browser.visit(url)
time.sleep(5)
html = browser.html
soup = bs(html, 'html.parser')
#Navigate and find latest news article and title
mars = soup.find('div', class_ = "list_text")
news_title = mars.find('div', class_ = "content_title").text
news_p = mars.find('div', class_ = "article_teaser_body").text
print(news_title)
print(news_p)
###Output
Sensors on Mars 2020 Spacecraft Answer Long-Distance Call From Earth
Instruments tailored to collect data during the descent of NASA's next rover through the Red Planet's atmosphere have been checked in flight.
###Markdown
JPL Mars Space Images - Featured Image
###Code
#Visit JPL url
image_url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(image_url)
time.sleep(5)
html = browser.html
soup = bs(html, 'html.parser')
#Click to access image
browser.click_link_by_id("full_image")
html = browser.html
soup = bs(html, 'html.parser')
#Extracting the partial href
time.sleep(10)
more_info = soup.find('div', class_="addthis_toolbox addthis_default_style")['addthis:url']
more_info
#Use href to get to full size image
browser.click_link_by_partial_href(more_info)
html = browser.html
soup = bs(html, 'html.parser')
#Extract image url
featured_image = soup.find('img', class_="main_image")['src']
featured_image_url = "https://www.jpl.nasa.gov" + featured_image
print(featured_image_url)
###Output
https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA19343_hires.jpg
###Markdown
Mars Weather
###Code
#Visit Mars Weather Twitter page
weather_url = "https://twitter.com/marswxreport?lang=en"
browser.visit(weather_url)
#Adding delay
time.sleep(5)
html = browser.html
soup = bs(html, 'html.parser')
#Extracting latest tweet
results = soup.find('div', class_="css-901oao r-hkyrab r-1qd0xha r-a023e6 r-16dba41 r-ad9z0x r-bcqeeo r-bnwqim r-qvutc0")
mars_weather = results.find('span').text
mars_weather
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
#Visit Mars facts page
facts_url = "https://space-facts.com/mars/"
browser.visit(facts_url)
time.sleep(5)
html = browser.html
soup = bs(html, 'html.parser')
#Extracting table
facts = pd.read_html(facts_url)
mars_facts = pd.DataFrame(facts[0])
mars_facts
#Converting table to html string
mars_facts_string = mars_facts.to_html(header = False, index = False)
print(mars_facts_string)
###Output
<table border="1" class="dataframe">
<tbody>
<tr>
<td>Equatorial Diameter:</td>
<td>6,792 km</td>
</tr>
<tr>
<td>Polar Diameter:</td>
<td>6,752 km</td>
</tr>
<tr>
<td>Mass:</td>
<td>6.39 × 10^23 kg (0.11 Earths)</td>
</tr>
<tr>
<td>Moons:</td>
<td>2 (Phobos & Deimos)</td>
</tr>
<tr>
<td>Orbit Distance:</td>
<td>227,943,824 km (1.38 AU)</td>
</tr>
<tr>
<td>Orbit Period:</td>
<td>687 days (1.9 years)</td>
</tr>
<tr>
<td>Surface Temperature:</td>
<td>-87 to -5 °C</td>
</tr>
<tr>
<td>First Record:</td>
<td>2nd millennium BC</td>
</tr>
<tr>
<td>Recorded By:</td>
<td>Egyptian astronomers</td>
</tr>
</tbody>
</table>
###Markdown
Mars Hemispheres
###Code
#Visit hemispheres page
hemi_url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(hemi_url)
time.sleep(5)
html = browser.html
soup = bs(html, 'html.parser')
#Extracting all images
hemisphere_image_urls = []
results = soup.find('div', class_ = 'result-list')
hemi_pics = results.find_all('div', class_ = 'item')
hemi_pics
#Looping through each one to find all image urls and appending to a list
for i in hemi_pics:
title = i.find('h3').text
title = title.replace("Enhanced", "")
href = i.find('a')['href']
image_url = "https://astrogeology.usgs.gov/" + href
browser.visit(image_url)
time.sleep(5)
html = browser.html
soup = bs(html, 'html.parser')
full_size = soup.find('div', class_ = 'downloads')
img_url = full_size.find('a')['href']
hemisphere_image_urls.append({'title': title, 'img_url': img_url})
hemisphere_image_urls
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
!which chromedriver
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
news_title = soup.find('div', class_='content_title').text
news_p = soup.find('div', class_='article_teaser_body').text
date = soup.find('div', class_='list_date').text
print("------------------------------------------------------------------------------------")
print("Title: ",news_title)
print("Paragraph:",news_p)
print("Date posted:",date)
# Retrieve the parent divs for all latest news
results = soup.find_all('div', class_='list_text')
# loop over results to get list
for result in results:
# scrape the news title
news_title = result.find('div', class_='content_title').text
# scrape the news paragraph
news_p = result.find('div', class_='article_teaser_body').text
# scrape article posted date
date = result.find('div', class_='list_date').text
# print article data
print("------------------------------------------------------------------------------------")
print("Title: ",news_title)
print("Paragraph:",news_p)
print("Date posted:",date)
# Dictionary to be inserted into MongoDB
post = {
'title': news_title,
'paragraph': news_p,
'date': date
}
# Insert dictionary into MongoDB as a document
collection.insert_one(post)
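# Note: 'collection' is assumed to have been created earlier in the notebook, e.g.
# with something like pymongo.MongoClient('mongodb://localhost:27017').mars_db.news
# (the database and collection names here are placeholders, not from the original).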
###Output
------------------------------------------------------------------------------------
Title: NASA's MAVEN Observes Martian Night Sky Pulsing in Ultraviolet Light
Paragraph: Vast areas of the Martian night sky pulse in ultraviolet light, according to images from NASA’s MAVEN spacecraft. The results are being used to illuminate complex circulation patterns in the Martian atmosphere.
Date posted: August 6, 2020
------------------------------------------------------------------------------------
Title: 8 Martian Postcards to Celebrate Curiosity's Landing Anniversary
Paragraph: The NASA rover touched down eight years ago, on Aug. 5, 2012, and will soon be joined by a second rover, Perseverance.
Date posted: August 3, 2020
------------------------------------------------------------------------------------
Title: NASA, ULA Launch Mars 2020 Perseverance Rover Mission to Red Planet
Paragraph: The agency's Mars 2020 mission is on its way. It will land at Jezero Crater in about seven months, on Feb. 18, 2021.
Date posted: July 30, 2020
------------------------------------------------------------------------------------
Title: NASA's Perseverance Rover Will Carry First Spacesuit Materials to Mars
Paragraph: In a Q&A, spacesuit designer Amy Ross explains how five samples, including a piece of helmet visor, will be tested aboard the rover, which is targeting a July 30 launch.
Date posted: July 28, 2020
------------------------------------------------------------------------------------
Title: A New Video Captures the Science of NASA's Perseverance Mars Rover
Paragraph: With a targeted launch date of July 30, the next robotic scientist NASA is sending to the to the Red Planet has big ambitions.
Date posted: July 27, 2020
------------------------------------------------------------------------------------
Title: NASA Invites Public to Share Excitement of Mars 2020 Perseverance Rover Launch
Paragraph: There are lots of ways to participate in the historic event, which is targeted for July 30.
Date posted: July 23, 2020
------------------------------------------------------------------------------------
Title: NASA's Mars Perseverance Rover Passes Flight Readiness Review
Paragraph: The agency's Mars 2020 mission has one more big prelaunch review – the Launch Readiness Review, on July 27.
Date posted: July 22, 2020
------------------------------------------------------------------------------------
Title: NASA to Broadcast Mars 2020 Perseverance Launch, Prelaunch Activities
Paragraph: Starting July 27, news activities will cover everything from mission engineering and science to returning samples from Mars to, of course, the launch itself.
Date posted: July 17, 2020
------------------------------------------------------------------------------------
Title: 6 Things to Know About NASA's Ingenuity Mars Helicopter
Paragraph: The first helicopter attempting to fly on another planet is a marvel of engineering. Get up to speed with these key facts about its plans.
Date posted: July 14, 2020
------------------------------------------------------------------------------------
Title: Join NASA for the Launch of the Mars 2020 Perseverance Rover
Paragraph: No matter where you live, choose from a menu of activities to join NASA as we "Countdown to Mars" and launch the Perseverance rover to the Red Planet.
Date posted: July 10, 2020
------------------------------------------------------------------------------------
Title: NASA's Perseverance Rover Attached to Atlas V Rocket
Paragraph: Ready for its launch later in the month, the Mars-bound rover will touch terra firma no more.
Date posted: July 9, 2020
------------------------------------------------------------------------------------
Title: 7 Things to Know About the Mars 2020 Perseverance Rover Mission
Paragraph: NASA's next rover to the Red Planet is slated to launch no earlier than July 30. These highlights will get you up to speed on the ambitious mission.
Date posted: July 8, 2020
------------------------------------------------------------------------------------
Title: NASA's InSight Flexes Its Arm While Its 'Mole' Hits Pause
Paragraph: Now that the lander's robotic arm has helped the mole get underground, it will resume science activities that have been on hold.
Date posted: July 7, 2020
------------------------------------------------------------------------------------
Title: Curiosity Mars Rover's Summer Road Trip Has Begun
Paragraph: After more than a year in the "clay-bearing unit," Curiosity is making a mile-long journey around some deep sand so that it can explore higher up Mount Sharp.
Date posted: July 6, 2020
------------------------------------------------------------------------------------
Title: How NASA's Mars Helicopter Will Reach the Red Planet's Surface
Paragraph: The small craft will seek to prove that powered, controlled flight is possible on another planet. But just getting it onto the surface of Mars will take a whole lot of ingenuity.
Date posted: June 23, 2020
------------------------------------------------------------------------------------
Title: The Launch Is Approaching for NASA's Next Mars Rover, Perseverance
Paragraph: The Red Planet's surface has been visited by eight NASA spacecraft. The ninth will be the first that includes a roundtrip ticket in its flight plan.
Date posted: June 17, 2020
------------------------------------------------------------------------------------
Title: NASA to Hold Mars 2020 Perseverance Rover Launch Briefing
Paragraph: Learn more about the agency's next Red Planet mission during a live event on June 17.
Date posted: June 15, 2020
------------------------------------------------------------------------------------
Title: While Stargazing on Mars, NASA's Curiosity Rover Spots Earth and Venus
Paragraph: This new portrait of the Red Planet's neighbors was taken during a time when there's more dust in the air on Mars.
Date posted: June 15, 2020
------------------------------------------------------------------------------------
Title: NASA's Mars Rover Drivers Need Your Help
Paragraph: Using an online tool to label Martian terrain types, you can train an artificial intelligence algorithm that could improve the way engineers guide the Curiosity rover.
Date posted: June 12, 2020
------------------------------------------------------------------------------------
Title: Three New Views of Mars' Moon Phobos
Paragraph: Taken with the infrared camera aboard NASA's Odyssey orbiter, they reveal temperature variations on the small moon as it drifts into and out of Mars’ shadow.
Date posted: June 8, 2020
------------------------------------------------------------------------------------
Title: The Extraordinary Sample-Gathering System of NASA's Perseverance Mars Rover
Paragraph: Two astronauts collected Moon rocks on Apollo 11. It will take three robotic systems working together to gather up the first Mars rock samples for return to Earth.
Date posted: June 2, 2020
------------------------------------------------------------------------------------
Title: The Detective Aboard NASA's Perseverance Rover
Paragraph: An instrument called SHERLOC will, with the help of its partner WATSON, hunt for signs of ancient life by detecting organic molecules and minerals.
Date posted: May 26, 2020
------------------------------------------------------------------------------------
Title: MAVEN Maps Electric Currents around Mars that are Fundamental to Atmospheric Loss
Paragraph: Five years after NASA’s MAVEN spacecraft entered into orbit around Mars, data from the mission has led to the creation of a map of electric current systems in the Martian atmosphere.
Date posted: May 25, 2020
------------------------------------------------------------------------------------
Title: Air Deliveries Bring NASA's Perseverance Mars Rover Closer to Launch
Paragraph: A NASA Wallops Flight Facility cargo plane transported more than two tons of equipment — including the rover's sample collection tubes — to Florida for this summer's liftoff.
Date posted: May 21, 2020
------------------------------------------------------------------------------------
Title: NASA Wins 4 Webbys, 4 People's Voice Awards
Paragraph: Winners include the JPL-managed "Send Your Name to Mars" campaign, NASA's Global Climate Change website and Solar System Interactive.
Date posted: May 19, 2020
------------------------------------------------------------------------------------
Title: NASA's Perseverance Rover Goes Through Trials by Fire, Ice, Light and Sound
Paragraph: The agency's new Mars rover is put through a series of tests in vacuum chambers, acoustic chambers and more to get ready for the Red Planet.
Date posted: May 18, 2020
------------------------------------------------------------------------------------
Title: NASA's Perseverance Rover Mission Getting in Shape for Launch
Paragraph: Stacking spacecraft components on top of each other is one of the final assembly steps before a mission launches to the Red Planet.
Date posted: May 7, 2020
------------------------------------------------------------------------------------
Title: NASA Perseverance Mars Rover Scientists Train in the Nevada Desert
Paragraph: Team members searched for signs of ancient microscopic life there, just as NASA's latest rover will on the Red Planet next year.
Date posted: May 6, 2020
------------------------------------------------------------------------------------
Title: NASA's Perseverance Rover Will Look at Mars Through These 'Eyes'
Paragraph: A pair of zoomable cameras will help scientists and rover drivers with high-resolution color images.
Date posted: May 1, 2020
------------------------------------------------------------------------------------
Title: Meet the People Behind NASA's Perseverance Rover
Paragraph: These are the scientists and engineers who built NASA's next Mars rover and who will guide it to a safe landing in Jezero Crater.
Date posted: April 30, 2020
------------------------------------------------------------------------------------
Title: Q&A with the Student Who Named Ingenuity, NASA's Mars Helicopter
Paragraph: As a longtime fan of space exploration, Vaneeza Rupani appreciates the creativity and collaboration involved with trying to fly on another planet.
Date posted: April 29, 2020
------------------------------------------------------------------------------------
Title: Alabama High School Student Names NASA's Mars Helicopter
Paragraph: Vaneeza Rupani's essay was chosen as the name for the small spacecraft, which will mark NASA's first attempt at powered flight on another planet.
Date posted: April 29, 2020
------------------------------------------------------------------------------------
Title: How NASA's Perseverance Mars Team Adjusted to Work in the Time of Coronavirus
Paragraph: Like much of the rest of the world, the Mars rover team is pushing forward with its mission-critical work while putting the health and safety of their colleagues and community first.
Date posted: April 21, 2020
------------------------------------------------------------------------------------
Title: NASA's Perseverance Mars Rover Gets Balanced
Paragraph: The mission team performed a crucial weight-balancing test on the rover in preparation for this summer's history-making launch to the Red Planet.
Date posted: April 20, 2020
------------------------------------------------------------------------------------
Title: NASA's Curiosity Keeps Rolling As Team Operates Rover From Home
Paragraph: The team has learned to meet new challenges as they work remotely on the Mars mission.
Date posted: April 13, 2020
------------------------------------------------------------------------------------
Title: Mars Helicopter Attached to NASA's Perseverance Rover
Paragraph: The team also fueled the rover's sky crane to get ready for this summer's history-making launch.
Date posted: April 10, 2020
------------------------------------------------------------------------------------
Title: NASA's Perseverance Mars Rover Gets Its Wheels and Air Brakes
Paragraph: After the rover was shipped from JPL to Kennedy Space Center, the team is getting closer to finalizing the spacecraft for launch later this summer.
Date posted: April 3, 2020
------------------------------------------------------------------------------------
Title: The Man Who Wanted to Fly on Mars
Paragraph: The Mars Helicopter is riding to the Red Planet this summer with NASA's Perseverance rover. The helicopter's chief engineer, Bob Balaram, shares the saga of how it came into being.
Date posted: April 1, 2020
------------------------------------------------------------------------------------
Title: 10.9 Million Names Now Aboard NASA's Perseverance Mars Rover
Paragraph: As part of NASA's 'Send Your Name to Mars' campaign, they've been stenciled onto three microchips along with essays from NASA's 'Name the Rover' contest. Next stop: Mars.
Date posted: March 26, 2020
------------------------------------------------------------------------------------
Title: NASA's Curiosity Mars Rover Takes a New Selfie Before Record Climb
Paragraph: Along with capturing an image before its steepest ascent ever, the robotic explorer filmed its "selfie stick," or robotic arm, in action.
Date posted: March 20, 2020
###Markdown
JPL Mars Space Images - Featured Image
###Code
url = 'https://www.jpl.nasa.gov/spaceimages/'
browser.visit(url)
url = 'https://www.jpl.nasa.gov/spaceimages/'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
base_url = 'https://www.jpl.nasa.gov'
image_url = soup.find("a", class_ = "button fancybox")["data-fancybox-href"]
featured_image_url = base_url + image_url
print(featured_image_url)
###Output
https://www.jpl.nasa.gov/spaceimages/images/mediumsize/PIA01320_ip.jpg
###Markdown
Mars Weather
###Code
mars_weather_url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(mars_weather_url)
time.sleep(3)
weather_html = browser.html
weather_soup = bs(weather_html, "html.parser")
# print(weather_soup.prettify())
mars_tweets = [weather_soup.find_all('p', class_="TweetTextSize"), weather_soup.find_all('span', class_="css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0")]
for tweets in mars_tweets:
mars_tweet = tweets
for tweet in mars_tweet:
if 'InSight' in tweet.text:
mars_weather = tweet.text
if tweet.a in tweet:
mars_weather = mars_weather.strip(tweet.a.text)
break
print(mars_weather)
###Output
InSight sol 603 (2020-08-07) low -91.3ºC (-132.4ºF) high -12.2ºC (10.0ºF)
winds from the W at 6.6 m/s (14.8 mph) gusting to 17.2 m/s (38.4 mph)
pressure at 7.90 hPa
###Markdown
Mars Facts
###Code
url = 'https://space-facts.com/mars/'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
tables = pd.read_html(url)
tables
facts_df = tables[0]
facts_df.columns = ['Fact', 'Value']
facts_df['Fact'] = facts_df['Fact'].str.replace(':', '')
facts_df
facts_df = tables[0]
facts_df.columns = ['Fact', 'Value']
facts_df['Fact'] = facts_df['Fact'].str.replace(':', '')
facts_df
facts_html = facts_df.to_html()
print(facts_html)
###Output
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Fact</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Equatorial Diameter</td>
<td>6,792 km</td>
</tr>
<tr>
<th>1</th>
<td>Polar Diameter</td>
<td>6,752 km</td>
</tr>
<tr>
<th>2</th>
<td>Mass</td>
<td>6.39 × 10^23 kg (0.11 Earths)</td>
</tr>
<tr>
<th>3</th>
<td>Moons</td>
<td>2 (Phobos & Deimos)</td>
</tr>
<tr>
<th>4</th>
<td>Orbit Distance</td>
<td>227,943,824 km (1.38 AU)</td>
</tr>
<tr>
<th>5</th>
<td>Orbit Period</td>
<td>687 days (1.9 years)</td>
</tr>
<tr>
<th>6</th>
<td>Surface Temperature</td>
<td>-87 to -5 °C</td>
</tr>
<tr>
<th>7</th>
<td>First Record</td>
<td>2nd millennium BC</td>
</tr>
<tr>
<th>8</th>
<td>Recorded By</td>
<td>Egyptian astronomers</td>
</tr>
</tbody>
</table>
###Markdown
Mars Hemispheres
###Code
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
results = soup.find_all('div', class_="description")
base_url = 'https://astrogeology.usgs.gov/'
sites = []
for result in results:
link = result.find('a', class_ = "itemLink product-item")
link_text = link['href']
full_url = base_url + link_text
sites.append(full_url)
sites
hemispheres = []
for site in sites:
browser.visit(site)
html = browser.html
soup = bs(html, 'html.parser')
title = soup.find('h2', class_ = "title").text.strip()
url = soup.find_all('a', target = "_blank", href = True)[0]['href']
hemispheres.append({"title": title, "img_url": url})
hemispheres
###Output
_____no_output_____
###Markdown
Scraping Set-up
###Code
#dependencies
import pandas as pd
from bs4 import BeautifulSoup as bs
import splinter
from splinter import Browser
from splinter.exceptions import ElementDoesNotExist
import requests
# https://splinter.readthedocs.io/en/latest/drivers/chrome.html
!which chromedriver
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
#url variable for target site
url_news = 'https://mars.nasa.gov/news'
#retrieve page with requests
response_news = requests.get(url_news)
#create beautiful soup object
soup_news = bs(response_news.text, 'html.parser')
#examine results
print(soup_news.prettify())
#extract title text
news_title = soup_news.title.text
print(news_title)
#extract paragraph text
news_paragraphs = soup_news.find('p')
print(news_paragraphs.text)
###Output
Managed by the Mars Exploration Program and the Jet Propulsion Laboratory for NASA’s Science Mission Directorate
###Markdown
JPL Mars Space Images
###Code
#url variable for target site
url_jpl = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url_jpl)
html_jpl = browser.html
soup_jpl = bs(html_jpl, 'html.parser')
image_jpl = soup_jpl.find("a", class_='button fancybox')['data-fancybox-href']
featured_image_url = f'https://jpl.nasa.gov{image_jpl}'
print(featured_image_url)
###Output
https://jpl.nasa.gov/spaceimages/images/mediumsize/PIA14627_ip.jpg
###Markdown
Mars Weather
###Code
#url variable for target site
url_weather = 'https://twitter.com/marswxreport?lang=en'
browser.visit(url_weather)
html_weather = browser.html
soup_weather = bs(html_weather, 'html.parser')
mars_weather = soup_weather.find("div", class_='css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0')
mars_weather_text = f'{mars_weather}'
print(mars_weather_text)
###Output
None
###Markdown
Mars Facts
###Code
#url variable for target site
url_facts = 'https://space-facts.com/mars/'
#ingest tables from target site
tables = pd.read_html(url_facts)
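# (pd.read_html() returns a list of DataFrames, one per <table> element found on the page.)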
#restrict to table of interest (planet facts)
tables[0]
#output table of interest as html
print(tables[0].to_html())
###Output
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Equatorial Diameter:</td>
<td>6,792 km</td>
</tr>
<tr>
<th>1</th>
<td>Polar Diameter:</td>
<td>6,752 km</td>
</tr>
<tr>
<th>2</th>
<td>Mass:</td>
<td>6.39 × 10^23 kg (0.11 Earths)</td>
</tr>
<tr>
<th>3</th>
<td>Moons:</td>
<td>2 (Phobos & Deimos)</td>
</tr>
<tr>
<th>4</th>
<td>Orbit Distance:</td>
<td>227,943,824 km (1.38 AU)</td>
</tr>
<tr>
<th>5</th>
<td>Orbit Period:</td>
<td>687 days (1.9 years)</td>
</tr>
<tr>
<th>6</th>
<td>Surface Temperature:</td>
<td>-87 to -5 °C</td>
</tr>
<tr>
<th>7</th>
<td>First Record:</td>
<td>2nd millennium BC</td>
</tr>
<tr>
<th>8</th>
<td>Recorded By:</td>
<td>Egyptian astronomers</td>
</tr>
</tbody>
</table>
###Markdown
Mars Hemispheres
###Code
url_hemi = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
###Output
_____no_output_____
###Markdown
Mission to Mars: A Web Scraping Challenge I. NASA Mars News (latest article)
###Code
# Step 1 - BeautifulSoup/Splinter Scraping
# NASA Mars News
# Importing dependencies
from bs4 import BeautifulSoup
import requests
import pandas as pd
import pymongo
from splinter import Browser
from bs4 import BeautifulSoup as soup
from webdriver_manager.chrome import ChromeDriverManager
# Path for Splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# Establishing URL to pull from (NASA Mars News site)
url = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest"
# Using splinter to visit the URL
browser.visit(url)
html = browser.html
# Create BeautifulSoup oject holding html from site
new_soup = soup(html, 'html.parser')
slide_element = new_soup.select_one("ul.item_list li.slide")
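# (select_one() takes a CSS selector: it returns the first <li class="slide"> inside
# <ul class="item_list">, i.e. the most recently published article.)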
news_title = slide_element.find('div', class_="content_title").get_text()
print(news_title)
new_paragraph = slide_element.find('div', class_="article_teaser_body").get_text()
print(new_paragraph)
###Output
[WDM] - ====== WebDriver manager ======
###Markdown
II. JPL Mars Space Images
###Code
# Splinter: JPL Mars Space Images - Featured Image
from splinter import Browser
from bs4 import BeautifulSoup
from webdriver_manager.chrome import ChromeDriverManager
# Path for splinter browser via ChromeDriverManager. Headless is False to be able to see what the browser window is doing
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# URL to the JPL Mars Space Images. We want today's featured image
base_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/'
url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html'
browser.visit(url)
# Get html from the browser
html = browser.html
# Parse html with Beautiful Soup
soup = BeautifulSoup(html, 'lxml')
# Retrieve all elements that contain header information
images = soup.find_all('div', class_="header")
# First header contains image we want
full_header = images[0]
images[0]
# Finding the hyper-text reference for the featured image within the header
full_header.find('a', class_= "showimg fancybox-thumbs")['href']
# Combining base url with the href for the featured image
featured_image_url = base_url + str(images[0].find('a', class_="showimg fancybox-thumbs")['href'])
print(featured_image_url)
###Output
https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/image/featured/mars3.jpg
###Markdown
III. Mars Facts
###Code
# Pandas Scraping: Mars Facts
# space-facts.com
import pandas as pd
import requests
#
url = 'https://space-facts.com/mars/'
tables = pd.read_html(requests.get(url).text)
# Select for first html table (equatorial diameter, polar diameter, mass, moons, orbit distance, orbit period, surface temperature, first record, recorded by)
mars_facts_df = tables[0]
# Renaming columns
mars_facts_df = mars_facts_df.rename(columns = {0: "Property", 1: "Observation"})
mars_facts_df
# Converting this Pandas dataframe into an html table string
html_mars_facts = mars_facts_df.to_html()
# html_mars_facts
###Output
_____no_output_____
###Markdown
IV. Astrogeology USGS: Mars Hemisphere Images
###Code
# Mars Hemisphere Images
# Scraping with splinter
from splinter import Browser
from bs4 import BeautifulSoup
from webdriver_manager.chrome import ChromeDriverManager
# Path and browser via ChromeDriverManager
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# URL to astrogeology.usgs
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
# Establish html from which to scrape hemispheres and associated images
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
url = 'https://astrogeology.usgs.gov'
hemisphere_titles = []
hemisphere_page_hrefs = []
hemispheres = soup.find_all('div', class_='description')
for hemisphere in hemispheres:
h3 = hemisphere.find('h3').text
link = hemisphere.find('a')
href = url + link['href']
hemisphere_titles.append(h3)
hemisphere_page_hrefs.append(href)
print(hemisphere_titles)
print(hemisphere_page_hrefs)
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# Empty list to hold image links
image_hrefs = []
# Iterating through hemisphere page links to extract full-size image href
for page in hemisphere_page_hrefs:
# Establishing new URl to visit is the specific page's URL
url = page
browser.visit(url)
# HTML object
html = browser.html
# Parse HTML with Beautiful Soup
soup = BeautifulSoup(html, 'html.parser')
# Retrieve elements that contain image links
image_content = soup.find('div', class_='downloads')
# Using Beautiful Soup's find() method to navigate and retrieve attributes
ul = image_content.find('ul')
li = ul.find('li')
link = li.find('a')
href = link['href']
# Appending the href to the full-size image to the list
image_hrefs.append(href)
print(image_hrefs)
# Storing the full-size image URLs with the hemisphere titles in a list of dictionaries
# Empty list to hold 1 dictionary per hemisphere
hemisphere_image_urls = []
# Iterating through
for i in range(4):
# We want a dictionary containing the hemisphere's name and the link to its image
hemisphere_dict = {
'title': hemisphere_titles[i],
'img_url': image_hrefs[i]
}
# Appending these dictionaries (1 per hemisphere) to a list
hemisphere_image_urls.append(hemisphere_dict)
# Print 'em out to check!
print(hemisphere_image_urls)
# print(len(hemisphere_image_urls))
browser.quit()
###Output
_____no_output_____
###Markdown
Step 1 - Scraping NASA Mars News
###Code
# # https://splinter.readthedocs.io/en/latest/drivers/chrome.html
!which chromedriver
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
nasa_url = 'https://mars.nasa.gov/news/'
browser.visit(nasa_url)
html = browser.html
bsoup = bs(html,"lxml")
news_title = bsoup.find('div', class_='content_title').text
print(news_title)
news_p=bsoup.find('div', class_='article_teaser_body').text
print(news_p)
###Output
Media Get a Close-Up of NASA's Mars 2020 Rover
The clean room at NASA's Jet Propulsion Laboratory was open to the media to see NASA's next Mars explorer before it leaves for Florida in preparation for a summertime launch.
###Markdown
JPL Mars Space Images - Featured Image
###Code
jpl_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(jpl_url)
html = browser.html
soup = bs(html, 'lxml')
image_name = soup.find('article', class_='carousel_item')['alt']
print(image_name)
base_url = 'https://www.jpl.nasa.gov'
img_url = soup.find(attrs={'data-title':image_name})["data-fancybox-href"]
combo_url = base_url + img_url
print(combo_url)
###Output
https://www.jpl.nasa.gov/spaceimages/images/mediumsize/PIA16469_ip.jpg
###Markdown
Mars Weather
###Code
weather_url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(weather_url)
html = browser.html
bsoup = bs(html, 'lxml')
mars_weather= bsoup.find('p', class_='css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0')
print(mars_weather)
###Output
None
###Markdown
Mars Facts
###Code
facts_url = 'https://space-facts.com/mars/'
mars_facts_tables = pd.read_html(facts_url)
mars_facts_tables
mars_df = mars_facts_tables[1]
mars_df
mars_df = mars_df.drop(columns=['Earth'])
mars_df
mars_df = mars_df.rename(columns=
{"Mars - Earth Comparison": "Measure"})
mars_df
html_table = mars_df.to_html(header=None,index=False)
html_table = html_table.replace('\n', '')
html_table
mars_df.to_html('mars_table.html')
!open mars_table.html
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
hemisphere_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(hemisphere_url)
html = browser.html
bsoup = bs(html, 'lxml')
hemisphere_urls = []
#include all 4 hemispheres
xpath = '//*[@id="product-section"]/div[2]/div/div/a'
hemisphere_anchors = browser.find_by_xpath(xpath)
# Loop through results
for anchor in hemisphere_anchors:
try:
hemisphere_title = anchor.find_by_tag('h3').text
hemisphere_href = anchor['href']
#request the next page using the href
hemisphere_page = requests.get(hemisphere_href).text
bsoup = bs(hemisphere_page, 'lxml')
anchor_tag_page2 = bsoup.select('#wide-image > div > ul > li:nth-child(1) > a')
hemisphere_url = anchor_tag_page2[0]['href']
img_dict = { "image title": hemisphere_title, "image url": hemisphere_url }
hemisphere_urls.append(img_dict)
except Exception as e:
print(e)
print("This is an exception being thrown")
hemisphere_urls
###Output
_____no_output_____
###Markdown
NASA Mars News
###Code
# define URL we will be looking at
url = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest"
# visit selected URL
browser.visit(url)
# connect to html code
html = browser.html
# Create BeautifulSoup object to parse code from the webpage
soup = BeautifulSoup(html, 'html.parser')
# prettify BeautifulSoup page
print(soup.prettify())
# capture title of the latest article
mars_title = soup.find('div', class_='list_text').find('div', class_='content_title').text.strip()
mars_title
# capture title of the latest article
mars_p = soup.find('div', class_='list_text').find('div', class_='article_teaser_body').get_text()
mars_p
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image
###Code
# define URL we will be looking at
url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
# visit selected URL
browser.visit(url)
# connect to html code
html = browser.html
# Create BeautifulSoup object to parse code from the webpage
soup = BeautifulSoup(html, 'html.parser')
# prettify BeautifulSoup page
print(soup.prettify())
# find location of large size link and save as an item
ft_img = soup.find('li', class_='slide').find('a', class_='fancybox')
ft_img = ft_img['data-fancybox-href']
ft_img
# set base url
base_url = 'https://www.jpl.nasa.gov'
# combine base url and ft_img url to create link to the featured image
ft_img_link = base_url + ft_img
ft_img_link
###Output
_____no_output_____
###Markdown
NASA Mars News- Scrape the NASA Mars News Site (https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest) and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
###Code
browser.visit('https://mars.nasa.gov/news/')
title = browser.find_by_css('div.content_title a').text
paragraph = browser.find_by_css('div.article_teaser_body').text
title, paragraph
###Output
_____no_output_____
###Markdown
JPL Mars Space Images - Featured Image¶- Visit the url for JPL Featured Space Image https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html.- Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called featured_image_url.- Make sure to find the image url to the full size .jpg image.- Make sure to save a complete url string for this image.
###Code
browser.visit('https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html')
browser.links.find_by_partial_text('FULL IMAGE').click()
browser.find_by_css('img.fancybox-image')['src']
###Output
_____no_output_____
###Markdown
Mars Facts- Visit the Mars Facts webpage (https://space-facts.com/mars/) and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.- Use Pandas to convert the data to a HTML table string.
###Code
pd.read_html('https://space-facts.com/mars/')[0].to_html(classes='table table-striped')
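# (classes= adds those CSS class names to the generated <table> tag, so Bootstrap
# styling can be applied when the string is rendered in a web page.)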
###Output
_____no_output_____
###Markdown
Mars Hemispheres- Visit the USGS Astrogeology site (https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars) to obtain high resolution images for each of Mar's hemispheres.- You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.- Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title.- Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
###Code
browser.visit('https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars')
links = browser.find_by_css('a.itemLink h3')
hemispheres = []
for i in range(len(links)):
    hemisphere = {}
    hemisphere['title'] = browser.find_by_css('a.itemLink h3')[i].text
    browser.find_by_css('a.itemLink h3')[i].click()
    hemisphere['img_url'] = browser.find_by_text('Sample')['href']
    hemispheres.append(hemisphere)
    browser.back()
browser.quit()
hemispheres
###Output
_____no_output_____
###Markdown
NASA MARS NEWS
###Code
#import dependencies
from bs4 import BeautifulSoup as bs
from splinter import Browser
import pandas as pd
import time
import os
import requests
from selenium.webdriver.chrome.options import Options
from splinter.exceptions import ElementDoesNotExist
#pointing to the directory where chromedriver exists
executable_path = {"executable_path": "users/anali/bin/chromedriver"}
browser = Browser("chrome", **executable_path, headless = False)
#visiting the page
#url = "https://mars.nasa.gov/news/"
url = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest"
# launch browser
browser.visit(url)
#check if the page has been loaded
browser.is_element_present_by_css('div.list_date', wait_time=10)
#create HTML object
html = browser.html
#parse HTML with beautiful object
soup = bs(html,"html.parser")
#extract title and paragraph
news_date = soup.find('div', class_='list_date').text
news_title = soup.find('div', class_='content_title').text
news_p = soup.find('div', class_='article_teaser_body').text
print(f"Date: {news_date}")
print(f"Title: {news_title}")
print(f"Para: {news_p}")
###Output
Date: August 14, 2020
Title: Mars Now
Para: The board will assist with analysis of current plans and goals for one of the most difficult missions humanity has ever undertaken.
###Markdown
-------------------------------------------- JPL Mars Space Images - Featured Image --------------------------------------------
###Code
url_image = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url_image)
#print(soup.prettify())
#Getting the base url
from urllib.parse import urlsplit
base_url = "{0.scheme}://{0.netloc}/".format(urlsplit(url_image))
print(base_url)
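# (urlsplit() breaks the URL into scheme, netloc, path, query and fragment;
# the format string above keeps just scheme://netloc/.)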
#Design an xpath selector to grab the image
xpath = "//*[@id=\"page\"]/section[3]/div/ul/li[1]/a/div/div[2]/img"
print(xpath)
#Use splinter to click on the mars featured image
#to bring the full resolution image
results = browser.find_by_xpath(xpath)
img = results[0]
img.click()
print(results)
#get image url using BeautifulSoup
html_image = browser.html
soup = bs(html_image, "html.parser")
img_url = soup.find("img", class_="fancybox-image")
#print(img_url["src"])
#test = base_url.append(img_url)
print(str(base_url) + "\n" + str(img_url))
#img_url = soup.find("img", class_="fancybox-image")["src"]
#print(soup.prettify())
featured_image_url = base_url + img_url["src"]
print(featured_image_url)
#RESULT: featured_image_url = https://www.jpl.nasa.gov//spaceimages/images/largesize/PIA24053_hires.jpg
###Output
https://www.jpl.nasa.gov//n<img class="fancybox-image" src="/spaceimages/images/largesize/PIA24059_hires.jpg" style="display: inline;"/>
###Markdown
Mars Weather
###Code
# twitter url to visit
url = 'https://twitter.com/marswxreport?lang=en'
# launch browser
browser.visit(url)
# create beautifulsoup object
html = browser.html
soup = bs(html, "html.parser")
#print(soup.prettify())
#find tweet and extract text
mars_weather = soup.find_all('span')
for i in range(len(mars_weather)):
if ("InSight" in mars_weather[i].text):
weather = mars_weather[i].text
break
#Visit the Mars Weather twitter account
url='https://twitter.com/marswxreport?lang=en'
browser.visit(url)
html = browser.html
soup = bs(html, "html.parser")
time.sleep(1)
#mars_weather=soup.findall('head', 'title="([^"]*?)"')
mars_weather = soup.find('p', class_='TweetTextSize TweetTextSize--normal js-tweet-text tweet-text').text
print(mars_weather)
#mars_weather = soup.find('head', 'title')[1].a.text
# #### Mars Weather
# Use splinter to scrape the latest Mars weather tweet from the Mars Weather twitter account (https://twitter.com/marswxreport?lang=en)
# URL of page to be scraped
url = 'https://twitter.com/marswxreport?lang=en'
#Visit the page using the browser
browser.visit(url)
# assign html content
html = browser.html
# Create a Beautiful Soup object
soup = bs(html, "html")
#scrap latest Mars weather tweet
mars_weather = soup.find('p', class_='TweetTextSize TweetTextSize--normal js-tweet-text tweet-text').text
# Put info into a dictionary
mars_data = {}
mars_data['mars_weather'] = mars_weather
#Visit the Mars Weather twitter account
url='https://twitter.com/marswxreport?lang=en'
browser.visit(url)
#mars_weather=soup.findall('head', 'title="([^"]*?)"')
mars_weather = soup.select("html head title")[0].get_text()
print(mars_weather)
#mars_weather = soup.find('head', 'title')[1].a.text
#<span class="css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0">
#InSight sol 597 (2020-08-01) low -91.0ºC (-131.8ºF) high -16.9ºC (1.6ºF)
#winds from the WNW at 8.0 m/s (17.9 mph) gusting to 20.2 m/s (45.1 mph)
#pressure at 7.90 hPa
#</span>
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
url_facts = "https://space-facts.com/mars/"
table = pd.read_html(url_facts)
table[0]
df_mars_facts = table[0]
df_mars_facts.columns = ["Parameter", "Values"]
df_mars_facts = df_mars_facts.set_index("Parameter")
df_mars_facts.to_html()
###Output
_____no_output_____
###Markdown
Mars Hemispheres
###Code
url4 = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url4)
html4 = browser.html
soup4 = bs(html4, 'html.parser')
# First, get a list of all of the hemispheres
links = browser.find_by_css("a.product-item h3")
hemisphere_image_urls = []
# Next, loop through those links, click the link, find the sample anchor, return the href
for i in range(len(links)):
hemisphere = {}
# We have to find the elements on each loop to avoid a stale element exception
browser.find_by_css("a.product-item h3")[i].click()
# Next, we find the Sample image anchor tag and extract the href
sample_elem = browser.find_link_by_text('Sample').first
hemisphere['img_url'] = sample_elem['href']
# Get Hemisphere title
hemisphere['title'] = browser.find_by_css("h2.title").text
# Append hemisphere object to list
hemisphere_image_urls.append(hemisphere)
# Finally, we navigate backwards
browser.back()
hemisphere_image_urls
###Output
_____no_output_____ |
Sarcasm_Classifier/Sarcasm_Classifier.ipynb | ###Markdown
Defining the variables:1. dictionary size2. embedding size3. max length of a single input4. truncating type5. padding type6. token for words not in vocabulary7. defining the size of the train set and remaining will be the test set.
###Code
vocab_size = 10000
embedding_dim = 16
max_length = 90
truc_type ='post'
padding_type = 'post'
oov_tok = "<OOV>"
training_size = 20000
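# These hyperparameters are used throughout: vocab_size caps the tokenizer vocabulary,
# max_length fixes the padded sequence length, oov_tok stands in for out-of-vocabulary
# words, and the first training_size examples form the train set.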
###Output
_____no_output_____
###Markdown
Downloading the dataset.... from kaggel
###Code
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
-O /tmp/sarcasm.json
###Output
--2020-07-01 06:09:17-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json
Resolving storage.googleapis.com (storage.googleapis.com)... 108.177.119.128, 108.177.126.128, 173.194.69.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|108.177.119.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5643545 (5.4M) [application/json]
Saving to: ‘/tmp/sarcasm.json’
/tmp/sarcasm.json 0%[ ] 0 --.-KB/s
/tmp/sarcasm.json 100%[===================>] 5.38M --.-KB/s in 0.04s
2020-07-01 06:09:17 (138 MB/s) - ‘/tmp/sarcasm.json’ saved [5643545/5643545]
###Markdown
Splitting the data into two parts "sentences" and "labels"
###Code
with open('/tmp/sarcasm.json','r') as f:
datastore = json.load(f)
sentences = []
labels = []
for item in datastore:
sentences.append(item['headline'])
labels.append(item['is_sarcastic'])
import pandas as pd
pd.DataFrame({'is_sarcastic': labels}, index=sentences).head()
###Output
_____no_output_____
###Markdown
Splitting the data in training and testing sets
###Code
training_sentences = sentences[0:training_size]
testing_sentences = sentences[training_size:]
training_labels = labels[0:training_size]
testing_labels = labels[training_size:]
###Output
_____no_output_____
###Markdown
Creating token for the words in the sequences.
###Code
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
training_sequences = tokenizer.texts_to_sequences(training_sentences)
training_padded = pad_sequences(training_sequences, maxlen=max_length,
padding=padding_type,truncating= truc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length,
padding=padding_type,truncating= truc_type)
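# Quick sanity check (my addition): every padded sequence should now have length max_length.
assert len(training_padded[0]) == max_length
assert len(testing_padded[0]) == max_length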
import numpy as np
training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
reverse_word_index = dict([(value,key) for (key,value) in word_index.items()])
def decode_sentence(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
print(decode_sentence(training_padded[0]))
print(training_sentences[2])
print(labels[2])
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size,embedding_dim,input_length=max_length),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
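# The Embedding layer maps each word index to a 16-dimensional vector, Flatten turns the
# resulting (max_length, embedding_dim) matrix into a single vector, and the sigmoid
# output gives the probability that a headline is sarcastic.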
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
history = model.fit(training_padded,training_labels, epochs=30,
validation_data = (testing_padded, testing_labels),
verbose=1)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string,"val_"+string])
plt.show()
plot_graphs(history,'accuracy')
plot_graphs(history, 'loss')
###Output
_____no_output_____
###Markdown
Sarcasm Classifier
###Code
import json
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
###Output
_____no_output_____ |
assignment1/features.ipynb | ###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your interests.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
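# Each feature function maps a single image to a 1-D feature vector; extract_features
# runs both functions on every image and concatenates the results into one combined
# feature vector per image.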
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
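As a reminder, the objective minimized here is the same regularized multiclass hinge loss as in the earlier SVM exercise, now applied to the HOG + color-histogram feature vectors $x_i$ (with the bias dimension appended above):
$$L = \frac{1}{N}\sum_{i}\sum_{j \neq y_i} \max\!\left(0,\; w_j^{T} x_i - w_{y_i}^{T} x_i + \Delta\right) \;+\; \lambda \sum_{k}\sum_{l} W_{k,l}^{2},$$
where $y_i$ is the true label, $\Delta$ is the margin (typically 1), and $\lambda$ is the regularization strength being tuned in the next cell.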
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-10, 1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train_pred == y_train)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val_pred == y_val)
results[(lr, reg)] = [train_acc, val_acc]
if val_acc > best_val:
best_val = val_acc
best_svm = svm
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
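For reference, the two-layer network trained below maps a feature vector $x$ to class scores through a single ReLU hidden layer,
$$s = W_2\,\max(0,\; W_1 x + b_1) + b_2,$$
and is trained by minimizing the softmax cross-entropy loss on $s$ plus an L2 penalty on the weights, matching the architecture of the `TwoLayerNet` used earlier in the assignment.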
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=200,
learning_rate=1, learning_rate_decay=0.95,
reg=0, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: %f'%val_acc)
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.55
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
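The color half of the feature vector is just a histogram over the hue channel. A minimal sketch of the idea is shown below; the course's `color_histogram_hsv` in `cs231n/features.py` is the authoritative version and may bin or normalize differently.
```python
import numpy as np
import matplotlib.colors as mcolors

def hue_histogram_sketch(img, nbin=10):
    """Illustrative only: histogram of the hue channel of an RGB image."""
    rgb = img.astype('float32') / 255.0      # rgb_to_hsv expects values in [0, 1]
    hsv = mcolors.rgb_to_hsv(rgb)            # shape (H, W, 3); channel 0 is hue
    hist, _ = np.histogram(hsv[..., 0], bins=nbin, range=(0.0, 1.0))
    return hist.astype('float32')
```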
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=learning_rate, reg=regularization_strength,
num_iters=400, verbose=True)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
current_val = np.mean(y_val == y_val_pred)
if current_val > best_val:
best_val = current_val
best_svm = svm
results[(learning_rate, regularization_strength)] = (np.mean(y_train == y_train_pred), current_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?The misclassified images are mostly related to the correct class template. For the plane class, many of the misclassified pictures have a monochromatic background and a long object body. Trucks are often mistaken for cars, and the bird-class results contain images with green backgrounds. Animals are confused with each other more often, as are the human-made transportation classes. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1  # track the best validation accuracy across the settings tried below
for learning_rate in np.arange(1e-4, 1e-3, 4e-4):
for num_iters in [1000, 1500]:
for reg in [.1, .3, .5]:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=num_iters, batch_size=200,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=reg, verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
if val_acc > best_val:
best_val = val_acc
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.09
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
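# One possible fill-in for the TODO above (illustrative only; the grid defined
# a few lines up is reused, and other iteration counts work just as well):
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=False)
        train_acc = np.mean(svm.predict(X_train_feats) == y_train)
        val_acc = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = svm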
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
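# One possible fill-in for the TODO above (illustrative only; the learning rates
# and regularization strengths are assumptions, not an official solution):
best_val = -1
for lr in [5e-1, 1e0]:
    for reg in [0.0, 1e-5, 1e-3]:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=1500, batch_size=200,
                  learning_rate=lr, learning_rate_decay=0.95,
                  reg=reg, verbose=False)
        val_acc = (net.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val:
            best_val = val_acc
            best_net = net
print('best validation accuracy: %f' % best_val)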
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
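For intuition about the HOG half of the descriptor, a rough equivalent can be computed with scikit-image as sketched below. This is only an illustration under the assumption that scikit-image is installed; the course's own `hog_feature` uses its own cell and bin sizes, so the two will not produce identical vectors.
```python
import numpy as np
from skimage.feature import hog

def hog_sketch(img):
    """Illustrative only: HOG descriptor of a grayscale version of an RGB image."""
    gray = img.astype('float32').mean(axis=2)   # crude RGB -> grayscale
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```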
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, lr, rs)
train = np.mean((svm.predict(X_train_feats) == y_train))
val = np.mean((svm.predict(X_val_feats) == y_val))
results[(lr,rs)] = (train, val)
if val > best_val:
best_svm = svm
best_val = val
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ No Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
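The `Solver` configuration below uses plain SGD with a per-epoch learning-rate decay; each parameter update and decay step has the form
$$w \leftarrow w - \eta\,\nabla_w L, \qquad \eta \leftarrow \texttt{lr\_decay}\cdot\eta \ \text{(after every epoch)},$$
so with `lr_decay = 0.95` the step size shrinks by 5% per epoch over the ten training epochs.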
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.fc_net import TwoLayerNet
from cs231n.solver import Solver
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_val = 0
lrs = [1e-1,1e-2,1e-3, 1e-4]
regs = [0, 1e-4]
data = {
'X_train': X_train_feats, # training data
'y_train': y_train, # training labels
'X_val': X_val_feats, # validation data
'y_val': y_val, # validation labels
'X_test': X_test_feats,
'y_test': y_test
}
total = len(lrs) * len(regs)
try_num = 0
for lr in lrs:
for reg in regs:
try_num += 1
print("Try : %d / %d" %(try_num, total))
net = TwoLayerNet(input_dim, hidden_dim, num_classes)  # fresh weights for every setting so runs do not share training history
net.reg = reg
solver = Solver(net, data,
update_rule = 'sgd',
optim_config = {'learning_rate' : lr,},
lr_decay = 0.95,
num_epochs = 10, batch_size = 100,
verbose = False
)
solver.train()
solver_val = solver.best_val_acc
print("Accuracy : %f, Args : %f, %f" %(solver_val, lr, reg))
if solver_val > best_val:
best_val = solver_val
best_net = net
best_solver = solver
print('best validation accuracy : %f' %best_val)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
y_test_pred = np.argmax(best_net.loss(data['X_test']), axis=1)
test_acc = (y_test_pred == data['y_test']).mean()
print(test_acc)
###Output
0.607
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from comp411.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from comp411.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'comp411/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from comp411.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from comp411.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()  # re-initialize the classifier for each hyperparameter setting
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
y_train_pred = svm.predict(X_train_feats)
acc_train = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (acc_train, acc_val)
if acc_val > best_val:
best_val = acc_val
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$*They do make sense in most cases, since this method of classification relies mainly on common non-spatial features of the pictures, such as colors and edges. For example, a deer standing against a blue background gets classified as a ship, because ship images also show a brown object in the middle of a blue background and the edges look similar.* Neural Network on image featuresEarlier in this assignment we saw that training a three-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 58% classification accuracy.
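The random search below draws the learning rate and regularization strength log-uniformly, i.e.
$$\text{lr} = 10^{u}, \quad u \sim \mathrm{Uniform}(u_{\min},\, u_{\max}),$$
so that every order of magnitude in the range is equally likely to be sampled; for these hyperparameters the order of magnitude usually matters far more than fine differences within it.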
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from comp411.classifiers.neural_net import ThreeLayerNet
np.random.seed(1)
input_dim = X_train_feats.shape[1]
hidden_dim = 300
num_classes = 10
net = ThreeLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a three-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
def generate_random_hyperparams(lr_min, lr_max, reg_min, reg_max, h_min, h_max):
lr = 10**np.random.uniform(lr_min,lr_max)
reg = 10**np.random.uniform(reg_min,reg_max)
hidden = np.random.randint(h_min, h_max)
return lr, reg, hidden
for i in range(20):
lr, reg, hidden_dim = generate_random_hyperparams(-1, 0, -7, -4, 10, 500)
# Create a three-layer network
net = ThreeLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2500, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
# Predict on the training set
train_accuracy = (net.predict(X_train_feats) == y_train).mean()
# Predict on the validation set
val_accuracy = (net.predict(X_val_feats) == y_val).mean()
# Save best values
if val_accuracy > best_val:
best_val = val_accuracy
best_net = net
# Print results
print('lr %e reg %e hid %d train accuracy: %f val accuracy: %f' % (lr, reg, hidden_dim, train_accuracy, val_accuracy))
print('best validation accuracy achieved: %f' % best_val)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.578
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your interests.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
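As a reminder (the standard multiclass SVM formulation, assumed to match the one developed earlier in the assignment up to the exact regularization convention used in `LinearSVM`), the loss being minimized over feature vectors $x_i$ with labels $y_i$ is

$$L = \frac{1}{N}\sum_{i}\sum_{j \neq y_i}\max\left(0,\; w_j^T x_i - w_{y_i}^T x_i + \Delta\right) + \lambda \sum_k \sum_l W_{k,l}^2,$$

with margin $\Delta = 1$; only the learning rate and the regularization strength $\lambda$ are tuned in the cell below.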
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=3000)
y_train_pred = svm.predict(X_train_feats)
train_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_accuracy = np.mean(y_val == y_val_pred)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
results[(lr, reg)] = train_accuracy, val_accuracy
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
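For reference, a minimal sketch of the architecture being trained here (affine, ReLU, affine, followed by a softmax loss during training); this is an illustrative assumption, not the actual `TwoLayerNet` code:

```python
import numpy as np

def two_layer_scores_sketch(X, W1, b1, W2, b2):
    # First affine layer followed by a ReLU nonlinearity
    hidden = np.maximum(0, X.dot(W1) + b1)
    # Second affine layer produces one score per class
    scores = hidden.dot(W2) + b2
    return scores
```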
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = [5e-1, 8e-1]
reg_strengths = [1e-6]
batch_sizes = [400]
best_val = -1  # reset so this search does not inherit the SVM cell's best value
np.random.seed(0)
for lr in learning_rates:
for reg in reg_strengths:
        for bs in batch_sizes:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2000, batch_size=bs,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
# Predict on the training set
train_accuracy = (net.predict(X_train_feats) == y_train).mean()
# Predict on the validation set
val_accuracy = (net.predict(X_val_feats) == y_val).mean()
if(val_accuracy > best_val):
best_val = val_accuracy
best_net = net
best_stats = stats
print('lr %e reg %e batch %d train acc: %f val acc: %f' % (
lr, reg, bs, train_accuracy, val_accuracy))
print('best validation accuracy: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.582
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
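A rough sketch of how a hue-channel color histogram like `color_histogram_hsv` might be computed (an assumption for illustration, not the actual `cs231n.features` code):

```python
import numpy as np
import matplotlib.colors as colors

def color_histogram_hsv_sketch(img, nbin=10):
    # Assumes pixel values in [0, 255]; hue comes out in [0, 1] after conversion
    hsv = colors.rgb_to_hsv(img / 255.0)
    hist, _ = np.histogram(hsv[:, :, 0], bins=nbin, range=(0, 1), density=True)
    return hist
```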
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs, num_iters = 1500)
train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
results[(lr, rs)] = (train_accuracy, val_accuracy)
if(val_accuracy > best_val):
best_val = val_accuracy
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Misclassified examples tend to share a similar background color, overall shape, or other low-level features with the predicted class. Yes, this makes sense, since the classifier only sees HOG and color-histogram features. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
learning_rate = [10**i for i in range(-3, 2)]
regularization_strength = [10**i for i in range(-5, -1)]
learning_rate_decay = [1, 0.95, 0.90]
hidden_size = [100*i for i in range(1, 6)]
# learning_rate = [0.1]
# regularization_strength = [0.01]
# learning_rate_decay = [1]
# hidden_size = [1000]
print(hidden_size, learning_rate, regularization_strength, learning_rate_decay)
best_val = -1
result = {}
for hs in hidden_size:
for lr in learning_rate:
for rs in regularization_strength:
for lrd in learning_rate_decay:
net = TwoLayerNet(input_dim, hs, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2000, batch_size=200,
learning_rate=lr, learning_rate_decay=lrd,
reg=rs, verbose=False)
val_acc = np.mean(net.predict(X_val_feats) == y_val)
tr_acc = np.mean(net.predict(X_train_feats) == y_train)
if(val_acc > best_val):
best_net = net
best_val = val_acc
result[(hs, lr, rs, lrd)] = val_acc
print(hs, lr, rs, lrd, tr_acc, val_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.584
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
/home/josh/anaconda2/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
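Roughly, HOG is built from local image gradients: at each pixel the gradient magnitude and orientation are (in the usual formulation)

$$m(x, y) = \sqrt{g_x^2 + g_y^2}, \qquad \theta(x, y) = \arctan\!\left(\frac{g_y}{g_x}\right),$$

and orientations are accumulated into per-cell histograms weighted by magnitude; the exact cell and bin sizes here are whatever `hog_feature` uses.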
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=True)
train_acc = np.mean(y_train == svm.predict(X_train_feats))
val_acc = np.mean(y_val == svm.predict(X_val_feats))
results[(lr, reg)] = (train_acc, val_acc)
if(val_acc > best_val):
best_val = val_acc
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print X_train_feats.shape
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = [5e-1,1]
regularization_strengths = [5e-4,1e-3,5e-3]
for lr in learning_rates:
for reg in regularization_strengths:
# Train the network
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=2000, batch_size=300, learning_rate=lr,
                          learning_rate_decay=0.95, reg=reg, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print 'Validation accuracy: ', val_acc
        results[(lr, reg)] = (stats, val_acc)
if(val_acc > best_val):
best_val = val_acc
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print test_acc
###Output
0.573
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print(X_train.shape)
###Output
(49000, 32, 32, 3)
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
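The "add a bias dimension" preprocessing step below is the usual bias trick: appending a constant 1 to every feature vector folds the bias into the weight matrix, so a linear classifier can be written as

$$W x + b = \begin{bmatrix} W & b \end{bmatrix} \begin{bmatrix} x \\ 1 \end{bmatrix}.$$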
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
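If the grid below turns out to be too coarse, one convenient option (a suggestion, not part of the original notebook; the specific values are assumptions) is to build log-spaced grids:

```python
import numpy as np

# Hypothetical finer, log-spaced hyperparameter grids
learning_rates = np.logspace(-9, -7, num=5)
regularization_strengths = np.logspace(4, 6, num=5)
```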
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for cur_lr in learning_rates: #go over the learning rates
for cur_reg in regularization_strengths:#go over the regularization strength
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=cur_lr, reg=cur_reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
# FIX storing results
results[(cur_lr,cur_reg)] = (train_acc,val_acc)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?Yes: for a lot of the misclassifications the objects have very similar contours/colours to the predicted class, e.g. deer, horses, and dogs look very much like each other when viewed side-on. The HOG feature is not good enough to differentiate between these. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=200,
learning_rate=1, learning_rate_decay=0.95,
reg=0.0, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.563
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for ii in xrange(len(learning_rates)):
for jj in xrange(len(regularization_strengths)):
svm = LinearSVM()
lr = learning_rates[ii]
reg = regularization_strengths[jj]
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
train_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_accuracy = np.mean(y_val == y_val_pred)
results[(lr,reg)] = (train_accuracy,val_accuracy)
if(val_accuracy > best_val):
best_val = val_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print X_train_feats.shape
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = 1-np.fabs(0.1*np.random.randn(5))
regs = 0.0095*(1+0.5*np.random.randn(5))
# learning_rates = [1]
# regs = [0.01]
best_val_acc = 0
best_ii = None
best_jj = None
best_stats = None
for ii in xrange(len(learning_rates)):
for jj in xrange(len(regs)):
lr = learning_rates[ii]
rg = regs[jj]
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=400,
learning_rate=lr, learning_rate_decay=0.95,
reg=rg, verbose=True)
val_acc = (net.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val_acc:
            best_val_acc = val_acc  # track the best value so later configs must beat it
            best_ii = ii
            best_jj = jj
            best_net = net
            best_stats = stats
print 'lr = %f, reg = %f' % (lr,rg)
print 'Validation Accuracy = %f' % val_acc
print 'Best Learning Rates: %f' % (learning_rates[best_ii])
print 'Best Regularization Parameter: %f' % (regs[best_jj])
plt.subplot(2, 1, 1)
plt.plot(best_stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(best_stats['train_acc_history'], label='train')
plt.plot(best_stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.legend()
plt.show()
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print test_acc
###Output
0.538
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
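The cell below relies on `extract_features` from `cs231n.features`. As a rough illustration only (not the library implementation), such a routine can be sketched as a loop that applies each feature function to each image and concatenates the results; here each row holds one image's concatenated features, and every feature function is assumed to return a fixed-length 1-D array:
```python
import numpy as np

def extract_features_sketch(imgs, feature_fns):
    # imgs: array of shape (N, H, W, C); feature_fns: list of callables,
    # each returning a 1-D feature vector per image. Illustrative sketch only.
    feats = []
    for img in imgs:
        per_image = [np.asarray(fn(img)).ravel() for fn in feature_fns]
        feats.append(np.concatenate(per_image))
    return np.stack(feats)  # shape (N, total_feature_dim)
```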
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
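Because a bias dimension of ones was appended to the feature matrices above, the SVM can fold its bias vector into the weight matrix. A quick, self-contained check of that equivalence (the shapes here are illustrative, not the actual feature dimensionality):
```python
import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(10, 155)     # 10 classes, 155 feature dims (illustrative)
b = rng.randn(10)
x = rng.randn(155)

scores_explicit = W.dot(x) + b                      # separate bias term
W_aug = np.hstack([W, b[:, None]])                  # fold bias into the weights
x_aug = np.concatenate([x, [1.0]])                  # append the bias feature
scores_folded = W_aug.dot(x_aug)

print(np.allclose(scores_explicit, scores_folded))  # True
```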
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
#learning_rates = list(map(lambda x: x*1e-9, np.arange(0.9, 2, 0.1)))
#regularization_strengths = list(map(lambda x: x*1e4, np.arange(1, 10)))
results = {}
best_val = -1
best_svm = None
iters = 2000
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
print('Training with lr={0}, reg={1}'.format(lr, reg))
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=iters)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
        results[(lr, reg)] = (train_accuracy, validation_accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?It makes sense given that we are using color-histogram features, so for some results the background appears to drive the prediction: for example, a blue or flat background suggests a plane, trucks are labeled as cars (similar street and background) and vice versa, etc. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = np.arange(0.1, 1.6, 0.1)
regularization_params = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1]
results = {}
best_val_accuracy = 0
for lr in learning_rates:
for reg in regularization_params:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=2000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95, reg=reg)
val_accuracy = (net.predict(X_val_feats) == y_val).mean()
if val_accuracy > best_val_accuracy:
best_val_accuracy = val_accuracy
best_net = net
print('LR: {0} REG: {1} ACC: {2}'.format(lr, reg, val_accuracy))
print('best validation accuracy achieved during cross-validation: {0}'.format(best_val_accuracy))
net = best_net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.594
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs, num_iters=1500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
acc_tr = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (acc_tr, acc_val)
if acc_val > best_val:
best_val = acc_val
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
learning_rates = [1e0]
regularization_strengths = [1e-3]
best_val = -1
results = {}
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
for lr in learning_rates:
for rs in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2000, batch_size=400,
learning_rate=lr, learning_rate_decay=0.90,
reg=rs, verbose=True)
y_train_pred = net.predict(X_train_feats)
acc_tr = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (acc_tr, acc_val)
if acc_val > best_val:
best_val = acc_val
best_net = net
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print ('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print ('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.594
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
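As a very rough intuition for what HOG-style features capture, one can compute per-pixel gradient orientations and histogram them weighted by gradient magnitude. The sketch below (assuming a grayscale image as a 2-D array) deliberately ignores the cell/block structure and normalization of a real HOG descriptor and is not the `cs231n.features` implementation:
```python
import numpy as np

def orientation_histogram_sketch(gray_img, nbins=9):
    # gray_img: 2-D float array. Compute image gradients, then histogram
    # their orientations weighted by gradient magnitude. Illustrative only.
    gy, gx = np.gradient(gray_img.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # angles in [-pi, pi]
    hist, _ = np.histogram(orientation, bins=nbins,
                           range=(-np.pi, np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-12)  # normalized orientation histogram
```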
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
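# One possible way to fill in this TODO -- an illustrative sketch, not a tuned
# solution, using the LinearSVM train/predict API exactly as in the cells above:
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500)
        train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
        val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm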
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
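# One possible way to fill in this TODO -- an illustrative sketch, not a tuned
# solution: try a few learning rates and regularization strengths and keep the
# network with the best validation accuracy. The candidate values below are
# assumptions for illustration only.
best_val_acc = -1
for lr in [1e-1, 5e-1, 1e0]:
    for reg in [1e-4, 1e-3]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=1500, batch_size=200,
                        learning_rate=lr, learning_rate_decay=0.95, reg=reg)
        val_acc = (candidate.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val_acc:
            best_val_acc, best_net = val_acc, candidate
net = best_net  # so the evaluation below runs on the selected model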
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
X_train_feats.shape, X_train.reshape(49000, -1).shape
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [4e-5, 3.8e-5, 3.5e-5, 3.2e-5, 3e-5, 2.8e-5]
learning_rates_small = [3e-6, 2e-6, 1.5e-6, 1e-6, 0.8e-6]
regularization_strengths = [5e2, 6e2, 7e2, 8e2, 1e3]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# randomized search function
def randomizedSearch(lr, reg):
lrId = np.random.randint(0, len(lr), size=1)[0]
regId = np.random.randint(0, len(reg), size=1)[0]
return lr[lrId], reg[regId]
# train & evaluate
for _ in range(20):
lr1, reg = randomizedSearch(learning_rates, regularization_strengths)
svm_clf = LinearSVM()
history1 = svm_clf.train(X_train_feats, y_train, learning_rate=lr1, reg=reg, num_iters=180, batch_size=128)
lr2, reg = randomizedSearch(learning_rates_small, [reg])
history2 = svm_clf.train(X_train_feats, y_train, learning_rate=lr2, reg=reg, num_iters=20, batch_size=128)
y_pred = svm_clf.predict(X_val_feats)
acc_val = np.mean(y_pred==y_val)
acc_train = np.mean(svm_clf.predict(X_train_feats)==y_train)
if acc_val > best_val:
best_val = acc_val
best_svm = svm_clf
results[(lr1, lr2, reg)] = (acc_train, acc_val)
print('lr1: %f lr2: %f regularization strength: %f training accuracy: %f val accuracy: %f' % (lr1, lr2, reg, acc_train, acc_val))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr1, lr2, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr1, lr2, reg)]
print('lr1 %e lr2 %e reg %e train accuracy: %f val accuracy: %f' % (
lr1, lr2, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ The system tends to misclassify images in which the true-class object appears against an environment typical of the predicted class; classes with similar shapes or colors are also confused. In both cases this makes sense: since we use only HOG and color-histogram features, we capture only texture and color, so the system is misled by the texture and colors of the surroundings. For example, it mixes up planes, birds, and ships, whose images share very similar background colors; the same happens for cats and dogs, or trucks and cars, which look alike. For some instances, however, the mistakes do not make sense, e.g. when the system predicts that a truck is a cat, or that a dog is a bird. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
num_classes = 10
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_net = None
best_val = -1
def generate_random_hyperparams(lr_min, lr_max, reg_min, reg_max, dim_min, dim_max):
lr = 10**np.random.uniform(lr_min, lr_max)
reg = 10**np.random.uniform(reg_min, reg_max)
dim = np.random.randint(dim_min, dim_max)
return lr, reg, dim
for _ in range(20):
lr, reg, dim = generate_random_hyperparams(-1, 0, -7, -4, 10, 500)
net = TwoLayerNet(input_size=input_dim, hidden_size=dim, output_size=num_classes)
history = net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=lr,
reg=reg, batch_size=128, num_iters=4000, EarlyStopping=True, patience=700)
y_pred = net.predict(X_val_feats)
acc_val = np.mean(y_pred==y_val)
acc_train = np.mean(net.predict(X_train_feats)==y_train)
if acc_val > best_val:
best_val = acc_val
best_net = net
print('lr: %f reg: %f hidden size: %d train acc: %f val acc: %f' % (lr, reg, dim, acc_train, acc_val))
print('best val accuracy: %f' % best_val)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.599
###Markdown
--- IMPORTANTThis is the end of this question. Please do the following:1. Click `File -> Save` to make sure the latest checkpoint of this notebook is saved to your Drive.2. Execute the cell below to download the modified `.py` files back to your drive.
###Code
import os
FOLDER_TO_SAVE = os.path.join('drive/My Drive/', FOLDERNAME)
FILES_TO_SAVE = []
for files in FILES_TO_SAVE:
with open(os.path.join(FOLDER_TO_SAVE, '/'.join(files.split('/')[1:])), 'w') as f:
f.write(''.join(open(files).readlines()))
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
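As a rough illustration of what the hue-channel color histogram captures, a feature of this kind can be sketched with `matplotlib.colors.rgb_to_hsv` and `np.histogram`. This is only a sketch; the actual `color_histogram_hsv` in `cs231n.features` may bin and normalize differently:
```python
import numpy as np
import matplotlib.colors as mcolors

def hue_histogram_sketch(img, nbin=10):
    # img: (H, W, 3) array with RGB values in [0, 255]. Illustrative only.
    hsv = mcolors.rgb_to_hsv(img.astype(float) / 255.0)
    hue = hsv[..., 0]  # hue channel, values in [0, 1]
    hist, _ = np.histogram(hue, bins=nbin, range=(0.0, 1.0), density=True)
    return hist
```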
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
num_train = X_train.shape[0]
num_val = X_val.shape[0]
for lr in learning_rates:
for reg in regularization_strengths:
key = tuple([lr,reg])
svm = LinearSVM()
        # Pass the swept hyperparameters; otherwise every grid point trains
        # with the default learning rate and regularization strength.
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
y_train_pred = svm.predict(X_train_feats)
num_corr = np.sum(y_train_pred==y_train)
train_acc = num_corr/num_train
y_val_pred = svm.predict(X_val_feats)
num_corr = np.sum(y_val==y_val_pred)
val_acc = num_corr/num_val
value = tuple([train_acc,val_acc])
results[key] = value
if val_acc>best_val:
best_val = val_acc
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_val = -1
learning_rates = [5e-2, 1e-1, 5e-1]
regularization_strengths = [1e-4, 5e-4, 1e-3]
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
        # Re-initialize the network for each hyperparameter setting so every
        # run starts from fresh weights rather than continuing the previous one.
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        stat = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
y_train_pred = net.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (train_acc,val_acc)
if best_val<val_acc:
best_val = val_acc
best_net = net
best_stats = stat
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_acc, val_acc))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.597
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print(f'{X_train.shape}')
###Output
(49000, 32, 32, 3)
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
print(X_train_feats.shape)
###Output
(49000, 155)
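For reference, the `extract_features` call used above can be thought of roughly as the sketch below. This is an illustrative stand-in for the cs231n helper (the real implementation may differ in details): it evaluates every feature function on every image and concatenates the per-image vectors into an (N, D) matrix.

```python
import numpy as np

def extract_features_sketch(imgs, feature_fns, verbose=False):
    # Illustrative stand-in, not the actual cs231n extract_features.
    num_images = imgs.shape[0]
    # Evaluate all feature functions on the first image to get the total dimension.
    first = np.concatenate([np.asarray(fn(imgs[0])).ravel() for fn in feature_fns])
    feats = np.zeros((num_images, first.size))
    feats[0] = first
    for i in range(1, num_images):
        feats[i] = np.concatenate([np.asarray(fn(imgs[i])).ravel() for fn in feature_fns])
        if verbose and i % 1000 == 0:
            print('Done extracting features for %d / %d images' % (i, num_images))
    return feats
```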
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
it = max(X_train_feats.shape[0] // 100, 1)
num_epoch = 5
it *= num_epoch
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=it, batch_size=500, verbose=False)
train_acc = np.mean(svm.predict(X_train_feats) == y_train)
val_acc = np.mean(svm.predict(X_val_feats) == y_val)
results[(lr, reg)] = (train_acc, val_acc)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
print('some pictures that were classified incorrectly.')
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
# show some pictures that were classified correctly
print('some pictures that were classified correctly.')
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test == cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
some pictures that were classified incorrectly.
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$For example, the picture in row 5, column 1 is a bird, but a plane also has two wings, so they might share similar HOG features because both contain some diagonal gradients. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_val_acc = 0.0
best_net = None
best_lr = 1e-5
best_hs = 200
best_stats = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
def plot_history(stats):
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.legend()
plt.show()
def tune(lr, hs, reg, verbose=False):
net = TwoLayerNet(input_dim, hs, num_classes)
batch_size = 200
num_epoch = 10
num_iters = max(X_train_feats.shape[0] // batch_size, 1) * num_epoch
stats = net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=lr, reg=reg,
learning_rate_decay=0.95, batch_size=batch_size, num_iters=num_iters, verbose=verbose)
val_acc = np.mean(net.predict(X_val_feats) == y_val)
train_acc = np.mean(net.predict(X_train_feats) == y_train)
print(f'lr:{lr} hidden_size:{hs} lr_reg: {reg} train_acc:{train_acc} val_acc:{val_acc}')
return val_acc, net, stats
# tuning process is in the code cell below
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_val_acc = 0.0
best_net = None
best_lr = 5e-2
best_hs = 500
best_stats = None
best_reg = 5e-6
learning_rates = [1e-2, 5e-2, 1e-1, 5e-1]
hidden_dim = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
lr_reg = [1e-4, 2e-4, 3e-4, 7e-4, 2e-3]
# for lr in learning_rates:
# val_acc, net, stats = tune(lr, best_hs, best_reg, False)
# if val_acc > best_val_acc:
# best_lr = lr
# best_net = net
# best_stats = stats
# best_val_acc = val_acc
# for hs in hidden_dim:
# val_acc, net, stats = tune(best_lr, hs, best_reg, False)
# if val_acc > best_val_acc:
# best_hs = hs
# best_net = net
# best_stats = stats
# best_val_acc = val_acc
for reg in lr_reg:
val_acc, net, stats = tune(best_lr, best_hs, reg, False)
if val_acc > best_val_acc:
best_reg = reg
best_net = net
best_stats = stats
best_val_acc = val_acc
print(f'best val: {best_val_acc}, best_lr:{best_lr} best_hidden_size:{best_hs} best_lr_reg:{best_reg}')
plot_history(best_stats)
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.57
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
num_iters=1000
batch_size=200
for lr in learning_rates:
for reg in regularization_strengths:
svm=LinearSVM()
loss_hist=svm.train(X_train_feats,y_train,lr,reg,num_iters,batch_size,verbose=False)
train_acc=np.mean(svm.predict(X_train_feats)==y_train)
val_acc=np.mean(svm.predict(X_val_feats)==y_val)
if val_acc>best_val:
best_val=val_acc
best_svm=svm
results[(lr,reg)]=(train_acc,val_acc)
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
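# A hedged sketch of one way to complete this TODO: the learning rates and
# regularization strengths below are illustrative guesses, not tuned values.
best_val = -1
for lr in [1e-1, 5e-1]:
    for reg in [1e-4, 1e-3]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=1500, batch_size=200,
                        learning_rate=lr, learning_rate_decay=0.95,
                        reg=reg, verbose=False)
        val_acc = np.mean(candidate.predict(X_val_feats) == y_val)
        if val_acc > best_val:
            best_val = val_acc
            best_net = candidate
print('best validation accuracy: %f' % best_val)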
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
--- IMPORTANTThis is the end of this question. Please do the following:1. Click `File -> Save` to make sure the latest checkpoint of this notebook is saved to your Drive.2. Execute the cell below to download the modified `.py` files back to your drive.
###Code
import os
FOLDER_TO_SAVE = os.path.join('drive/My Drive/', FOLDERNAME)
FILES_TO_SAVE = []
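# NOTE: FILES_TO_SAVE is left empty here, so the loop below copies nothing;
# list the relative paths of any edited .py files to have them written back.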
for files in FILES_TO_SAVE:
with open(os.path.join(FOLDER_TO_SAVE, '/'.join(files.split('/')[1:])), 'w') as f:
f.write(''.join(open(files).readlines()))
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
!activate cs231n
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_acc=np.mean(y_train == y_train_pred)
val_acc=np.mean(y_val == y_val_pred)
results[(lr,reg)]=(train_acc,val_acc)
if val_acc>best_val:
best_val=val_acc
best_svm=svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_val = -1
learning_rates = [5e-2, 1e-1, 5e-1]
regularization_strengths = [1e-4, 5e-4, 1e-3]
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
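        # Note: `net` is created once outside this loop, so each (lr, reg) setting
        # continues training from the previous setting's weights rather than from
        # a fresh initialization.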
stat = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
y_train_pred = net.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (train_acc,val_acc)
if best_val<val_acc:
best_val = val_acc
best_net = net
best_stats = stat
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_acc, val_acc))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.585
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 3e-9, 5e-9, 7e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500, batch_size=64)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_accuracy = np.mean(y_train == y_train_pred)
val_accuracy= np.mean(y_val == y_val_pred)
results[(lr, reg)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
best_lr = lr
best_reg = reg
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
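# A hedged sketch for this TODO: train the net defined above with illustrative
# (untuned) hyperparameters and keep it as best_net.
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=1500, batch_size=200,
                  learning_rate=5e-1, learning_rate_decay=0.95,
                  reg=1e-3, verbose=False)
val_acc = np.mean(net.predict(X_val_feats) == y_val)
print('Validation accuracy: %f' % val_acc)
best_net = net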
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
results[(lr,reg)] = (train_acc,val_acc)
if best_val<val_acc:
best_val = val_acc
best_svm = svm
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=3000, batch_size=100,
learning_rate=1, learning_rate_decay=0.90,
reg=0.001, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
best_net = net
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.567
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
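For intuition, the next cell is a minimal, illustrative sketch of what such a feature-extraction helper might look like. The function name, the `verbose` progress printing, and the row-per-image layout are assumptions chosen to match how the features are used later in this notebook; the graded code keeps using `extract_features` from `cs231n.features`.
###Code
# Illustrative sketch only -- not the cs231n implementation.
# Assumes each feature function maps a single image array to a 1-D feature vector.
import numpy as np

def extract_features_sketch(imgs, feature_fns, verbose=False):
    num_images = imgs.shape[0]
    if num_images == 0:
        return np.zeros((0, 0))
    # Use the first image to find out how long the concatenated feature vector is.
    first_feats = [np.asarray(fn(imgs[0])).ravel() for fn in feature_fns]
    feats = np.zeros((num_images, sum(f.size for f in first_feats)))
    feats[0] = np.concatenate(first_feats)
    for i in range(1, num_images):
        feats[i] = np.concatenate([np.asarray(fn(imgs[i])).ravel() for fn in feature_fns])
        if verbose and i % 1000 == 0:
            print('Done extracting features for %d / %d images' % (i, num_images))
    return feats
###Output
_____no_output_____
###Markdown
The real implementations of `hog_feature`, `color_histogram_hsv`, and `extract_features` live in `cs231n.features` and are used directly below.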
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
# learning_rates = [1e-9, 1e-8, 1e-7]
# regularization_strengths = [5e4, 5e5, 5e6]
learning_rates = [1e-3, 3e-3, 1e-2, 3e-2]
regularization_strengths = [0]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for l in learning_rates:
for r in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=l, reg=r,
batch_size=256, num_iters=1000, verbose=True)
y_train_pred = svm.predict(X_train_feats)
training_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(l, r)] = training_accuracy, validation_accuracy
if (validation_accuracy > best_val):
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = None
best_net = None
results = {}
best_val = -1
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = [6e-2, 1e-1]
regularization_strengths = [0, 3e-4]
for l in learning_rates:
for r in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
loss_hist = net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=l, reg=r,
batch_size=256, num_iters=4000, learning_rate_decay=0.95, verbose=True)
y_train_pred = net.predict(X_train_feats)
training_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(l, r)] = training_accuracy, validation_accuracy
if (validation_accuracy > best_val):
best_val = validation_accuracy
best_net = net
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for learning_rate in learning_rates:
    for reg in regularization_strengths:
        # Create a fresh classifier for each setting so that best_svm really
        # holds the best model rather than the last one trained.
        svm = LinearSVM()
        loss_hist = svm.train(X_train_feats, y_train, learning_rate=learning_rate, reg=reg,
                              num_iters=3000, verbose=False)
        y_train_pred = svm.predict(X_train_feats)
        train_accuracy = np.mean(y_train == y_train_pred)
        y_val_pred = svm.predict(X_val_feats)
        val_accuracy = np.mean(y_val == y_val_pred)
        results[(learning_rate, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
learning_rates = [1e-2,1e-1]
regs = [5e-6]
num_iter = [1000,2000]
best_val = 0
for num_iters in num_iter:
    for learning_rate in learning_rates:
        for reg in regs:
            # Re-initialize the network for each setting so that best_net holds
            # an independently trained model for the best hyperparameters.
            net = TwoLayerNet(input_dim, hidden_dim, num_classes)
            net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=learning_rate,
                      learning_rate_decay=0.95, reg=reg, num_iters=num_iters, batch_size=200, verbose=True)
            y_val_pred = net.predict(X_val_feats)
            val_acc = np.mean(y_val == y_val_pred)
            if val_acc > best_val:
                best_val = val_acc
                best_net = net
            print('Validation accuracy:', val_acc, '\t', 'hyperparams (num_iters, lr, reg):', [num_iters, learning_rate, reg])
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.565
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import time
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [2e-5]
regularization_strengths = [0.001]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
print("Starting hyperparameter search...")
NUM_ITERATIONS = 20000
PRINT_LOSS_HISTORY = True
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train_feats, y_train,
learning_rate=lr,
reg=reg,
num_iters=NUM_ITERATIONS,
learning_rate_decay=0.99,
verbose=False)
toc = time.time()
print('Training took %fs' % (toc - tic), ' lr=', lr, ' reg=', reg)
y_train_pred = svm.predict(X_train_feats)
training_accuracy = np.mean(y_train == y_train_pred)
# print('training accuracy: %f' % (training_accuracy, ))
y_val_pred = svm.predict(X_val_feats)
validation_accuracy = np.mean(y_val == y_val_pred)
# print('validation accuracy: %f' % validation_accuracy)
if PRINT_LOSS_HISTORY:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
results[(lr,reg)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
def plot_stats(stats):
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.legend()
plt.show()
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
results = {}
best_val = -1
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Provided as a reference. You may or may not want to change these hyperparameters
learning_rates = [5e-1]
regularization_strengths = [1e-5]
print("Starting hyperparameter search...")
NUM_ITERATIONS = 6000
PRINT_LOSS_HISTORY = True
for lr in learning_rates:
for reg in regularization_strengths:
print('\n\n')
print('=== PARAMETERS ===', ' lr=', lr, ' reg=', reg)
tic = time.time()
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=NUM_ITERATIONS,
batch_size=200,
learning_rate_decay=0.95,
learning_rate=lr,
reg=reg,
verbose=False)
toc = time.time()
print('Training took %fs' % (toc - tic))
if PRINT_LOSS_HISTORY:
plot_stats(stats)
# evaluate
training_accuracy = np.average(stats['train_acc_history'][-3:])
validation_accuracy = np.average(stats['val_acc_history'][-3:])
print('training_accuracy =', training_accuracy)
print('validation_accuracy =', validation_accuracy)
# store results
results[(lr,reg)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_net = net
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.57
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
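# A minimal sketch of the sweep the TODO above asks for (the iteration count is
# an assumption; it mirrors the pattern used in the other notebooks in this file).
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=False)
        train_accuracy = np.mean(y_train == svm.predict(X_train_feats))
        val_accuracy = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm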
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
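# A minimal sketch for this TODO (the hyperparameter values are assumptions,
# roughly in line with the tuned values used elsewhere in this file).
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=3000, batch_size=200,
                  learning_rate=5e-1, learning_rate_decay=0.95,
                  reg=1e-5, verbose=True)
val_acc = np.mean(net.predict(X_val_feats) == y_val)
print('Validation accuracy:', val_acc)
best_net = net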
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [6e-7, 7e-7, 8e-7, 1e-6]
regularization_strengths = [4e4,5e4,6e4,1e5]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for learning_rate in learning_rates:
for reg in regularization_strengths:
# train and predict
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate = learning_rate, reg=reg)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
# store and update results
train_accuracy = np.mean(y_train==y_train_pred)
val_accuracy = np.mean(y_val==y_val_pred)
results[(learning_rate,reg)]=(train_accuracy,val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
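# --- Illustrative follow-up (a sketch, not part of the original assignment) ---
# A common next step after a coarse grid is to refine the search around the best
# (learning rate, regularization) pair found above. The 0.5x/1x/2x multipliers
# below are arbitrary choices for illustration.
best_lr, best_reg = max(results, key=lambda k: results[k][1])
for lr_scale in (0.5, 1.0, 2.0):
    for reg_scale in (0.5, 1.0, 2.0):
        lr, reg = best_lr * lr_scale, best_reg * reg_scale
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(y_train == svm.predict(X_train_feats))
        val_acc = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val, best_svm = val_acc, svm
print('best validation accuracy after refinement: %f' % best_val)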
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ It seems that, to some extent, the misclassified pictures either have a background that would be typical for the predicted class (e.g., for the plane class, we see pictures dominated by large homogeneous patches of color), or contain an object whose outline or shape resembles a typical object of that class. This makes sense, since the model is trained on the extracted HOG and HSV color-histogram features. Overall, then, the misclassified examples are plausible given what those features capture. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
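As a concrete illustration of the workflow just described, the sketch below cross-validates a two-layer network on the extracted features (assuming the feature matrices with the bias column removed, as done in the following cell). The `TwoLayerNet` interface matches how it is used later in this notebook; the learning rates and regularization strengths are illustrative guesses rather than tuned values, and this is only a sketch, not the assignment's reference solution.
```python
import numpy as np
from cs231n.classifiers.neural_net import TwoLayerNet

input_dim, hidden_dim, num_classes = X_train_feats.shape[1], 500, 10
results, best_val, best_net = {}, -1, None
for lr in [1e-1, 5e-1]:                 # illustrative ranges only
    for reg in [1e-4, 1e-3]:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=2000, batch_size=200,
                  learning_rate=lr, learning_rate_decay=0.95, reg=reg)
        val_acc = np.mean(net.predict(X_val_feats) == y_val)
        results[(lr, reg)] = val_acc
        if val_acc > best_val:
            best_val, best_net = val_acc, net
print('best validation accuracy: %f' % best_val)
```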
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
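For intuition, here is a rough sketch of the kind of loop extract_features performs; the real implementation lives in `cs231n/features.py` and differs in details (preallocation, dimension checks, progress printing), so treat the function name `extract_features_sketch` as a stand-in for illustration only.
```python
import numpy as np

def extract_features_sketch(imgs, feature_fns, verbose=False):
    """Toy stand-in for extract_features: apply each feature function to each
    image and concatenate the results, one row per image."""
    rows = []
    for i, img in enumerate(imgs):
        per_image = [np.asarray(fn(img)).ravel() for fn in feature_fns]
        rows.append(np.concatenate(per_image))
        if verbose and (i + 1) % 1000 == 0:
            print('Done extracting features for %d / %d images' % (i + 1, len(imgs)))
    return np.stack(rows)
```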
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 1e5, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
training_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
#append in results
results[(lr,reg)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?For some classes the classifier appears to rely on the background: for example, a plane flying over water is misclassified as a ship. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
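When tuning the network it also helps to look at the training statistics. The sketch below assumes, as in the standard `TwoLayerNet` from this assignment, that `train` returns a dict with `loss_history`, `train_acc_history`, and `val_acc_history`; if your version returns something else, adapt accordingly. The learning rate and regularization shown are just example values, not tuned results.
```python
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet

net = TwoLayerNet(X_train_feats.shape[1], 500, 10)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=2000, batch_size=200,
                  learning_rate=5e-1, learning_rate_decay=0.95, reg=1e-3)

plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])                        # assumed key, see note above
plt.title('Loss history')
plt.xlabel('Iteration')

plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')    # assumed keys
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.legend()
plt.show()
```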
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1
results = {}
learning_rates = [1e-1, 5e-1]
regularization_strengths = [1e-2, 1e-3,1e-4, 1e-5]
for lr in learning_rates:
for reg in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
loss_hist = net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=lr, reg=reg,
num_iters=2000, verbose=True)
y_train_pred = net.predict(X_train_feats)
y_val_pred = net.predict(X_val_feats)
training_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
#append in results
results[(lr,reg)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.563
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
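For reference, one plausible implementation of the hue histogram is sketched below; the actual `color_histogram_hsv` in `cs231n/features.py` may differ in details such as bin edges and normalization, so treat this only as an illustration of the idea.
```python
import numpy as np
import matplotlib.colors as mcolors

def color_histogram_hsv_sketch(im, nbin=10, xmax=255, normalized=True):
    """Illustrative hue histogram: convert RGB to HSV and histogram the hue channel."""
    hsv = mcolors.rgb_to_hsv(im / xmax) * xmax           # hue channel ends up in [0, xmax]
    bins = np.linspace(0, xmax, nbin + 1)
    hist, bin_edges = np.histogram(hsv[:, :, 0], bins=bins, density=normalized)
    if normalized:
        hist = hist * np.diff(bin_edges)                 # convert density to per-bin fractions
    return hist
```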
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, lr, reg, num_iters=2000)
pred_train = svm.predict(X_train_feats)
train_acc = np.mean(y_train == pred_train)
pred_val = svm.predict(X_val_feats)
val_acc = np.mean(y_val == pred_val)
results[(lr, reg)] = (train_acc, val_acc)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1
best_stats = None
learning_rates = np.logspace(-10, 0, 5) # np.logspace(-10, 10, 8) #-10, -9, -8, -7, -6, -5, -4
regularization_strengths = np.logspace(-3, 5, 5) # causes numeric issues: np.logspace(-5, 5, 8) #[-4, -3, -2, -1, 1, 2, 3, 4, 5, 6]
results = {}
iters = 2000 #100
for lr in learning_rates:
for rs in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=iters, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=rs)
y_train_pred = net.predict(X_train_feats)
acc_train = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (acc_train, acc_val)
if best_val < acc_val:
best_stats = stats
best_val = acc_val
best_net = net
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.103
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for cur_lr in learning_rates: #go over the learning rates
for cur_reg in regularization_strengths:#go over the regularization strength
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=cur_lr, reg=cur_reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
# FIX storing results
results[(cur_lr,cur_reg)] = (train_acc,val_acc)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
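# Illustrative sketch only, not the reference solution: a small random search over
# log-uniform learning rates and regularization strengths (the ranges are guesses).
# The best model is kept in best_net, and also in net so that the evaluation code
# below reports the best model rather than the last one trained.
np.random.seed(0)
best_val = -1
for _ in range(10):
    lr = 10 ** np.random.uniform(-1, 0)     # 0.1 .. 1.0
    reg = 10 ** np.random.uniform(-4, -2)   # 1e-4 .. 1e-2
    candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
    candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                    num_iters=1500, batch_size=200,
                    learning_rate=lr, learning_rate_decay=0.95, reg=reg)
    val_acc = np.mean(candidate.predict(X_val_feats) == y_val)
    if val_acc > best_val:
        best_val, best_net, net = val_acc, candidate, candidate
print('best validation accuracy: %f' % best_val)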
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
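To build intuition for what the HOG features capture, the toy function below histograms gradient orientations (weighted by gradient magnitude) over a whole grayscale image. The real `hog_feature` is considerably more involved (cells, block structure, normalization), so this is only an illustration of the underlying idea, not its implementation.
```python
import numpy as np

def toy_orientation_histogram(gray, nbins=9):
    """Very simplified cousin of HOG: one orientation histogram for the whole image."""
    gy, gx = np.gradient(gray.astype(float))               # image gradients
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation in [0, 180)
    hist, _ = np.histogram(orientation, bins=nbins, range=(0, 180), weights=magnitude)
    return hist / (hist.sum() + 1e-8)                      # normalize to sum to ~1
```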
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
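# Illustrative sketch only, not the reference solution: the same validation loop
# used earlier in the assignment, applied to the feature vectors.
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(y_train == svm.predict(X_train_feats))
        val_acc = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = svm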
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
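# Illustrative sketch only, not the reference solution. A single configuration is
# trained here; the learning rate and regularization are example values in the
# range that worked elsewhere in this notebook, not tuned results.
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val,
          num_iters=2000, batch_size=200,
          learning_rate=5e-1, learning_rate_decay=0.95, reg=1e-3)
best_net = net
print('validation accuracy: %f' % np.mean(net.predict(X_val_feats) == y_val))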
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
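The claim that the two feature types are complementary is easy to check. The sketch below trains the linear SVM on HOG features only, on color histograms only, and on their concatenation, and compares validation accuracy; it reuses `extract_features`, `hog_feature`, `color_histogram_hsv`, and `LinearSVM` as imported in this notebook, applies the same standardization and bias trick as the cell below, and the learning rate and regularization are illustrative values rather than tuned ones. The helper `val_accuracy_for` is introduced here purely for illustration.
```python
import numpy as np
from cs231n.features import extract_features, hog_feature, color_histogram_hsv
from cs231n.classifiers.linear_classifier import LinearSVM

def val_accuracy_for(feature_fns, lr=1e-7, reg=5e4):
    """Train the linear SVM on a given feature set and report validation accuracy."""
    train_feats = extract_features(X_train, feature_fns)
    val_feats = extract_features(X_val, feature_fns)
    mean = train_feats.mean(axis=0, keepdims=True)
    std = train_feats.std(axis=0, keepdims=True)
    train_feats = (train_feats - mean) / std
    val_feats = (val_feats - mean) / std
    train_feats = np.hstack([train_feats, np.ones((train_feats.shape[0], 1))])  # bias trick
    val_feats = np.hstack([val_feats, np.ones((val_feats.shape[0], 1))])
    svm = LinearSVM()
    svm.train(train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
    return np.mean(svm.predict(val_feats) == y_val)

color_fn = lambda img: color_histogram_hsv(img, nbin=10)
print('HOG only    :', val_accuracy_for([hog_feature]))
print('color only  :', val_accuracy_for([color_fn]))
print('HOG + color :', val_accuracy_for([hog_feature, color_fn]))
```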
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
X_train.shape
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7, 2e-7]
regularization_strengths = [2e4, 5e4,1e5, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for learning_rate in learning_rates:
    for regularization_strength in regularization_strengths:
        model = LinearSVM()
        loss_history = model.train(X_train_feats, y_train,
                                   learning_rate=learning_rate, reg=regularization_strength,
                                   num_iters=2000, verbose=False)
        y_train_pred = model.predict(X_train_feats)
        train_acc = np.mean(y_train_pred == y_train)
        y_val_pred = model.predict(X_val_feats)
        val_acc = np.mean(y_val == y_val_pred)
        results[(learning_rate, regularization_strength)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = model
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ The misclassified images share features such as contours or colors with the templates learned for the predicted class. At first glance the mistakes do not make sense, but the misclassified examples do have features that make them look similar to the predicted labels. For example, the last 'frog' is actually a cat, yet its contour and color are similar to those of the frogs. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None # store the best model into this
best_loss_his = None
results = {}
best_val = -1
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
learning_rates = [1e-1, 5e-1]
regularization_strengths = [3e-4, 1e-3, 3e-3]
for learning_rate in learning_rates:
    for reg in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        loss_history = net.train(X_train_feats, y_train, X_val_feats, y_val,
                                 num_iters=3000, batch_size=500,
                                 learning_rate=learning_rate, learning_rate_decay=0.95,
                                 reg=reg, verbose=True)
        train_acc = np.mean(net.predict(X_train_feats) == y_train)
        val_acc = np.mean(y_val == net.predict(X_val_feats))
        results[(learning_rate, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_net = net
            best_loss_his = loss_history
            print(best_val)
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.587
###Markdown
--- IMPORTANTThis is the end of this question. Please do the following:1. Click `File -> Save` to make sure the latest checkpoint of this notebook is saved to your Drive.2. Execute the cell below to download the modified `.py` files back to your drive.
###Code
import os
FOLDER_TO_SAVE = os.path.join('drive/My Drive/', FOLDERNAME)
FILES_TO_SAVE = []
for files in FILES_TO_SAVE:
with open(os.path.join(FOLDER_TO_SAVE, '/'.join(files.split('/')[1:])), 'w') as f:
f.write(''.join(open(files).readlines()))
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your interests.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
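As a rough, hypothetical sketch of what such a helper does (this is not the actual `extract_features` implementation in `cs231n.features`): each feature function maps a single H x W x 3 image to a 1-D vector, and the per-image vectors are concatenated into one row per image, matching how `X_train_feats` is used below.

```python
import numpy as np

def extract_features_sketch(imgs, feature_fns):
    """Sketch only: apply each feature function to every image and
    concatenate the resulting vectors into one row per image."""
    feats = []
    for img in imgs:
        per_image = [np.asarray(fn(img)).ravel() for fn in feature_fns]
        feats.append(np.concatenate(per_image))
    return np.vstack(feats)  # shape: (num_images, total_feature_dim)
```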
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
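The TODO in the next cell also suggests varying the number of bins in the color histogram. A minimal sketch of that outer loop, using hypothetical names (`candidate_bins`, `feats_by_nbin`) and assuming `extract_features`, `hog_feature` and `color_histogram_hsv` are imported as above; each re-extracted feature set would still need the same mean/std normalization and bias column before running the SVM search:

```python
# Sketch only: re-extract features for a few candidate bin counts.
candidate_bins = [5, 10, 15, 25]   # hypothetical values to try
feats_by_nbin = {}
for nbin in candidate_bins:
    fns = [hog_feature, lambda img, n=nbin: color_histogram_hsv(img, nbin=n)]
    feats_by_nbin[nbin] = {
        'train': extract_features(X_train, fns, verbose=False),
        'val': extract_features(X_val, fns),
        'test': extract_features(X_test, fns),
    }
```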
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=learning_rate,
reg=regularization_strength, num_iters=1500)
y_train_pred = svm.predict(X_train_feats)
training_accuracy = (y_train == y_train_pred).mean()
y_val_pred = svm.predict(X_val_feats)
validation_accuracy = (y_val == y_val_pred).mean()
results[(learning_rate, regularization_strength)] = training_accuracy, validation_accuracy
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?Because we use color-histogram features, the misclassified images tend to have color distributions similar to those of the training examples of the class they were assigned to. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Your code
learning_rates = [0.1, 0.3, 1, 3]
regularization_strengths = [5e-5, 5e-4, 5e-3]
results = {}
best_val = -1
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=1500, batch_size=200,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=regularization_strength)
# Predict on the validation set
y_train_pred = net.predict(X_train_feats)
training_accuracy = (y_train == y_train_pred).mean()
y_val_pred = net.predict(X_val_feats)
validation_accuracy = (y_val == y_val_pred).mean()
results[(learning_rate, regularization_strength)] = training_accuracy, validation_accuracy
if validation_accuracy > best_val:
best_val = validation_accuracy
best_net = net
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.559
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
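To make the color-histogram half of the feature vector concrete, the hue-histogram idea can be sketched in a few lines (an illustration only, not the `color_histogram_hsv` implementation from `cs231n.features`; it assumes the input is an RGB image with values in 0-255):

```python
import numpy as np
import matplotlib.colors as mcolors

def hue_histogram_sketch(img, nbin=10):
    """Illustration: histogram of the hue channel of an RGB image,
    ignoring spatial layout and texture."""
    rgb = np.asarray(img, dtype=np.float64) / 255.0   # scale to [0, 1]
    hue = mcolors.rgb_to_hsv(rgb)[..., 0]             # hue lies in [0, 1]
    hist, _ = np.histogram(hue.ravel(), bins=nbin, range=(0.0, 1.0))
    return hist / hue.size                            # normalized histogram
```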
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=5000, verbose=False)
        y_train_pred = svm.predict(X_train_feats)
        y_val_pred = svm.predict(X_val_feats)
        train_accuracy = np.mean(y_train_pred == y_train)
        val_accuracy = np.mean(y_val_pred == y_val)
        print(train_accuracy, val_accuracy)
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
maxn = 20
best_val = 0
for _ in range(maxn):
    # Keep training the same network for another 1000 iterations and track
    # the best validation accuracy seen so far.
    stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                      learning_rate=1e-2, learning_rate_decay=0.95,
                      reg=1e-5, num_iters=1000,
                      batch_size=200, verbose=False)
    acc_val = np.mean(net.predict(X_val_feats) == y_val)
    print(acc_val)
    if acc_val > best_val:
        best_val = acc_val
        best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.564
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](https://deep-learning-su.github.io/assignment-requirements/) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function

import random
import numpy as np
from deep_learning_su.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from deep_learning_su.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'deep_learning_su/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from deep_learning_su.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from deep_learning_su.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for l_rate in learning_rates:
for r_strength in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=l_rate, reg=r_strength,
num_iters=10000, verbose=False)
train_predicts = svm.predict(X_train_feats)
train_acc = np.mean(y_train == train_predicts)
val_predicts = svm.predict(X_val_feats)
val_acc = np.mean(y_val == val_predicts)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
results[(l_rate, r_strength)] = [train_acc, val_acc]
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from deep_learning_su.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
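# Note: the ranges below, together with num_iters=100 in the train() call,
# are closer to what the raw-pixel experiments use. For the normalized HOG +
# color-histogram features, learning rates around 1e-1 and a few thousand
# iterations typically work much better, which is consistent with the low
# test accuracy printed for this cell.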
learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]
best_score = 0
for learning_rate in learning_rates:
for reg in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val,
learning_rate=learning_rate, reg=reg, num_iters=100, verbose=True)
train_acc = np.mean(net.predict(X_train_feats)==y_train)
val_acc = np.mean(net.predict(X_val_feats)==y_val)
if best_score < val_acc:
best_score = val_acc
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.089
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
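The HOG half of the feature vector can be illustrated with a stripped-down gradient-orientation histogram. This is only the core idea, under the assumption of a 2-D grayscale input; the real `hog_feature` additionally pools orientations over spatial cells:

```python
import numpy as np

def orientation_histogram_sketch(gray, nbins=9):
    """Core idea behind HOG: histogram of gradient orientations,
    weighted by gradient magnitude, with no spatial cell pooling."""
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    orientation = np.arctan2(gy, gx) % np.pi          # unsigned, in [0, pi)
    hist, _ = np.histogram(orientation, bins=nbins, range=(0.0, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-8)                 # scale-normalized
```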
###Code
from cs231n.features import *
num_color_bins = 40 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
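Once the grid search in the next cell has filled `results` (a dict mapping `(lr, reg)` to `(train_acc, val_acc)`), a small helper like the hypothetical one below can make the sweep easier to read than the printed list:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_val_accuracy_grid(results):
    """Scatter the (learning rate, regularization) grid on log axes,
    colored by validation accuracy."""
    lrs = [lr for lr, _ in results]
    regs = [reg for _, reg in results]
    val_accs = [results[key][1] for key in results]
    plt.figure(figsize=(6, 4))
    sc = plt.scatter(np.log10(lrs), np.log10(regs), c=val_accs, s=80, cmap='viridis')
    plt.colorbar(sc, label='validation accuracy')
    plt.xlabel('log10(learning rate)')
    plt.ylabel('log10(regularization strength)')
    plt.show()

# Example, after the next cell has run: plot_val_accuracy_grid(results)
```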
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-3, 1e-2]
regularization_strengths = [0.001, 0.01, 0.1, 1]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs,
num_iters=1500)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
training_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:In the above figure, we can see that the classifier often gets confused among vehicle classes (plane, car, ship and truck) or animal classes (bird, cat, deer, dog, frog, horse)This makes sense as the HOG features capture the texture of the image, and the confused classes share similar textures. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
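The vehicle/animal confusion described above can also be checked quantitatively with a small confusion matrix, assuming `y_test` and `y_test_pred` from the SVM evaluation above are still in scope (a sketch with a hypothetical helper, not part of the assignment code):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=10):
    """cm[i, j] counts test images whose true class is i and predicted class is j."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Example usage:
# cm = confusion_matrix(y_test, y_test_pred)
# print(cm)   # rows: true class, columns: predicted class
```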
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
from itertools import product
best_val = -1
# Hyperparameters
hidden_sizes = [500]
learning_rates = [1e-1, 5e-1, 7e-1]
regs = [1e-3, 1e-4, 1e-5, 1e-6]
lr_decays = [0.99]
hyperparameters = [hidden_sizes, learning_rates, regs, lr_decays]
for (hidden_dim, lr, reg, lr_decay) in product(*hyperparameters):
print("hidden_size=%d, lr=%f, reg=%f, lr_decay=%f" % (hidden_dim, lr, reg, lr_decay))
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2000, batch_size=200,
learning_rate=lr, learning_rate_decay=lr_decay,
reg=reg, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print("Validation accuracy: %f" % (val_acc))
if val_acc > best_val:
print("New best model!")
best_val = val_acc
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.572
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
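As a reminder (assuming the usual formulation from the earlier SVM exercise, with margin $\Delta = 1$ and L2 regularization strength $\lambda$), the loss that `LinearSVM` minimizes over the class scores $s = x_i W$ is:

$$L = \frac{1}{N}\sum_{i}\sum_{j \neq y_i} \max\left(0,\ s_j - s_{y_i} + \Delta\right) \;+\; \lambda \sum_{k}\sum_{l} W_{k,l}^2$$

The grid search below simply trains one such classifier per (learning rate, regularization strength) pair and keeps the model with the best validation accuracy.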
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
import itertools
lr_reg_pairs = list(itertools.product(learning_rates, regularization_strengths))
num_of_iters = len(lr_reg_pairs)
for i, (lr, reg) in enumerate(lr_reg_pairs):
print('Calculating the model #{}/{} with:'.format(i+1, num_of_iters))
print(' - learning rate {}'.format(lr))
print(' - regularization strength {}'.format(reg))
svm = LinearSVM()
svm.train(X_train_feats, y_train,
learning_rate=lr, reg=reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
training_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
valid_accuracy = np.mean(y_val == y_val_pred)
if valid_accuracy > best_val:
best_val = valid_accuracy
best_svm = svm
results[(lr, reg)] = (training_accuracy, valid_accuracy)
print('Model #{} accuracy: {:.4f}'.format(i+1, valid_accuracy))
print('Best model accuracy: {:.4f}\n'.format(best_val))
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
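# Concretely, np.where((y_test != cls) & (y_test_pred == cls)) below selects the false
# positives for class `cls`: test images predicted as `cls` whose true label differs.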
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?Yes, they do: most of the misclassified examples share the dominant colors or coarse shapes of the predicted class, which is exactly the information the HOG and hue-histogram features capture. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 250
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=4000, batch_size=200,
learning_rate=0.5, learning_rate_decay=0.9,
reg=0.00025, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
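For intuition, a hue-only color histogram of the kind `color_histogram_hsv` computes can be sketched roughly as follows (a hypothetical version; the course's implementation may differ in scaling and normalization details):

```python
import numpy as np
import matplotlib.colors

def color_histogram_hsv_sketch(im, nbin=10, xmin=0, xmax=255, normalized=True):
    """Hypothetical sketch: histogram over the hue channel of an RGB image."""
    bins = np.linspace(xmin, xmax, nbin + 1)
    # Convert RGB (0..255) to HSV; rgb_to_hsv expects values in [0, 1].
    hsv = matplotlib.colors.rgb_to_hsv(im / float(xmax)) * xmax
    # Bin only the hue channel, ignoring saturation and value.
    hist, bin_edges = np.histogram(hsv[:, :, 0], bins=bins, density=normalized)
    return hist * np.diff(bin_edges)  # shape (nbin,), one value per hue bin
```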
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        print('Trying (%e, %e)...' % (lr, reg))
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500)
        # Record real train/val accuracies so the printout below is meaningful
        train_accuracy = np.mean(y_train == svm.predict(X_train_feats))
        val_accuracy = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
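The `TwoLayerNet` used below (from `cs231n.classifiers.neural_net`) scores the feature vectors with an affine-ReLU-affine function. A minimal sketch of that forward pass, assuming weight matrices `W1, W2` and bias vectors `b1, b2` like those stored in `net.params`:

```python
import numpy as np

def two_layer_scores_sketch(X, W1, b1, W2, b2):
    """Hypothetical sketch of the forward pass: affine -> ReLU -> affine."""
    hidden = np.maximum(0, X.dot(W1) + b1)  # (N, hidden_dim), ReLU nonlinearity
    scores = hidden.dot(W2) + b2            # (N, num_classes), unnormalized class scores
    return scores
```

Training then applies a softmax loss with L2 regularization to these scores and updates the parameters with decayed SGD, which is what the `net.train` calls below drive.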
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = [1]
regularization_strengths = [5e-4]
best_acc = 0
for lr in learning_rates:
for reg in regularization_strengths:
print({'learning_rate': lr, 'reg': reg})
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1500, batch_size=2000,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
if val_acc > best_acc:
best_acc = val_acc
best_net = net
print('Validation accuracy: ', val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
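If scikit-image happens to be installed, a similar HOG feature can be sketched with its built-in extractor. This is an assumption-laden alternative, not the course's `hog_feature`, whose cell and orientation settings may differ:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def hog_feature_skimage(im):
    """Hypothetical HOG extractor built on scikit-image."""
    gray = rgb2gray(np.asarray(im, dtype=np.float64) / 255.0)  # HOG ignores color
    return hog(gray,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               feature_vector=True)

# It could, for example, be dropped into the feature list used below:
# feature_fns = [hog_feature_skimage, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
```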
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm_cls = LinearSVM()
svm_cls.train(X_train_feats, y_train, lr, reg)
train_pred = svm_cls.predict(X_train_feats)
val_pred = svm_cls.predict(X_val_feats)
train_acc = np.mean(train_pred == y_train)
val_acc = np.mean(val_pred == y_val)
if val_acc > best_val:
best_val = val_acc
best_svm = svm_cls
results[(lr, reg)] = (train_acc, val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=True)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
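# (the constant-1 bias column was only needed for the linear SVM; TwoLayerNet learns
#  its own bias parameters b1 and b2, so we strip it before training the network)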
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
best_val = -1
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Your code
results = {}
learning_rates = [9e-1, 8e-1]
regularization_strengths = [5e-7, 5e-6]
for lr in learning_rates:
for reg in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=1000,
batch_size=200, learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
train_pred = net.predict(X_train_feats)
val_pred = net.predict(X_val_feats)
train_acc = np.mean(train_pred == y_train)
val_acc = np.mean(val_pred == y_val)
if val_acc > best_val:
best_val = val_acc
best_net = net
results[(lr, reg)] = (train_acc, val_acc)
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.573
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=False)
        train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
        val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# One reasonable setting; a wider hyperparameter search could push accuracy higher.
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=1500, batch_size=200,
                  learning_rate=0.5, learning_rate_decay=0.95,
                  reg=1e-3, verbose=False)
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import cupy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
best_com = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
    for rs in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs,
                  num_iters=1000, verbose=True)
        train_pred = svm.predict(X_train_feats)
        train_acc = np.mean(train_pred == y_train)
        val_pred = svm.predict(X_val_feats)
        val_acc = np.mean(val_pred == y_val)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = svm
            best_com = (lr, rs)
        results[(lr, rs)] = (train_acc, val_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print("Best Combo :", best_com)
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
learning_rates = [0.5]
regularization_strengths = [0.001]
# learning_rates = [1e-5 ,1e-4, 5e-1, 1, 5]
# regularization_strengths = [1e-3, 5e-3, 1e-2, 1e-1, 0.5, 1]
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_val = -1
best_net = None
best_com = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
results = {}  # start fresh; `results` still holds (train, val) tuples from the SVM cell above
for lr in learning_rates:
    for rs in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=1500, batch_size=400,
                  learning_rate=lr, learning_rate_decay=0.95,
                  reg=rs, verbose=True)
        val_acc = np.mean(net.predict(X_val_feats) == y_val)
        if val_acc > best_val:
            best_val = val_acc
            best_net = net
            best_com = (lr, rs)
        results[(lr, rs)] = val_acc  # key on (lr, rs); `reg` was a stale name from the SVM cell
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
val_acc = results[(lr, reg)]
print ('lr %e reg %e val accuracy: %f' % (lr, reg, val_acc))
print(best_com)
print ('best validation accuracy achieved during cross-validation: %f' % best_val)
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.112
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 100 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-7, 2e-7, 3e-7]
regularization_strengths = [2.5e4, 1.5e4, 3.5e4]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
model = LinearSVM()
model.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=2500)
train_predict = model.predict(X_train_feats)
train_acc = np.mean(train_predict == y_train)
val_predict = model.predict(X_val_feats)
val_acc = np.mean(val_predict == y_val)
results[(lr, reg)] = (train_acc, val_acc)
if val_acc > best_val:
best_val = val_acc
best_svm = model
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
best_val = -1
learning_rates_list = [1e-3, 1e-4, 1e-5]
regularization_strengths_list = [1e1, 1e2, 1e3]
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
for lr in learning_rates_list:
    for reg in regularization_strengths_list:
        # Re-initialize the network for each hyperparameter setting so runs do not
        # keep training the same weights; keep the model with the best validation accuracy.
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=lr,
                  reg=reg, num_iters=1000)
        predict_label = net.predict(X_val_feats)
        predict_acc = np.mean(predict_label == y_val)
        if predict_acc > best_val:
            best_val = predict_acc
            best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.09
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
(49000, 155)
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for i in learning_rates:
for j in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=i, reg=j,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_accuracy = np.mean(y_train == y_train_pred)
val_accuracy = np.mean(y_val == y_val_pred)
results[(i, j)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? The results make sense for this setting, which uses HOG and a colour histogram as features. These descriptors capture only local gradient orientation and overall colour, not local patterns, so it is easy to misclassify images containing objects with a similar orientation to a plane and a sky-like background colour. For example, in the 2nd row of the plane class a ship is misclassified as a plane; it is easy to imagine that its shape has a similar orientation, and its background a similar colour, to a plane. Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1
best_stats = None
hidden_size = [i * 10 for i in range (7, 9)]
learning_rates = [0.1 * (i ** 2) for i in range (1, 5)]
reg = [0.001, 0.005, 0.01, 0.05]
batch_size = [1000, 1500]
results = {}
for hid in hidden_size:
for lr in learning_rates:
for bat in batch_size:
for re in reg:
net = TwoLayerNet(input_dim, hid, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=bat,
learning_rate=lr, learning_rate_decay=0.95,
reg=re, verbose=False)
acc_train = np.mean(y_train == net.predict(X_train_feats))
acc_val = np.mean(y_val == net.predict(X_val_feats))
results[(hid, lr, bat, re)] = (acc_train, acc_val)
if acc_val > best_val:
best_val = acc_val
best_stats = stats
best_net = net
# print out results
for hid, lr, bat, re in sorted(results):
train_accuracy, val_accuracy = results[(hid, lr, bat, re)]
print ('hid=%e lr=%e bat=%e reg=%e train accuracy: %f val accuracy: %f' % (hid, lr, bat, re, train_accuracy, val_accuracy))
print ('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(best_stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(best_stats['train_acc_history'], label='train')
plt.plot(best_stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.561
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
num_learning_rates = len(learning_rates)
num_regularization = len(regularization_strengths)
for i in range(num_learning_rates):
for j in range(num_regularization):
lr = learning_rates[i]
reg = regularization_strengths[j]
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500,verbose=False)
train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
results[(lr, reg)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape,X_train.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val_acc = -1
#-------------------------------------------------|
# Train with SGD |
# |
# the best test_accuracy comes when |
# learning_rate=0.691831 |
# batch_size=300 |
# reg=1.930698e-05 |
# and |
# train_acc: 0.682735 |
# val_acc: 0.607000 |
# test_acc: 0.578 |
#-------------------------------------------------|
learning_rates = np.logspace(-0.16, -0.02, num=8)
batch_sizes = [300]
regs = np.logspace(-6,-3, num=8)
#-------------------------------------------------|
# END OF SGD |
#-------------------------------------------------|
#-------------------------------------------------|
# Train with SGD + dropout |
# |
# the best test accuracy comes when |
# learning_rate=0.558042, |
# batch_size=300, |
# reg=6.309573e-06 |
# dropout_ratio=0.5 |
# and |
# train_acc: 0.652673 |
# val_acc: 0.607000 |
# test_acc: 0.576 |
#-------------------------------------------------|
#learning_rates = np.logspace(-1, 0.6, num=16) #narrower the region further
#batch_sizes = [300]
#regs = np.logspace(-8,-1, num=16)
#-------------------------------------------------|
# END OF ADAM + dropout |
#-------------------------------------------------|
#-------------------------------------------------|
# Train with Adam |
# |
# the best test_accuracy comes when |
# learning_rate=0.001585 |
# batch_size=300 |
# reg=1.847850e-05 |
# and |
# train_acc: 0.704898 |
# val_acc: 0.611000 |
# test_acc: 0.585 |
#-------------------------------------------------|
#learning_rates = np.logspace(-4, -1, num=16) #narrower the region further
#batch_sizes = [300]
#regs = np.logspace(-8,-1, num=16)
#regs = [1e-3]
#-------------------------------------------------|
# END OF ADAM |
#-------------------------------------------------|
for j in range(len(learning_rates)):
for a in range(len(batch_sizes)):
for b in range(len(regs)):
l_r = learning_rates[j]
b_s = batch_sizes[a]
reg = regs[b]
net = TwoLayerNet(input_dim, 1000, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=b_s,
learning_rate=l_r, learning_rate_decay=0.95,
reg=reg, verbose=False, d_r=0.5)
train_acc = (net.predict(X_train_feats) == y_train).mean()
val_acc = (net.predict(X_val_feats) == y_val).mean()
if(val_acc > best_val_acc):
best_val_acc = val_acc
best_net = net
print('learning_rate: %f, batch_size: %f, reg: %e, train_acc: %f, val_acc: %f, best_acc: %f' %
(l_r, b_s, reg, train_acc, val_acc, best_val_acc))
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.576
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
for l in learning_rates:
for r in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=l, reg=r, num_iters=1500, batch_size=200)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
training_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(l, r)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
best_val = -1
best_stats = None
learning_rates = np.logspace(-10, 0, 5) # np.logspace(-10, 10, 8) #-10, -9, -8, -7, -6, -5, -4
regularization_strengths = np.logspace(-3, 5, 5) # causes numeric issues: np.logspace(-5, 5, 8) #[-4, -3, -2, -1, 1, 2, 3, 4, 5, 6]
results = {}
iters = 2000 #100
for lr in learning_rates:
for rs in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=iters, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=rs)
y_train_pred = net.predict(X_train_feats)
acc_train = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (acc_train, acc_val)
if best_val < acc_val:
best_stats = stats
best_val = acc_val
best_net = net
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest (a rough comparison sketch follows below). The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
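One way to check that assumption is sketched below: train the same linear SVM on HOG-only, color-only, and combined features and compare validation accuracies. This is only an outline; it assumes the data arrays and cs231n helpers loaded in this notebook, and the learning rate, regularization strength, and bin count are placeholder values rather than tuned ones.

```python
import numpy as np
from cs231n.features import hog_feature, color_histogram_hsv, extract_features
from cs231n.classifiers.linear_classifier import LinearSVM

feature_sets = {
    'hog_only':   [hog_feature],
    'color_only': [lambda img: color_histogram_hsv(img, nbin=10)],
    'combined':   [hog_feature, lambda img: color_histogram_hsv(img, nbin=10)],
}

for name, fns in feature_sets.items():
    tr = extract_features(X_train, fns)
    va = extract_features(X_val, fns)
    # Standardize with training-set statistics, then append a bias column.
    mu = tr.mean(axis=0, keepdims=True)
    sd = tr.std(axis=0, keepdims=True) + 1e-8
    tr, va = (tr - mu) / sd, (va - mu) / sd
    tr = np.hstack([tr, np.ones((tr.shape[0], 1))])
    va = np.hstack([va, np.ones((va.shape[0], 1))])
    svm = LinearSVM()
    svm.train(tr, y_train, learning_rate=1e-7, reg=5e4, num_iters=1500)
    print(name, 'val accuracy:', np.mean(svm.predict(va) == y_val))
```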
###Code
from cs231n.features import *
num_color_bins = 50 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 2e-8, 5e-8, 1e-7]
regularization_strengths = [1e4, 2e4, 5e4, 1e5, 2e5, 5e5, 1e6, 2e6, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
hyperparams = [(lr, rg) for lr in learning_rates for rg in regularization_strengths]
for lr, rg in hyperparams:
svm = LinearSVM()
tr_loss = svm.train(X_train_feats, y_train, learning_rate = lr, reg = rg,
num_iters=1500, verbose=True)
y_tr_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
tr_acc = np.mean(y_tr_pred == y_train)
val_acc = np.mean(y_val_pred == y_val)
if(val_acc > best_val):
best_val = val_acc
best_svm = svm
results[(lr, rg)] = (tr_acc, val_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? $\color{blue}{\textit Your Answer:}$ For some classes they make sense. For instance, cats and dogs have a similar appearance in terms of edges: both have four legs and a tail, so images from these classes can be mixed up, and horses and deer can be included here as well. The case of cars and trucks is similar. The plane class is interesting because its misclassified examples share similar colors; they are commonly mixed up with ship images since sky and sea colors are very similar. However, some classes do not make sense at all, such as the bird class, where images of dogs, cats, planes and others are misclassified; we can assume these were classified as birds mainly because of the color features. In conclusion, the combination of HOG and color histogram feature vectors is not enough to discriminate all the classes correctly. The HOG descriptor is very useful because it takes edges into account, but we also have to consider types of invariance that HOG might not capture, e.g., invariance to translation. Besides that, HOG has several parameters that we could cross-validate for better performance. Color histogram features help in some cases (similar colors) but are not very helpful in others, so we could still keep them, but perhaps with a lower weight when combining them with other features like HOG. Finally, we could use other types of descriptors to improve the accuracy of our model, e.g., SIFT or LBP (texture). Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
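As a rough illustration of the kind of extra texture descriptor mentioned above, one could add an LBP histogram as another feature function. The sketch below uses scikit-image (assumed to be available; it is not part of the cs231n starter code), and the parameter choices (`P`, `R`, number of bins) are arbitrary rather than tuned.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def lbp_histogram(img, P=8, R=1.0, nbin=10):
    # Uniform LBP codes over the grayscale image, summarized as a normalized histogram.
    gray = rgb2gray(img.astype(np.float64) / 255.0)
    codes = local_binary_pattern(gray, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=nbin, range=(0, P + 2), density=True)
    return hist

# This could then be appended to the feature list fed to extract_features, e.g.
# feature_fns = [hog_feature,
#                lambda img: color_histogram_hsv(img, nbin=num_color_bins),
#                lbp_histogram]
```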
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
best_val = -1.0
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
def generate_random_hyperparams(lr_min, lr_max, reg_min, reg_max, h_min, h_max):
lr = 10**np.random.uniform(lr_min,lr_max)
reg = 10**np.random.uniform(reg_min,reg_max)
hidden = np.random.randint(h_min, h_max)
return lr, reg, hidden
for i in range(20):
lr, rg, hidden_dim = generate_random_hyperparams(-1, 0, -7, -4, 10, 500)
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=3000, batch_size = 200,
learning_rate=lr, learning_rate_decay=0.95,
reg = rg, verbose = False)
tr_acc = np.mean(net.predict(X_train_feats) == y_train)
val_acc = np.mean(net.predict(X_val_feats) == y_val)
if val_acc > best_val:
best_val = val_acc
best_net = net
print('lr %e reg %e hid %d train accuracy: %f val accuracy: %f' % (
lr, rg, hidden_dim, tr_acc, val_acc))
print('best validation accuracy achieved: %f' % best_val)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.594
###Markdown
--- IMPORTANTThis is the end of this question. Please do the following:1. Click `File -> Save` to make sure the latest checkpoint of this notebook is saved to your Drive.2. Execute the cell below to download the modified `.py` files back to your drive.
###Code
import os
FOLDER_TO_SAVE = os.path.join('drive/My Drive/', FOLDERNAME)
FILES_TO_SAVE = []
for files in FILES_TO_SAVE:
with open(os.path.join(FOLDER_TO_SAVE, '/'.join(files.split('/')[1:])), 'w') as f:
f.write(''.join(open(files).readlines()))
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=2000, verbose=False)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_acc = np.mean(y_train == y_train_pred)
val_acc = np.mean(y_val == y_val_pred)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
results[(lr,reg)] = (train_acc,val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?No, most of the misclassifications do not make sense; some of the confused classes do not even look similar in shape, for example a plane and a horse. This could be due to relying on the HOG feature alone, which only considers the gradients of the image. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
results={}
learning_rates = np.linspace(3,0.1,num=15)
regularization_strengths = np.logspace(-9,-3,num=14)
best_val = -1
for lr in learning_rates:
    for reg in regularization_strengths:
        # Re-initialize the network for each hyperparameter setting so every run
        # starts from fresh weights instead of continuing the previous training.
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        # Train the network
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=1500,
batch_size=200, learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
y_train_pred = net.predict(X_train_feats)
y_val_pred = net.predict(X_val_feats)
train_acc = np.mean(y_train == y_train_pred)
val_acc = np.mean(y_val == y_val_pred)
if val_acc > best_val:
best_val = val_acc
best_net = net
results[(lr,reg)] = (train_acc,val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
net=best_net
#################################################################################
# END OF YOUR CODE #
#################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.582
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
def getFeatureData(num_color_bins = 10):
# Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
return X_train_feats, X_val_feats, X_test_feats
X_train_feats, X_val_feats, X_test_feats = getFeatureData()
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
import itertools, copy
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr, reg in itertools.product(learning_rates, regularization_strengths):  # the color-bin count is tuned separately in the next block
svm = LinearSVM()
svm.train(X_train_feats, y_train,learning_rate=lr, reg=reg,
num_iters=1500, verbose=True)
train_acc = (svm.predict(X_train_feats)==y_train).mean()
val_acc = (svm.predict(X_val_feats)==y_val).mean()
results[(lr,reg)] = (train_acc, val_acc)
if val_acc>best_val:
best_val =val_acc
best_svm = copy.deepcopy(svm)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# tune num_color_bin
best_val = -1
for nbin in range(10, 20):
X_train_feats, X_val_feats, X_test_feats = getFeatureData(nbin)
svm = LinearSVM()
svm.train(X_train_feats, y_train,learning_rate=1e-7, reg=1e6,
num_iters=1500, verbose=True)
train_acc = (svm.predict(X_train_feats)==y_train).mean()
val_acc = (svm.predict(X_val_feats)==y_val).mean()
    results[nbin] = (train_acc, val_acc)  # key by the number of color bins being tuned here
if val_acc>best_val:
best_val =val_acc
best_svm = copy.deepcopy(svm)
best_nbin = nbin
print(best_val, best_nbin)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
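# NOTE: a minimal tuning sketch added for illustration, not the original author's
# solution. Each hyperparameter setting trains a freshly initialized TwoLayerNet;
# the best model on the validation set is kept in best_net and copied back into
# net so the test-set evaluation below uses it. The learning-rate and
# regularization ranges here are assumptions, not tuned values.
best_val = -1
for lr in [1e-1, 5e-1, 1.0]:
    for reg in [1e-4, 1e-3, 1e-2]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=1500, batch_size=200,
                        learning_rate=lr, learning_rate_decay=0.95,
                        reg=reg, verbose=False)
        val_acc = (candidate.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val:
            best_val = val_acc
            best_net = candidate
net = best_net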
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your interests.The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X=X_train_feats,y=y_train,learning_rate=lr, reg=reg,
num_iters=1500)
y_pre_val = svm.predict(X_val_feats)
y_pre_train = svm.predict(X_train_feats)
svm_val = np.mean(y_pre_val == y_val)
svm_train = np.mean(y_pre_train == y_train)
if svm_val > best_val:
best_val = svm_val
best_svm = svm
results[(lr,reg)] = (svm_train,svm_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
import warnings
warnings.filterwarnings('ignore')
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = [1e-1, 5e-1, 7e-1]
regularization_strengths = [1e-4,5e-4,1e-3, 3e-3,5e-3]
best_val = -1
best_net = None
for lr in learning_rates:
for reg in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
val_acc = np.mean(net.predict(X_val_feats) == y_val)
if val_acc > best_val:
best_val = val_acc
best_net = net
        print('lr %e reg %e val accuracy: %f' % (lr, reg, val_acc))
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.579
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
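# NOTE: a minimal grid-search sketch added for illustration, not the original
# author's solution. It mirrors the SVM validation loops used elsewhere in this
# assignment: train one LinearSVM per (learning rate, regularization) pair over
# the lists defined above and keep the best validation model in best_svm.
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=False)
        train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
        val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm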
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
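# NOTE: a minimal tuning sketch added for illustration, not the original author's
# solution. Each hyperparameter setting gets a freshly initialized TwoLayerNet and
# the best model on the validation set is stored in best_net, which the evaluation
# code below expects. The ranges here are assumptions, not tuned values.
best_val = -1
for lr in [1e-1, 5e-1, 1.0]:
    for reg in [1e-4, 1e-3, 1e-2]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=1500, batch_size=200,
                        learning_rate=lr, learning_rate_decay=0.95,
                        reg=reg, verbose=False)
        val_acc = (candidate.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val:
            best_val = val_acc
            best_net = candidate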
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
/opt/conda/envs/python2/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
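# NOTE: a minimal grid-search sketch added for illustration, not the original
# author's solution. Train one LinearSVM per (learning rate, regularization) pair
# from the lists defined above, record accuracies in results, and keep the model
# with the best validation accuracy in best_svm for the evaluation below.
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=False)
        train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
        val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm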
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
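# One possible way to fill in the TODO above -- a minimal sketch with illustrative
# (untuned) hyperparameter values. It assumes the TwoLayerNet.train / predict API
# used elsewhere in this assignment and the feature matrices from the cells above.
best_val_acc = -1
for lr in [1e-1, 5e-1]:
    for reg in [1e-4, 1e-3]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=1500, batch_size=200,
                        learning_rate=lr, learning_rate_decay=0.95,
                        reg=reg, verbose=False)
        val_acc = (candidate.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val_acc:
            best_val_acc = val_acc
            best_net = candidate
net = best_net  # so the test cell below evaluates the selected model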
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
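As a quick aside, here is a minimal sketch of how the HOG-alone vs. color-alone comparison suggested above could be set up, assuming the `extract_features`, `hog_feature`, and `color_histogram_hsv` helpers behave as used in the cell below; training a classifier on each feature set would then follow the same recipe as the rest of this notebook.
```python
from cs231n.features import extract_features, hog_feature, color_histogram_hsv

# Compare HOG-only, color-only, and combined features on a small subset,
# so the same classifier can later be trained on each feature set.
subset = X_train[:5000]   # small slice keeps the ablation cheap
feature_sets = {
    'hog only':    [hog_feature],
    'color only':  [lambda img: color_histogram_hsv(img, nbin=10)],
    'hog + color': [hog_feature, lambda img: color_histogram_hsv(img, nbin=10)],
}
for name, fns in feature_sets.items():
    feats = extract_features(subset, fns)
    print(name, 'feature shape:', feats.shape)
```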
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [8.6e-9, 4e-9, 8.6e-10]
regularization_strengths = [(6 + 0.2 * i) * 1e6 for i in range(-2, 2)]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for reg in regularization_strengths:
    for lr in learning_rates:
        svm = LinearSVM()  # train a fresh classifier for every (lr, reg) setting
        loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                              num_iters=900, verbose=False)
        tr_acc = np.mean(y_train == svm.predict(X_train_feats))
        val_acc = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (tr_acc, val_acc)
        if val_acc > best_val:
            best_svm = svm
            best_val = val_acc
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?**answer:** *The misclassifications make sense: as shown above, images with similar color distributions tend to be predicted as the same class, which is why a bird against the sky looks like a plane against the sky. Richer features would be needed to fix these errors.* Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1
learning_rates = [0.45, 0.5, 0.55]
regularization_strengths = [9e-4, 1e-3, 2e-3]
for reg in regularization_strengths:
    for lr in learning_rates:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=1500, batch_size=200,
                          learning_rate=lr, learning_rate_decay=0.95,
                          reg=reg, verbose=False)
        val_acc = np.mean(y_val == net.predict(X_val_feats))
        if val_acc > best_val:
            best_net = net
            best_val = val_acc
        results[(lr, reg)] = float(val_acc)
        print(lr, reg, val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
print(best_val)
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.577
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs, num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
training_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(lr,rs)]= (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:In the first column you can see that most pictures contain a long horizontal object against a light blue background, which gives a high score for the plane template. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dims = [500, 1000]
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rate = [1e-3,1e-1,0.05]
num_iters = [1500]
batch_size = [200,300]
reg = [0.1,0.25,0.75]
best_acc = -1 # The highest validation accuracy that we have seen so far.
best_params_lst = []
for hd in hidden_dims:
for ni in num_iters:
for bs in batch_size:
for lr in learning_rate:
for rg in reg:
net = TwoLayerNet(input_dim, hd, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=ni, batch_size=bs,
learning_rate=lr, learning_rate_decay=0.95,
reg=rg, verbose=True)
val_acc = (net.predict(X_val_feats) == y_val).mean()
if val_acc > best_acc:
best_acc = val_acc
best_net = net
best_params_lst = [ni,bs,lr,rg]
print("********************************")
print ("best_params")
print(best_params_lst)
print("best accuracy")
print(best_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
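# A minimal sketch of the validation sweep (one possible approach, values untuned):
# train a fresh LinearSVM for every (learning rate, regularization) pair and keep
# the model with the best validation accuracy. The color-histogram-bin hint above
# would additionally require re-running feature extraction with a different nbin,
# which is left out here so the precomputed feature matrices stay usable below.
from itertools import product
for lr, reg in product(learning_rates, regularization_strengths):
    svm = LinearSVM()
    svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
              num_iters=1500, verbose=False)
    train_acc = np.mean(y_train == svm.predict(X_train_feats))
    val_acc = np.mean(y_val == svm.predict(X_val_feats))
    results[(lr, reg)] = (train_acc, val_acc)
    if val_acc > best_val:
        best_val = val_acc
        best_svm = svm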
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
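# A minimal sketch using random search on a log scale (illustrative ranges, not
# tuned values). Each trial trains a fresh TwoLayerNet on the image features and
# the model with the best validation accuracy is kept in best_net.
best_val_acc = -1
for _ in range(5):
    lr = 10 ** np.random.uniform(-1, 0)    # roughly 0.1 .. 1.0
    reg = 10 ** np.random.uniform(-4, -2)  # roughly 1e-4 .. 1e-2
    candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
    candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                    num_iters=1500, batch_size=200,
                    learning_rate=lr, learning_rate_decay=0.95,
                    reg=reg, verbose=False)
    val_acc = (candidate.predict(X_val_feats) == y_val).mean()
    if val_acc > best_val_acc:
        best_val_acc = val_acc
        best_net = candidate
net = best_net  # so the test cell below evaluates the selected model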
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for test_learning_rate in learning_rates:
    for test_regularization_strength in regularization_strengths:
        svm = LinearSVM()  # create a fresh SVM object for this setting
        loss_hist = svm.train(X_train_feats,
                              y_train,
                              learning_rate=test_learning_rate,
                              reg=test_regularization_strength,
                              num_iters=10000,
                              verbose=True)  # train with the current learning rate and regularization
        y_train_pred = svm.predict(X_train_feats)
        y_val_pred = svm.predict(X_val_feats)  # training done; predict on the validation set
        train_accuracy = np.mean(y_train == y_train_pred)
        val_accuracy = np.mean(y_val == y_val_pred)  # fraction of predictions matching the true labels
        results[(test_learning_rate, test_regularization_strength)] = (train_accuracy, val_accuracy)
        if best_val < val_accuracy:  # keep the model if it beats the best validation accuracy so far
            best_val = val_accuracy
            best_svm = svm
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.fc_net import TwoLayerNet
from cs231n.solver import Solver
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
test_num = 5
lr = np.random.uniform(2e-1, 4e-1, test_num)  # sample learning rates between 0.2 and 0.4
best_val_accuracy = -1
data = {
'X_train' : X_train_feats,
'y_train' : y_train,
'X_val' : X_val_feats,
'y_val' : y_val,
'X_test' : X_test_feats,
'y_test' : y_test
}
for i in range(0, test_num):
    net = TwoLayerNet(input_dim, hidden_dim, num_classes)  # fresh network for each trial
    solver = Solver(net, data,
                    update_rule='sgd',
                    optim_config={
                        'learning_rate': lr[i],
                    },
                    lr_decay=0.95,
                    num_epochs=10, batch_size=100,
                    print_every=100)
    solver.train()
    if best_val_accuracy < solver.best_val_acc:
        best_val_accuracy = solver.best_val_acc
        best_net = net
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
y_test_pred = np.argmax(best_net.loss(data['X_test']), axis=1)
test_acc = (y_test_pred == data['y_test']).mean()
print(test_acc)
###Output
0.576
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
model = LinearSVM()
        loss_hist = model.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        at = np.mean(model.predict(X_train_feats) == y_train)
        av = np.mean(model.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (at, av)
        if av > best_val:
best_val = av
best_svm = model
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$
###Code
# Yes: the SVM relies heavily on the overall color distribution, so images with
# similar backgrounds and colors are confused with one another.
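# A quick, illustrative sanity check of this claim (not part of the assignment):
# compare the average color histogram of test images *predicted* as "plane" with
# the average histogram of *true* planes. A small gap would support the idea that
# color statistics drive the prediction. Assumes color_histogram_hsv and the
# X_test / y_test / y_test_pred arrays from the earlier cells.
plane_cls = 0
pred_plane = np.array([color_histogram_hsv(img, nbin=10)
                       for img in X_test[y_test_pred == plane_cls][:50]])
true_plane = np.array([color_histogram_hsv(img, nbin=10)
                       for img in X_test[y_test == plane_cls][:50]])
print('mean |histogram difference|:',
      np.abs(pred_plane.mean(axis=0) - true_plane.mean(axis=0)).mean())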
###Output
_____no_output_____
###Markdown
Neural Network on image featuresEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
# hidden_dim = 500
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_val = -1
learning_rates = [1e-1,2e-1,3e-1]
hs = [500,400,300,200]
lamda = [1e-7,1e-6]
for lr in learning_rates:
for hidden_dim in hs:
for reg in lamda:
model = TwoLayerNet(input_dim, hidden_dim, num_classes)
stat = model.train(X_train_feats, y_train, X_val_feats, y_val,num_iters=2500,learning_rate=lr
,reg=reg, verbose=False)
val_acc = (model.predict(X_val_feats) == y_val).mean()
if val_acc>best_val:
best_val = val_acc
best_net = model
stats = stat
best_lr = lr
best_reg = reg
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_val
(best_net.predict(X_train_feats) == y_train).mean()
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
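As a rough sketch of what an `extract_features`-style helper does (the real implementation in `cs231n/features.py` may differ in details such as dtype handling and progress printing; this is only an assumed illustration):

```python
import numpy as np

def extract_features_sketch(imgs, feature_fns, verbose=False):
    """Apply each feature function to every image and concatenate the results per image.

    imgs: array of shape (N, H, W, C); feature_fns: list of callables mapping one
    image to a 1-D feature vector. Returns an array of shape (N, total_feature_dim).
    """
    num_images = imgs.shape[0]
    # Evaluate the feature functions once to discover the total feature dimension.
    first = np.concatenate([np.asarray(fn(imgs[0])).ravel() for fn in feature_fns])
    feats = np.zeros((num_images, first.size))
    feats[0] = first
    for i in range(1, num_images):
        feats[i] = np.concatenate([np.asarray(fn(imgs[i])).ravel() for fn in feature_fns])
        if verbose and i % 1000 == 0:
            print('Done extracting features for %d / %d images' % (i, num_images))
    return feats
```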
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [i*1e-8 for i in range(6,11)]
regularization_strengths = [i*1e5 for i in range(1,11)]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
loss_history = svm.train(X_train_feats,y_train,learning_rate = lr,reg = reg,num_iters = 2000,batch_size = 250)
train_accuracy = (svm.predict(X_train_feats) == y_train).mean()
val_accuracy = (svm.predict(X_val_feats) == y_val).mean()
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
results[(lr,reg)] = (train_accuracy,val_accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?Judging from the test accuracy, roughly 40% of the predictions are sensible and about 60% are not. On the texture side, many mistakes come from similar contours, for example dogs being classified as cats; improving this would require features that capture more object-specific detail. On the color side, some ships are wrongly classified as planes, probably because sea and sky have similar colors. If background color were not a factor in CIFAR-10, the performance would likely be noticeably better. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
results = {}
best_val = -1
best_net = None
learning_rates = [0.5,1,1.5]
regularization_strengths = [1e-2,1e-3,1e-4,1e-5,1e-6]
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)  # start from fresh weights for every (lr, reg) setting
stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=2000, batch_size=250,
learning_rate=lr, learning_rate_decay=0.95, reg=reg)
train_accuracy = (net.predict(X_train_feats) == y_train).mean()
val_accuracy = (net.predict(X_val_feats) == y_val).mean()
if val_accuracy > best_val:
best_val = val_accuracy
best_net = net
results[(lr,reg)] = (train_accuracy,val_accuracy)
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
################################################################################
# END OF YOUR CODE #
################################################################################
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.515
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
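To make the color-histogram part concrete, a minimal hue-channel histogram could be computed as below; this is not the exact `color_histogram_hsv` implementation (which has its own normalization and binning conventions), just an assumed sketch of the idea.

```python
import numpy as np
import matplotlib.colors

def hue_histogram_sketch(img, nbin=10):
    """Histogram over the hue channel of an RGB image with pixel values in 0..255."""
    hsv = matplotlib.colors.rgb_to_hsv(img.astype(np.float64) / 255.0)  # hue ends up in [0, 1]
    hist, _ = np.histogram(hsv[..., 0], bins=nbin, range=(0.0, 1.0), density=True)
    return hist  # length-nbin feature vector

# Example on a random stand-in image; real CIFAR-10 images are 32x32x3 uint8 arrays.
rng = np.random.default_rng(0)
print(hue_histogram_sketch(rng.integers(0, 256, size=(32, 32, 3)), nbin=10))
```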
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
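For reference, the objective being tuned here is the multiclass SVM hinge loss set up earlier in the assignment: for an example $x_i$ with label $y_i$ and class scores $s = W x_i$,

$$L_i = \sum_{j \neq y_i} \max\left(0, \; s_j - s_{y_i} + \Delta\right), \qquad L = \frac{1}{N}\sum_{i=1}^{N} L_i + \lambda \sum_{k}\sum_{l} W_{k,l}^{2},$$

with margin $\Delta = 1$ (up to the exact regularization constant used in the course code). The learning rate and the regularization strength $\lambda$ are precisely the two quantities swept in the validation loop below.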
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-7]
regularization_strengths = [5e4, 7e4]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=learning_rate, reg=regularization_strength, num_iters=2000)
y_val_pred = svm.predict(X_val_feats)
y_train_pred = svm.predict(X_train_feats)
val_acc = (y_val_pred == y_val).mean()
train_acc = (y_train_pred == y_train).mean()
if val_acc > best_val:
best_val = val_acc
best_svm = svm
results[(learning_rate, regularization_strength)] = (train_acc, val_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10, 8))
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$The results fail to make sense. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
best_val = -1
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
lrs = [0.75]
regs = [3e-3, 5e-5]
hidden_sizes = [1550]
results = {}
for hidden_size in hidden_sizes:
for lr in lrs:
for reg in regs:
net = TwoLayerNet(input_dim, hidden_size, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val,
learning_rate=lr, reg=reg, num_iters=2000, batch_size=200)
y_val_pred = net.predict(X_val_feats)
y_train_pred = net.predict(X_train_feats)
val_acc = (y_val_pred == y_val).mean()
train_acc = (y_train_pred == y_train).mean()
print('learning rate: ', lr, ' regularization strength: ', reg)
print('hidden size: ', hidden_size)
print('validation accuracy: ', val_acc)
print('------------------------')
if val_acc > best_val:
best_val = val_acc
best_net = net
results[(lr, reg)] = (train_acc, val_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.574
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 30 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
import itertools
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr, reg in itertools.product(learning_rates, regularization_strengths):
svm = LinearSVM()
svm.train(X_train_feats, y_train, lr, reg)
train_accuracy = np.mean(y_train == svm.predict(X_train_feats))
val_accuracy = np.mean(y_val == svm.predict(X_val_feats))
if best_val < val_accuracy:
best_val = val_accuracy
best_svm = svm
results[(lr, reg)] = (train_accuracy, val_accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 2000
num_classes = 10
net = None
best_net = None
best_val = -1
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Note: this reuses the learning rates and regularization strengths from the SVM grid above
# (tiny learning rates, very large regularization), which is a poor range for a neural network
# and explains the low test accuracy reported below.
for lr, reg in itertools.product(learning_rates, regularization_strengths):
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=lr, reg=reg)
train_accuracy = np.mean(y_train == net.predict(X_train_feats))
val_accuracy = np.mean(y_val == net.predict(X_val_feats))
if best_val < val_accuracy:
best_val = val_accuracy
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.143
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
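One way to verify the claim above, that HOG plus the color histogram beats either feature alone, is to train the same linear SVM on each feature set separately. The sketch below assumes the raw `X_train`/`y_train`/`X_val`/`y_val` splits and the cs231n helpers from this notebook are already in scope, and it reuses plausible (not tuned) hyperparameters:

```python
import numpy as np
from cs231n.features import extract_features, hog_feature, color_histogram_hsv
from cs231n.classifiers.linear_classifier import LinearSVM

feature_sets = {
    'hog only':    [hog_feature],
    'color only':  [lambda img: color_histogram_hsv(img, nbin=10)],
    'hog + color': [hog_feature, lambda img: color_histogram_hsv(img, nbin=10)],
}

for name, fns in feature_sets.items():
    tr, va = extract_features(X_train, fns), extract_features(X_val, fns)
    mu = tr.mean(axis=0, keepdims=True)
    sd = tr.std(axis=0, keepdims=True) + 1e-8          # avoid division by zero for constant features
    tr, va = (tr - mu) / sd, (va - mu) / sd            # standardize with training-set statistics
    tr = np.hstack([tr, np.ones((tr.shape[0], 1))])    # append the bias dimension
    va = np.hstack([va, np.ones((va.shape[0], 1))])
    svm = LinearSVM()
    svm.train(tr, y_train, learning_rate=1e-7, reg=5e4, num_iters=1500)
    print('%-11s val accuracy: %.3f' % (name, np.mean(svm.predict(va) == y_val)))
```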
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
mesh_l, mesh_r = np.meshgrid(learning_rates, regularization_strengths,indexing='ij')
for i in np.arange(len(learning_rates)):
for j in np.arange(len(regularization_strengths)):
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=mesh_l[i,j], reg=mesh_r[i,j], num_iters=2500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
y_train_pred = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
y_val_pred = np.mean(y_val == y_val_pred)
results[(mesh_l[i,j],mesh_r[i,j])] = (y_train_pred, y_val_pred)
if best_val < y_val_pred:
best_val = y_val_pred
best_svm = svm
############################
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?The model gets confused mostly within the animal classes (bird, cat, deer, dog, frog) and among the vehicle classes (plane, car, ship, truck). This is likely due to background similarity and to similar coarse features shared by those classes. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
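The confusion pattern described above can be checked directly with a small confusion matrix over the SVM's test predictions; a quick sketch, assuming `y_test` and `y_test_pred` from the evaluation cell above are still in scope:

```python
import numpy as np

num_classes = 10
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

# Rows are true classes, columns are predicted classes.
confusion = np.zeros((num_classes, num_classes), dtype=int)
for true_label, pred_label in zip(y_test, y_test_pred):
    confusion[true_label, pred_label] += 1

print('%10s' % 'true\\pred', ' '.join('%5s' % c for c in classes))
for i, row in enumerate(confusion):
    print('%10s' % classes[i], ' '.join('%5d' % v for v in row))
```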
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
#X_train_feats = X_train_feats[:,:X_train_feats.shape[1]-1]
#X_val_feats = X_val_feats[:,:X_val_feats.shape[1]-1]
#X_test_feats = X_test_feats[:,:X_test_feats.shape[1]-1]
input_size = X_train_feats.shape[1]
hidden_size = 500
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2000, batch_size=2000,
learning_rate=1, learning_rate_decay=0.95,
reg=0, verbose=True)
best_net = net
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.58
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your interests.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 15 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
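For reference, the objective being tuned below is the standard multiclass hinge loss with an L2 penalty; a minimal vectorized sketch, with illustrative names and the usual Δ = 1 margin assumed:

```python
import numpy as np

def multiclass_hinge_loss_sketch(W, X, y, reg):
    """W: (D, C) weights, X: (N, D) features, y: (N,) integer labels."""
    N = X.shape[0]
    scores = X.dot(W)                                  # (N, C) class scores
    correct = scores[np.arange(N), y][:, None]         # score of the true class
    margins = np.maximum(0.0, scores - correct + 1.0)  # delta = 1
    margins[np.arange(N), y] = 0.0                     # true class contributes 0
    return margins.sum() / N + reg * np.sum(W * W)

W = 1e-3 * np.random.randn(155, 10)
X, y = np.random.randn(4, 155), np.array([0, 3, 7, 9])
print(multiclass_hinge_loss_sketch(W, X, y, reg=1e4))
```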
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
np.random.seed(22)
learning_rates = [1e-7, 2e-7, 3e-7, 4e-7, 5e-7, 6e-7, 7e-7, 8e-7, 1e-6]
regularization_strengths = [3e4, 2.5e4, 1.5e4, 2e4, 1e4, 9e3]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
num_iter=1000
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,num_iters=num_iter, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc=np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc=np.mean(y_val == y_val_pred)
results[(lr,reg)]=(train_acc,val_acc)
if val_acc > best_val:
best_val=val_acc
best_svm=svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
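The two-layer network referred to here is an affine layer, a ReLU, and a second affine layer producing class scores; a minimal forward-pass sketch under that assumption, with illustrative shapes:

```python
import numpy as np

def two_layer_scores_sketch(X, W1, b1, W2, b2):
    h = np.maximum(0.0, X.dot(W1) + b1)  # hidden ReLU activations
    return h.dot(W2) + b2                # unnormalized class scores

rng = np.random.default_rng(0)
D, H, C, N = 154, 500, 10, 4             # feature dim (without bias column), hidden, classes, batch
scores = two_layer_scores_sketch(
    rng.standard_normal((N, D)),
    1e-3 * rng.standard_normal((D, H)), np.zeros(H),
    1e-3 * rng.standard_normal((H, C)), np.zeros(C))
print(scores.shape)  # (4, 10)
```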
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
num_classes=10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
hidden_size = sorted([400, 500, 600, 700, 800])
learning_rate = sorted([1, 1e-1, 1e-2])
reg = sorted([5e-4, 2e-4, 1e-4])
best_acc = -1
for hs in hidden_size:
for lr in learning_rate:
for r in reg:
net = TwoLayerNet(input_dim, hs, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=r, verbose=False)
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('for hs: %e, lr: %e and r: %e, valid accuracy is: %f'
% (hs, lr, r, val_acc))
if val_acc > best_acc:
best_net = net
best_acc = val_acc
print('Best network has an accuracy of: %f' % best_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.574
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = '../data/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
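To give a feel for what the HOG half of the feature encodes, here is a drastically simplified sketch: gradient orientations weighted by gradient magnitude and binned over the whole image. Real HOG bins per cell and normalizes per block; this single-histogram version is an assumption for illustration only.

```python
import numpy as np

def orientation_histogram_sketch(gray, nbins=9):
    gy, gx = np.gradient(gray.astype(np.float64))   # image gradients along rows/cols
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation in [0, 180)
    hist, _ = np.histogram(ang, bins=nbins, range=(0.0, 180.0), weights=mag)
    return hist / (hist.sum() + 1e-8)

print(orientation_histogram_sketch(np.random.rand(32, 32)))
```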
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
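The column of ones appended in the preprocessing above is the usual bias trick: folding the bias vector into the weight matrix so that scores become a single matrix multiply. A small sketch with illustrative arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 154))
W = rng.standard_normal((154, 10))
b = rng.standard_normal(10)

X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column -> (5, 155)
W_aug = np.vstack([W, b[None, :]])                # append bias row    -> (155, 10)
print(np.allclose(X.dot(W) + b, X_aug.dot(W_aug)))  # True
```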
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for l in learning_rates:
for r in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=l, reg=r, num_iters=1500, batch_size=200)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
training_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(l, r)] = (training_accuracy, validation_accuracy)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
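The training loop used below relies on minibatch SGD with a per-epoch learning-rate decay (`learning_rate_decay=0.95`); a minimal sketch of that update schedule, with stand-in gradients and illustrative names:

```python
import numpy as np

params = {'W1': np.ones((3, 4)), 'b1': np.zeros(4)}
lr, decay = 1e-1, 0.95
for epoch in range(3):
    grads = {k: 0.01 * np.ones_like(v) for k, v in params.items()}  # fake gradients
    for k in params:
        params[k] -= lr * grads[k]   # vanilla SGD step
    lr *= decay                      # decay once per epoch
    print('epoch %d, lr %.4f' % (epoch, lr))
```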
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1
best_stats = None
learning_rates = np.logspace(-2, 1, 10) # np.logspace(-10, 10, 8) #-10, -9, -8, -7, -6, -5, -4
regularization_strengths = np.logspace(-4, -2, 10) # causes numeric issues: np.logspace(-5, 5, 8) #[-4, -3, -2, -1, 1, 2, 3, 4, 5, 6]
results = {}
iters = 2000 #100
for lr in learning_rates:
for rs in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=iters, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=rs)
y_train_pred = net.predict(X_train_feats)
acc_train = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (acc_train, acc_val)
if best_val < acc_val:
best_stats = stats
best_val = acc_val
best_net = net
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.501
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
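A hedged sketch of the `extract_features` pattern described above: evaluate each feature function on each image and concatenate the per-image results. The real helper preallocates its output and may orient the matrix differently; this version is illustrative only.

```python
import numpy as np

def extract_features_sketch(imgs, feature_fns):
    rows = [np.concatenate([np.asarray(fn(img)).ravel() for fn in feature_fns])
            for img in imgs]
    return np.vstack(rows)  # one row of concatenated features per image

imgs = [np.random.rand(32, 32, 3) for _ in range(4)]
fns = [lambda im: im.mean(axis=(0, 1)),                       # 3 channel means
       lambda im: np.histogram(im, bins=5, range=(0, 1))[0]]  # 5-bin intensity histogram
print(extract_features_sketch(imgs, fns).shape)  # (4, 8)
```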
###Code
from cs231n.features import *
num_color_bins = 11 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
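Once trained, the linear SVM's `predict` step amounts to an argmax over class scores; a tiny sketch with random weights standing in for a trained model:

```python
import numpy as np

W = np.random.randn(155, 10)   # (feature dim incl. bias column, num classes)
X = np.random.randn(3, 155)
y_pred = np.argmax(X.dot(W), axis=1)
print(y_pred)                  # one predicted class index per example
```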
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
SVM = LinearSVM()
_ = SVM.train(X_train_feats, y_train, lr, reg, num_iters=2500, verbose=False)
training_pred = SVM.predict(X_train_feats)
training_accuracy = np.mean(y_train == training_pred)
validation_pred = SVM.predict(X_val_feats)
validation_accuracy = np.mean(y_val == validation_pred)
if validation_accuracy>best_val:
best_val = validation_accuracy
best_svm = SVM
results[(lr,reg)] = (training_accuracy, validation_accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
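The searches in this section keep a `results` dict keyed on hyperparameter settings; picking the best entry by validation accuracy can also be done after the fact, as in this tiny sketch (the values are made up):

```python
results = {(1e-7, 5e4): (0.41, 0.40),   # (lr, reg) -> (train acc, val acc)
           (1e-7, 5e5): (0.39, 0.42),
           (1e-8, 5e4): (0.35, 0.36)}
best_key = max(results, key=lambda k: results[k][1])   # compare on validation accuracy
print('best (lr, reg):', best_key, 'val acc:', results[best_key][1])
```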
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_size= X_train_feats.shape[1]
hidden_size = 500
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
best_net = None
best_val = -1
learning_rates = [5e-2, 8e-2, 1e-1]
regularization_strengths = [5e-5, 5e-6, 5e-7]
batch_sizes = [200]  # use an integer batch size; a float like 2e2 can trip up minibatch sampling
results = {}
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
for batch_size in batch_sizes:
_ = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters = 2500, batch_size = batch_size, learning_rate = lr,
learning_rate_decay = 0.95, reg = reg, verbose=False)
training_pred = net.predict(X_train_feats)
training_accuracy = np.mean(y_train == training_pred)
validation_pred = net.predict(X_val_feats)
validation_accuracy = np.mean(y_val == validation_pred)
if validation_accuracy>best_val:
best_val = validation_accuracy
best_net = net
results[(lr,reg,batch_size)] = (training_accuracy, validation_accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg, batch_size in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg, batch_size)]
print('lr %e reg %e batch_size %e train accuracy: %f val accuracy: %f' % (
lr, reg, batch_size, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.426
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
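The mean/std preprocessing below always computes its statistics on the training split and reuses them for the validation and test splits; a small self-contained sketch of that pattern (the arrays are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=2.0, size=(100, 4))
val = rng.normal(loc=5.0, scale=2.0, size=(20, 4))

mean = train.mean(axis=0, keepdims=True)   # statistics from the training split only
std = train.std(axis=0, keepdims=True)
train_n, val_n = (train - mean) / std, (val - mean) / std
print(train_n.mean(axis=0).round(3), train_n.std(axis=0).round(3))  # ~0 and ~1
```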
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
print('Settings are lr = %e and reg = %e' %(lr,reg))
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
results[(lr,reg)] = (train_acc, val_acc)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? $\color{blue}{\textit Your Answer:}$ Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
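The `reg` values searched below scale an L2 penalty that is added to the network's data loss; a toy sketch of that contribution. The exact scaling (for example, whether a factor of 0.5 is included) depends on the implementation, so treat this as illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((10, 5)), rng.standard_normal((5, 3))
data_loss = 2.0                                        # placeholder data loss
for reg in (1e-4, 1e-2):
    penalty = reg * (np.sum(W1 * W1) + np.sum(W2 * W2))
    print('reg %.0e -> total loss %.4f' % (reg, data_loss + penalty))
```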
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_acc = -1
np.random.seed(1)
hidden_size_range = [1000]
learning_rate_range = [1e-1]
reg_range = [0.001,0.002]
niter_range = [6000,5000]
param_list = []
for hidden_size in hidden_size_range:
for learning_rate in learning_rate_range:
for reg in reg_range:
for niter in niter_range:
param_list.append((hidden_size,learning_rate,reg,niter))
np.random.shuffle(param_list)
for param in param_list:
hidden_size, learning_rate, reg, niter = param
net = TwoLayerNet(input_dim, hidden_size, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=niter, batch_size=200,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=reg, verbose=False)
# Predict on the validation set
train_acc = (net.predict(X_train_feats) == y_train).mean()
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print("hidden_size=%d, lr=%f, reg=%f, train_acc= %f, val_acc=%f, niter=%d" %(hidden_size,learning_rate,reg,train_acc,val_acc,niter))
if val_acc > best_acc :
best_net = net
best_acc = val_acc
print("Search complete")
#stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
# num_iters=3000, batch_size=200,
# learning_rate=learning_rate, learning_rate_decay=0.95,
# reg=reg, verbose=True)
#best_net = net
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.573
###Markdown
Image features exercise: *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
from __future__ import absolute_import, division, print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
# set default size of plots
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data: Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000,
num_validation=1000,
num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = '../data/cifar10'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Training data shape:', X_train.shape)
print('Training label shape:', y_train.shape)
print('Validation data shape:', X_val.shape)
print('Validation label shape:', y_val.shape)
print('Test data shape:', X_test.shape)
print('Test label shape:', y_test.shape)
###Output
Training data shape: (49000, 32, 32, 3)
Training label shape: (49000,)
Validation data shape: (1000, 32, 32, 3)
Validation label shape: (1000,)
Test data shape: (1000, 32, 32, 3)
Test label shape: (1000,)
###Markdown
Extract Features: For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature,
lambda img: color_histogram_hsv(img,
nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
print('Before adding bias:')
print(' Train features: ', X_train_feats.shape)
print(' Validation features:', X_val_feats.shape)
print(' Test features: ', X_test_feats.shape)
print('')
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack(
[X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack(
[X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack(
[X_test_feats, np.ones((X_test_feats.shape[0], 1))])
print('After adding bias:')
print(' Train features: ', X_train_feats.shape)
print(' Validation features:', X_val_feats.shape)
print(' Test features: ', X_test_feats.shape)
print('')
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Before adding bias:
Train features: (49000, 154)
Validation features: (1000, 154)
Test features: (1000, 154)
After adding bias:
Train features: (49000, 155)
Validation features: (1000, 155)
Test features: (1000, 155)
###Markdown
Train SVM on features: Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
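Because learning rates and regularization strengths span orders of magnitude, they are usually searched on a log scale, as the grid below does; two equivalent ways to build such a grid (the endpoints here are illustrative):

```python
import numpy as np

lrs_a = np.logspace(-9, -7, 3)          # array([1e-09, 1e-08, 1e-07])
lrs_b = 10 ** np.linspace(-9, -7, 3)    # same values, spelled differently
print(np.allclose(lrs_a, lrs_b), lrs_a)
```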
###Code
# Use the validation set to tune the learning rate and
# regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
#learning_rates = [1e-9, 1e-8, 1e-7]
#regularization_strengths = [1e5, 1e6, 1e7]
learning_rates = 10 ** np.linspace(-4, -1, 10)
regularization_strengths = 10 ** np.linspace(-6, 0, 10)
results = {}
best_val = -1
best_svm = None
best_lr = None
best_reg = None
###################################################################
# TODO: #
# Use the validation set to set the learning rate and #
# regularization strength. This should be identical to the #
# validation that you did for the SVM; save the best trained #
# classifer in best_svm. You might also want to play with #
# different numbers of bins in the color histogram. If you are #
# careful you should be able to get accuracy of near 0.44 on the #
# validation set.
####################################################################
import itertools
n_iters = 900
combinations = itertools.product(learning_rates,
regularization_strengths)
it = 0
for lr, reg in combinations:
it += 1
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=n_iters)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_acc = np.mean(y_train == y_train_pred)
val_acc = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (train_acc, val_acc)
# print('[lr={}, reg={}]'.format(lr, reg))
# print(' train_acc={}, val_acc={}'.format(train_acc, val_acc))
if val_acc > best_val:
best_val = val_acc
best_lr = lr
best_reg = reg
best_svm = svm
if it % 10 == 0:
print('[{}] current best: {}'.format(it, best_val))
print('')
print('Best validation:', best_val)
print(' best learning rate: ', best_lr)
print(' best regularization strength:', best_reg)
print(' best log learning rate: ', np.log10(best_lr))
print(' best log regularization: ', np.log10(best_reg))
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
cm = plt.cm.viridis #[djn] colormap
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size,
c=colors, cmap=cm)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size,
c=colors, cmap=cm)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
##################################################################
# END OF YOUR CODE #
##################################################################
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works
# is to visualize the mistakes that it makes. In this visualization,
# we show examples of images that are misclassified by our current
# system. The first column shows images that our system labeled as
# "plane" but whose true label is something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes),
i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features: Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
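Nested hyperparameter loops can be flattened with `itertools.product`, which is what the search below does for (learning rate, decay, regularization); a tiny sketch with made-up values:

```python
import itertools

for lr, decay, reg in itertools.product([1e-1, 3e-1], [0.7], [1e-3, 1e-2]):
    print(lr, decay, reg)   # four (lr, decay, reg) combinations
```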
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
N = X_train_feats.shape[0]
num_iters = 1000
batch_size = 200
std = np.sqrt(2.0 / N)
best_net = None
best_stat = None
best_val_acc = 0.0
best_lr = None
best_decay = None
best_reg = None
reverse = {} # dict: val_accuracy -> (lr, decay, reg)
###################################################################
# TODO: Train a two-layer neural network on image features. You #
# may want to cross-validate various parameters as in previous #
# sections. Store your best model in the best_net variable. #
###################################################################
import itertools
#learning_rates = 10 ** np.linspace(-5, -2, 5)
learning_rates = 10 ** np.linspace(-1, np.log10(3), 5)
#decay_rates = 10 ** np.linspace(-0.017728, -0.004364, 5)
#decay_rates = 10 ** np.linspace(-0.022276, -0.004364, 4)
#decay_rates = [.7, .9, .98]
decay_rates = [.65, .7, .73]
#regularizations = 10 ** np.linspace(-3, -1, 5)
regularizations = 10 ** np.linspace(-3, -1.5, 5)
hyperparams = itertools.product(learning_rates,
decay_rates,
regularizations)
total = len(learning_rates) * len(decay_rates) * len(regularizations)
it = 0
for lr, decay, reg in hyperparams:
it += 1
net = TwoLayerNet(input_dim, hidden_dim, num_classes, std)
stat = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=num_iters, batch_size=batch_size,
learning_rate=lr, learning_rate_decay=decay,
reg=reg, verbose=False)
train_acc = np.mean(net.predict(X_train_feats) == y_train)
val_acc = np.mean(net.predict(X_val_feats) == y_val)
reverse.setdefault(val_acc, []).append((lr, decay, reg))
if val_acc > best_val_acc:
best_val_acc = val_acc
best_stat = stat
best_lr = lr
best_decay = decay
best_reg = reg
best_net = net
print('[{}/{}, lr={}, decay={}, reg={}]'.format(
it, total, lr, decay, reg))
print('\ttrain_acc={}, val_acc={}'.format(train_acc, val_acc))
print('\tcurrent best:', best_val_acc)
it = 0
upto = 3  # display the top val_acc results up to this number
print('')
for val_acc in reversed(sorted(reverse)):
if it >= upto:
break
it += 1
params = reverse[val_acc]
print('[{}] val_acc={}'.format(it, val_acc))
print('\tparams={}'.format(params))
###################################################################
# END OF YOUR CODE #
###################################################################
# Plot the loss function and train / validation accuracies
# for the best net
plt.subplot(2, 1, 1)
plt.plot(best_stat['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(best_stat['train_acc_history'], label='train', color='blue')
plt.plot(best_stat['val_acc_history'], label='val', color='green')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.598
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
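Before the cell below, a rough sketch of the concatenation idea (an illustration only, not the actual `extract_features` from `cs231n.features`): each feature function maps one image to a 1-D vector, and the vectors are concatenated per image, here stacked one row per image to match how `X_train_feats` is used later.

```python
import numpy as np

def extract_features_sketch(imgs, feature_fns):
    """Apply each feature function to each image and concatenate the
    resulting 1-D vectors, one row per image."""
    rows = []
    for img in imgs:
        per_image = [np.asarray(fn(img)).ravel() for fn in feature_fns]
        rows.append(np.concatenate(per_image))
    return np.stack(rows)

# toy "images" and toy feature functions
toy_imgs = np.random.rand(4, 32, 32, 3)
fns = [lambda im: im.mean(axis=(0, 1)),          # 3 mean-channel values
       lambda im: np.histogram(im, bins=5)[0]]   # 5-bin intensity histogram
print(extract_features_sketch(toy_imgs, fns).shape)  # (4, 8)
```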
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
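Learning rate and regularization strength matter on a log scale, which is why the grids below span orders of magnitude. A small sketch of building such a grid with `np.logspace` (the ranges here are illustrative, not the ones used in this cell):

```python
import numpy as np
from itertools import product

learning_rates = np.logspace(-9, -7, 3)          # 1e-9, 1e-8, 1e-7
regularization_strengths = np.logspace(4, 6, 3)  # 1e4, 1e5, 1e6 (illustrative)
grid = list(product(learning_rates, regularization_strengths))
print(len(grid), grid[0])  # 9 combinations to cross-validate
```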
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for rate in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=rate, reg=reg,
                  num_iters=1500, verbose=True)
        y_train_pred = svm.predict(X_train_feats)
        y_val_pred = svm.predict(X_val_feats)
        val = np.mean(y_val == y_val_pred)
        if val > best_val:
            best_svm = svm
            best_val = val
        results[(rate, reg)] = (np.mean(y_train == y_train_pred), val)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
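The two-layer network in this assignment trains with a softmax (cross-entropy) loss on its class scores. A small self-contained sketch of that loss, included only to make the objective explicit (not the course's implementation):

```python
import numpy as np

def softmax_loss_sketch(scores, y):
    """Average cross-entropy of softmax(scores) against integer labels y."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

rng = np.random.default_rng(0)
toy_scores = rng.standard_normal((4, 10))
toy_labels = np.array([3, 1, 0, 7])
print(softmax_loss_sketch(toy_scores, toy_labels))  # roughly log(10) for random scores
```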
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
best_acc=0.0
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best={}
learning_rates = [1e-2 ,1e-1, 5e-1, 1, 5]
regularization_strengths = [1e-3, 5e-3, 1e-2, 1e-1, 0.5, 1]
for rate in learning_rates:
    for reg in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=1500, batch_size=200,
                          learning_rate=rate, learning_rate_decay=0.95,
                          reg=reg, verbose=False)
        val_acc = (net.predict(X_val_feats) == y_val).mean()
        if val_acc > best_acc:
            best_acc = val_acc
            best_net = net
            best['reg'] = reg
            best['rate'] = rate
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.58
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
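As a rough illustration of the color half of the feature vector (this is not the `color_histogram_hsv` implementation used below), one can convert RGB to HSV and histogram the hue channel:

```python
import numpy as np
import matplotlib.colors as mcolors

def hue_histogram_sketch(img, nbin=10):
    """Histogram of the hue channel after an RGB -> HSV conversion."""
    hsv = mcolors.rgb_to_hsv(img.astype(np.float64) / 255.0)
    hist, _ = np.histogram(hsv[..., 0], bins=nbin, range=(0.0, 1.0))
    return hist.astype(np.float64)

toy_img = np.random.randint(0, 256, size=(32, 32, 3))
print(hue_histogram_sketch(toy_img))  # 10 hue-bin counts
```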
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
from itertools import product
for lr, reg in product(learning_rates, regularization_strengths):
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (train_acc, val_acc)
if best_val < val_acc:
best_val = val_acc
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?Not really. Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
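To back up an answer to the inline question, a confusion matrix is a quick way to see which classes get confused with which. A self-contained sketch with toy labels; in the notebook you would pass `y_test` and `y_test_pred` from the SVM evaluation cell instead:

```python
import numpy as np

def confusion_matrix_sketch(y_true, y_pred, num_classes):
    """counts[i, j] = number of examples with true class i predicted as j."""
    counts = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    return counts

toy_true = np.array([0, 1, 1, 2, 2, 2])
toy_pred = np.array([0, 1, 2, 2, 0, 2])
cm = confusion_matrix_sketch(toy_true, toy_pred, num_classes=3)
print(cm)
print(cm.diagonal())  # correct predictions per class
```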
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1
learning_rates = [1e-2, 3e-2, 5e-2, 7e-2]
regularization_strengths = [1e-1, 5e-1, 1e-2, 5e-2, 1e-3]
# 0.25, 0.35,
for lr, reg in product(learning_rates, regularization_strengths):
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print(lr, reg)
print('Validation accuracy: ', val_acc)
if best_val < val_acc:
best_val = val_acc
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.517
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your interests.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
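A short aside on why the preprocessing cell below appends a column of ones: with that extra bias dimension, the classifier can fold its bias vector into the last row of the weight matrix, so class scores come from a single matrix multiply. A tiny sketch of the equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))                    # 5 samples, 3 features
W = rng.standard_normal((4, 10))                   # weights for 3 features + bias row
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])   # append the bias dimension

scores_aug = X_aug @ W                             # single matrix multiply
scores_sep = X @ W[:-1] + W[-1]                    # explicit weights + bias vector
print(np.allclose(scores_aug, scores_sep))         # True
```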
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
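Beyond a single validation number, a per-class accuracy breakdown is often informative when comparing SVM runs. A self-contained sketch with toy labels (in the notebook, `y_test` and `y_test_pred` from the evaluation cell would be used instead):

```python
import numpy as np

toy_true = np.array([0, 0, 1, 1, 2, 2])
toy_pred = np.array([0, 1, 1, 1, 2, 0])
for c in np.unique(toy_true):
    mask = toy_true == c
    print('class', c, 'accuracy', (toy_pred[mask] == c).mean())
```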
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-8, 1e-7, 1e-6]
regularization_strengths = [1e4, 1e5, 1e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
################################################################################
for lr in learning_rates:
for r in regularization_strengths:
svm = LinearSVM()
hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=r, num_iters=1000, verbose=False)
# Predict on the training set
train_acc = (svm.predict(X_train_feats) == y_train).mean()
# Predict on the validation set
val_acc = (svm.predict(X_val_feats) == y_val).mean()
results[(lr, r)] = train_acc, val_acc
if val_acc > best_val:
best_svm = svm
best_val = val_acc
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
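The training calls below pass `learning_rate_decay=0.95`. Assuming the decay is applied once per epoch (as the parameter name suggests), the effective step size after k epochs is `lr * decay**k`; a quick illustration:

```python
lr, decay = 2e-1, 0.95
for epoch in (0, 5, 10, 20):
    print(epoch, lr * decay ** epoch)
```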
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
num_classes = 10
best_net = None # store the best model into this
best_acc = 0
best_stats = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
num_iters = [2000, 3000]
learning_rates = [2e-1, 3e-1]
hidden_sizes = [500, 750]
reg_strenghts = [1e-4, 1e-5, 1e-6]
best_hyperparameters = {"n_iters": None, "lr": None, "hidden_size": None, "reg": None}
for n_it in num_iters:
for lr in learning_rates:
for hs in hidden_sizes:
for r in reg_strenghts:
net = TwoLayerNet(input_dim, hs, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=n_it, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg=r, verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print(f"n_iters: {n_it}, lr: {lr}, hs:{hs}, r:{r} - validation accuracy: {val_acc}")
if val_acc > best_acc:
best_acc = val_acc
best_net = net
best_stats = stats
best_hyperparameters["n_iters"] = n_it
best_hyperparameters["lr"] = lr
best_hyperparameters["hidden_size"] = hs
best_hyperparameters["reg"] = r
print()
print(f"Best accuracy {best_acc} achieved using these hyperparameters: {best_hyperparameters}")
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.601
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
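For intuition only, a very rough gradient-orientation histogram over a whole grayscale image; the actual `hog_feature` used below histograms orientations within local cells, so this sketch illustrates the idea rather than the real descriptor:

```python
import numpy as np

def orientation_histogram_sketch(gray, nbins=9):
    """Histogram gradient orientations over the whole image, weighted by magnitude."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation in [0, 180)
    hist, _ = np.histogram(ang, bins=nbins, range=(0.0, 180.0), weights=mag)
    return hist

toy_gray = np.random.rand(32, 32)
print(orientation_histogram_sketch(toy_gray).shape)  # (9,)
```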
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play        #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_accuracy = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
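The training cell below converts a number of epochs into SGD iterations via `num_train_epochs * X_train_feats.shape[0] / batch_size`. A sketch of that bookkeeping with illustrative numbers:

```python
num_train, batch_size, num_epochs = 49000, 200, 4
iters_per_epoch = num_train // batch_size
num_iters = num_epochs * iters_per_epoch
print(iters_per_epoch, num_iters)  # 245 iterations per epoch, 980 total
```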
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
from itertools import product
best_val = -1
for lr, num_train_epochs, reg in product(
[5e-1], [4], [1e-3]
):
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
batch_size=200
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=int(num_train_epochs * X_train_feats.shape[0] / batch_size), batch_size=batch_size,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False)
y_val_pred = net.predict(X_val_feats)
val_accuracy = np.mean(y_val == y_val_pred)
if val_accuracy > best_val:
best_val = val_accuracy
best_net = net
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.553
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
import sys
sys.path.insert(0, '../utils')
from data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
sys.path.insert(0, '../utils')
from data_utils import load_CIFAR10
from features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'dataset/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
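As a brief reminder of what is being optimized (roughly the standard formulation used for the linear SVM earlier in the assignment, with margin 1): for example $i$ with class scores $s = W x_i$ and correct class $y_i$, the objective is the average hinge loss plus an L2 penalty on the weights,
$$L = \frac{1}{N}\sum_{i}\sum_{j \neq y_i} \max\bigl(0,\; s_j - s_{y_i} + 1\bigr) \;+\; \lambda \sum_{k,l} W_{k,l}^2 .$$
The `regularization_strengths` grid below tunes $\lambda$, and `learning_rates` tunes the SGD step size.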
###Code
# Use the validation set to tune the learning rate and regularization strength
sys.path.insert(0, './classifiers')
from linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
best_lr = None
best_reg = None
best_val = -1
best_svm = None
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
_ = svm.train(X_train_feats, y_train, learning_rate=lr,
reg=reg, num_iters=500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_accuracy = np.mean(y_val == y_val_pred)
if val_accuracy > best_val:
best_val = val_accuracy
best_lr = lr
best_reg = reg
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
best_svm = LinearSVM()
best_svm.train(X_train_feats, y_train, learning_rate=best_lr, reg=best_reg, num_iters=2500, verbose=False)
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
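For orientation, the model being tuned below computes class scores from a feature vector with one hidden ReLU layer. A minimal sketch of that forward pass follows (the parameter names are illustrative, not the exact attributes of `TwoLayerNet`):
```python
import numpy as np

def two_layer_scores(x, W1, b1, W2, b2):
    """x: (N, D) feature matrix; returns (N, C) class scores."""
    h = np.maximum(0, x.dot(W1) + b1)   # hidden layer with ReLU nonlinearity
    return h.dot(W2) + b2               # linear output layer producing class scores
```
Because the network learns its own bias terms, the constant bias column appended during preprocessing is not strictly needed for this model.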
###Code
from neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_lr = None
best_reg = None
best_h_size = None
best_batch_size = None
best_val = -1
for batch_size in (10, 50, 100, 200, 500):
    for lr in np.logspace(-2, -5, 8):
        for reg in [0.2, 0.25, 0.3]:
            nn = TwoLayerNet(input_dim, hidden_dim, num_classes)
            stats = nn.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=1000, batch_size=batch_size,
                             learning_rate=lr, learning_rate_decay=0.95, reg=reg)
            y_train_pred = nn.predict(X_train_feats)
            train_accuracy = np.mean(y_train == y_train_pred)
            y_val_pred = nn.predict(X_val_feats)
            val_accuracy = np.mean(y_val == y_val_pred)
            print("reg: {0:.2f}, lr: {1:.4f}, batch: {2}, h: {3}, acc: {4:.3f}, val_acc: {5:.3f}".format(
                reg, lr, batch_size, hidden_dim, train_accuracy, val_accuracy))
            if val_accuracy > best_val:
                print('best', val_accuracy, best_val)
                best_val = val_accuracy
                best_batch_size = batch_size
                best_h_size = hidden_dim
                best_lr = lr
                best_reg = reg
from neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
nn = TwoLayerNet(input_dim, hidden_dim, num_classes)
_ = nn.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1000, batch_size=200, learning_rate=0.5,
learning_rate_decay=0.95, reg=0.)
y_train_pred = nn.predict(X_train_feats)
train_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = nn.predict(X_val_feats)
val_accuracy = np.mean(y_val == y_val_pred)
print(train_accuracy, val_accuracy)
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (nn.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.576
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
_____no_output_____
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
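# A possible fill-in for the TODO above (a sketch, not the reference solution):
# reuse the same grid search as for the raw-pixel SVM, recording (train, val)
# accuracy for each setting and keeping the best model in best_svm.
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=False)   # num_iters is an illustrative choice
        train_accuracy = np.mean(svm.predict(X_train_feats) == y_train)
        val_accuracy = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm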
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
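# (The constant-1 bias column was only needed by the linear SVM; the two-layer
# network learns its own bias parameters. Running this cell twice would also
# slice off a real feature, hence the run-it-once warning.)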
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Your code
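# A possible fill-in for the TODO (a sketch, not the reference solution): a small
# grid over learning rate and regularization strength, keeping the model with the
# best validation accuracy in best_net. The grid values are illustrative.
best_val_acc = -1
for lr in [1e-1, 5e-1, 1.0]:
    for reg in [1e-4, 1e-3]:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=1500, batch_size=200,
                  learning_rate=lr, learning_rate_decay=0.95,
                  reg=reg, verbose=False)
        val_acc = np.mean(net.predict(X_val_feats) == y_val)
        if val_acc > best_val_acc:
            best_val_acc = val_acc
            best_net = net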
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
svm_clf = LinearSVM()
loss_hist = svm_clf.train(X_train_feats, y_train, learning_rate = learning_rate,
reg = regularization_strength, num_iters = 1500, verbose = False)
train_accuracy = np.mean(svm_clf.predict(X_train_feats) == y_train)
val_accuracy = np.mean(svm_clf.predict(X_val_feats) == y_val)
results[(learning_rate, regularization_strength)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm_clf
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? $\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_val_acc = 0
best_lr = 0
best_hs = 0
best_reg = 0
learning_rates = [i/10 for i in range(1, 10, 2)]
hidden_sizes = range(60, 100, 10)
regs = [i/4000 for i in range(1, 3)]
for hs in hidden_sizes:
for lr in learning_rates:
for reg in regs:
            nn = TwoLayerNet(input_dim, hs, num_classes)
            results = nn.train(X_train_feats, y_train, X_val_feats, y_val,
                               num_iters=1500, batch_size=200,
                               learning_rate=lr, learning_rate_decay=0.95,
                               reg=reg, verbose=False)
            val_acc = np.mean(nn.predict(X_val_feats) == y_val)
            print("hs:%d, lr:%f, reg:%f, val accuracy:%f"%(hs, lr, reg, val_acc))
            if val_acc > best_val_acc:
                best_val_acc = val_acc
                best_net = nn
best_hs = hs
best_lr = lr
best_reg = reg
print("best model is:")
print("hs:%d, lr:%f, reg:%f, val accuracy:%f"%(best_hs, best_lr, best_reg, best_val_acc))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.567
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [2.5e-5, 1e-5, 7.5e-4]
regularization_strengths = [5e2, 7.5e2, 1e3]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=2000, batch_size=200, verbose=True)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_acc = np.mean(y_train_pred == y_train)
val_acc = np.mean(y_val_pred == y_val)
results[(lr, reg)] = (train_acc, val_acc)
if best_val < val_acc:
best_val = val_acc
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? $\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
learning_rates = [5e-2, 1e-1, 2e-1]
regularization_strengths = [5e-4, 1e-3, 5e-3]
results = {}
best_val = -1
for lr in learning_rates:
for reg in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=5000, batch_size=200,
learning_rate=lr,learning_rate_decay=0.95,
reg=reg, verbose=True)
y_train_pred = net.predict(X_train_feats)
y_val_pred = net.predict(X_val_feats)
train_acc = np.mean(y_train_pred == y_train)
val_acc = np.mean(y_val_pred == y_val)
results[(lr, reg)] = (train_acc, val_acc)
if best_val < val_acc:
best_val = val_acc
best_net = net
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.537
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
#learning_rates = [1e-9, 1e-8, 1e-7]
#regularization_strengths = [1e5, 1e6, 1e7]
learning_rates =[5e-9, 7.5e-9, 1e-8]
regularization_strengths = [(5+i)*1e6 for i in range(-3,4)]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for rs in regularization_strengths:
for lr in learning_rates:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, lr, rs, num_iters=6000)
y_train_pred = svm.predict(X_train_feats)
train_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_accuracy = np.mean(y_val == y_val_pred)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
results[(lr,rs)] = train_accuracy, val_accuracy
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
results = {}
best_val = -1
best_net = None
learning_rates = [1e-2 ,1e-1, 5e-1, 1, 5]
regularization_strengths = [1e-3, 5e-3, 1e-2, 1e-1, 0.5, 1]
for lr in learning_rates:
for reg in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=1500, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95,
reg= reg, verbose=False)
val_acc = (net.predict(X_val_feats) == y_val).mean()
if val_acc > best_val:
best_val = val_acc
best_net = net
results[(lr,reg)] = val_acc
# Print out results.
for lr, reg in sorted(results):
val_acc = results[(lr, reg)]
    print('lr %e reg %e val accuracy: %f' % (
        lr, reg, val_acc))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.574
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each row is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
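# A minimal sketch of one way to fill this in (assuming LinearSVM.train has the
# same interface used earlier in the assignment); the grids above and num_iters
# are illustrative values, not tuned.
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(y_train == svm.predict(X_train_feats))
        val_acc = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val, best_svm = val_acc, svm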
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.fc_net import TwoLayerNet
from cs231n.solver import Solver
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
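# A minimal sketch (assuming the TwoLayerNet/Solver interfaces imported above);
# the hyperparameters are illustrative, not tuned. The data dict also carries
# the test split so the evaluation cell below can read data['X_test'] and
# data['y_test'] (both hold the feature representation / test labels).
data = {
    'X_train': X_train_feats, 'y_train': y_train,
    'X_val': X_val_feats, 'y_val': y_val,
    'X_test': X_test_feats, 'y_test': y_test,
}
solver = Solver(net, data,
                update_rule='sgd',
                optim_config={'learning_rate': 5e-2},
                lr_decay=0.95,
                num_epochs=10,
                batch_size=200,
                print_every=200)
solver.train()
best_net = net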
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
y_test_pred = np.argmax(best_net.loss(data['X_test']), axis=1)
test_acc = (y_test_pred == data['y_test']).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
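# Optional sanity check (not in the original notebook): after standardization,
# each training feature column should have mean close to 0 and standard
# deviation close to 1.
print(np.abs(X_train_feats.mean(axis=0)).max())   # should be near 0
print(X_train_feats.std(axis=0).mean())           # should be near 1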
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for learning_rate in learning_rates:
for reg in regularization_strengths:
print('lr %e reg %e' % (learning_rate, reg,))
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=learning_rate, reg=reg,
num_iters=1500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
accuracy_train = np.mean(y_train == y_train_pred)
accuracy_val = np.mean(y_val == y_val_pred)
results[(learning_rate, reg)] = (accuracy_train, accuracy_val)
if best_val < accuracy_val:
best_val = accuracy_val
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 256
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Train the network
_reg = 0.005
_learning_rate = 1e-0 * 0.88
_learning_rate_decay = 0.83
_num_iters = 2500
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=_num_iters, batch_size=200,
learning_rate=_learning_rate, learning_rate_decay=_learning_rate_decay,
reg=_reg, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.581
###Markdown
Image features exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
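# Aside (illustrative only): the HOG part bins local gradient orientations,
# weighted by gradient magnitude. A minimal sketch of that idea on one
# grayscale image; the real hog_feature also uses cells, blocks and
# normalization, so this is only the core intuition.
gray_img = X_train[0].mean(axis=2)              # grayscale version of one image
gy, gx = np.gradient(gray_img)                  # image gradients along y and x
mag = np.sqrt(gx ** 2 + gy ** 2)                # gradient magnitude
ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation in [0, 180)
orient_hist, _ = np.histogram(ang, bins=9, range=(0, 180), weights=mag)
print(orient_hist)                              # 9-bin orientation histogram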
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
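# Aside (not part of the assignment): why append a column of ones. With the
# extra constant feature, a single weight vector can absorb the bias term,
# since [x, 1] . [w, b] = x . w + b. Tiny numeric check with made-up values:
x_demo = np.array([2.0, 3.0]); w_demo = np.array([0.5, -1.0]); b_demo = 0.25
print(np.dot(np.append(x_demo, 1.0), np.append(w_demo, b_demo)),
      np.dot(x_demo, w_demo) + b_demo)          # both give the same number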
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=learning_rate, reg=regularization_strength,
num_iters=1500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
training_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(learning_rate, regularization_strength)] = (training_accuracy, validation_accuracy)
if best_val == -1 or best_val < validation_accuracy:
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense?**Your Answer:** They make sense! Look at the plane column: the colors are simple and uniform, much like a real plane against the sky. So the classifier can pick up on whether a photo has many colors and use that as a feature for classification. Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
hidden_sizes = [500]
regs = [0.01]
best_val_acc = -1
for hidden_size in hidden_sizes:
for reg in regs:
num_classes = 10
net = TwoLayerNet(input_dim, hidden_size, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=10000, batch_size=200,
learning_rate=1, learning_rate_decay=0.95,
reg=reg, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
        print('Validation accuracy: ', val_acc)
        if best_net is None or val_acc > best_val_acc:
best_val_acc = val_acc
best_net = net
net = best_net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.566
###Markdown
Image features exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load data Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats,
y_train,
learning_rate=lr,
reg=rs)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train==y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val==y_val_pred)
results[lr, rs] = [train_acc, val_acc]
if val_acc > best_val:
best_val = val_acc
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ There are several mistakes in each category, and they make sense: the structure and color of each misclassified image resemble the predicted class to some extent, though the model is still not very accurate. Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.fc_net import TwoLayerNet
from cs231n.solver import Solver
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
best_val_accuracy = 0
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
data = {"X_train": X_train_feats,
"y_train": y_train,
"X_val": X_val_feats,
"y_val": y_val}
hidden_dims = [200, 700, 800]
learning_rates = np.linspace(1e-4, 1e-2, 10)
for hd in hidden_dims:
    net = TwoLayerNet(input_dim, hd, num_classes)  # use the hidden size being cross-validated
solver = None
for lr in learning_rates:
print("Hidden dimensions: {}".format(hd))
print("Learning rate: {}".format(lr))
solver = Solver(net, data,
update_rule='sgd',
optim_config={'learning_rate': lr},
lr_decay=0.95,
num_epochs=10,
batch_size=100,
print_every=490)
solver.train()
val_acc = solver.best_val_acc
if val_acc > best_val_accuracy:
best_val_accuracy = val_acc
best_net = solver.model
print("The best validation accuracy is: {}".format(best_val_accuracy))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
y_test_pred = np.argmax(best_net.loss(X_test_feats), axis=1)
test_acc = (y_test_pred == y_test).mean()
print(test_acc)
###Output
_____no_output_____
###Markdown
Image features exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
###Code
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load data Similar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
##
learning_rate_pows = np.linspace(-9, -7, 8)
regularization_strength_pows = np.linspace(4, 7, 8)
jitter_power = 0.4
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
def jitter(value, power, step):
return value + power * step / 2 * np.random.randn()
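# jitter(value, power, step) returns the grid value perturbed by zero-mean
# Gaussian noise with standard deviation power * step / 2, so the search below
# samples a randomized layout instead of landing exactly on the grid points.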
def search():
learning_rate_pow_step = learning_rate_pows[1] - learning_rate_pows[0]
regularization_strength_pow_step = regularization_strength_pows[1] - regularization_strength_pows[0]
best = None
best_val_acc = 0.0
total_iteration_num = len(learning_rate_pows) * len(regularization_strength_pows)
current_iteration = 0
for learning_rate_pow in learning_rate_pows:
for regularization_strength_pow in regularization_strength_pows:
learning_rate = 10**jitter(
learning_rate_pow, jitter_power, learning_rate_pow_step)
regularization_strength = 10**jitter(
regularization_strength_pow, jitter_power, regularization_strength_pow_step)
print()
print(f'rate: {learning_rate}')
print(f'reg: {regularization_strength}')
print()
svm = LinearSVM()
svm.train(X_train_feats, y_train,
learning_rate=learning_rate,
reg=regularization_strength,
num_iters=1500,
verbose=True,
)
trained_accuracy = np.mean(y_train == svm.predict(X_train_feats))
val_accuracy = np.mean(y_val == svm.predict(X_val_feats))
results[(
learning_rate,
regularization_strength,
)] = (
trained_accuracy,
val_accuracy,
)
            filled = int(80 * current_iteration / total_iteration_num)  # progress bar
            print('*' * filled + '-' * (80 - filled))
            current_iteration += 1
if best_val_acc < val_accuracy:
best_val_acc = val_accuracy
best = svm
    return best_val_acc, best
%time best_val, best_svm = search()
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results, key=lambda x: -results[x][1]):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1: Describe the misclassification results that you see. Do they make sense? Some cases seem to fail because their color histograms are quite close to the color histograms of images from the predicted class; a few of the misclassified images are genuinely similar to that class (for example trucks and cars); and HOG takes the direction of local gradients into account, but for such small images it does not work very well. Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_size = X_train_feats.shape[1]
hidden_size = 100
num_classes = 10
batch_size = 200
# net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
num_iters = 1000
# searching space
learning_rate_pows = np.linspace(-2, 1, 5)
regularization_strength_pows = np.linspace(-10.0, 2, 5)
# random layout
jitter_power = 0.4
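# The *_pows arrays are base-10 exponents: the search below covers learning
# rates from 1e-2 to 1e1 and regularization strengths from 1e-10 to 1e2,
# each jittered by the random offsets controlled by jitter_power.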
results = {}
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
def search():
"""
We have disabled hidden_size_space search dimension.
"""
best_net = None # store the best model into this
best_acc = 0.0
learning_rate_pow_step = learning_rate_pows[1] - learning_rate_pows[0]
regularization_strength_pow_step = regularization_strength_pows[1] - regularization_strength_pows[0]
results = {}
total_iteration_num = len(learning_rate_pows) * len(regularization_strength_pows)
current_iteration = 0
for learning_rate_pow in learning_rate_pows:
for regularization_strength_pow in regularization_strength_pows:
# randomize layout
learning_rate_pow_jitter = jitter_power * learning_rate_pow_step / 2 * np.random.randn()
regularization_strength_jitter = jitter_power * regularization_strength_pow_step / 2 * np.random.randn()
# define params
learning_rate = 10**(learning_rate_pow + learning_rate_pow_jitter)
regularization_strength = 10**(regularization_strength_pow + regularization_strength_jitter)
print()
print(f'rate: {learning_rate}')
print(f'reg: {regularization_strength}')
print(f'hidden_size: {hidden_size}')
print()
net = TwoLayerNet(input_size, hidden_size, num_classes)
stats = net.train(X_train_feats, y_train,
X_val_feats, y_val,
num_iters=num_iters,
batch_size=batch_size,
learning_rate=learning_rate,
learning_rate_decay=0.95,
reg=regularization_strength,
verbose=True,
)
# Predict on the validation set
trained_accuracy = (net.predict(X_train_feats) == y_train).mean()
val_accuracy = (net.predict(X_val_feats) == y_val).mean()
print(f'train accuracy: {trained_accuracy}')
print(f'val accuracy: {val_accuracy}')
filled = int(80*current_iteration/total_iteration_num)
print('*' * filled + '-' * (80 - filled))
current_iteration+=1
if best_acc < val_accuracy:
best_acc = val_accuracy
best_net = net
results[(
learning_rate,
regularization_strength,
hidden_size,
)] = (
trained_accuracy,
val_accuracy,
)
return best_net, results
%time best_net, results = search()
################################################################################
# END OF YOUR CODE #
################################################################################
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 50
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors, edgecolors='lightgrey')
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors, edgecolors='lightgrey')
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
sorted_results = sorted(results.items(), key=lambda x: x[1][1])
print(sorted_results[-1])
(
best_learning_rate,
best_regularization_strength,
hidden_size,
) = sorted_results[-1][0]
print('The best params are:')
print(f'rate: {best_learning_rate}')
print(f'reg: {best_regularization_strength}')
print(f'hidden_size: {hidden_size}')
print()
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.56
###Markdown
Bonus: Design your own features!You have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.For bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline. Bonus: Do something extra!Use the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!
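As a concrete starting point for the first bonus, here is a minimal sketch of one possible extra feature: a grayscale intensity histogram written so that it can be passed to `extract_features` alongside `hog_feature` and `color_histogram_hsv`. This is only an illustration (the function name, the 16-bin choice, and the assumption that images arrive as H x W x 3 arrays with values in [0, 255] are ours, not part of the assignment code); it would still need to be cross-validated against the HOG + color histogram baseline.
###Code
import numpy as np

def gray_histogram_feature(im, nbin=16):
    """Toy extra feature: normalized histogram of grayscale intensities.

    Assumes im is an H x W x 3 array with values in [0, 255], the same input
    convention used by hog_feature and color_histogram_hsv above.
    """
    gray = im.astype(np.float64).mean(axis=2)            # simple average over the RGB channels
    hist, _ = np.histogram(gray, bins=nbin, range=(0, 255))
    return hist.astype(np.float64) / gray.size            # normalize so images of any size are comparable

# It could then be plugged into the existing pipeline, e.g.:
# feature_fns = [hog_feature,
#                lambda img: color_histogram_hsv(img, nbin=num_color_bins),
#                gray_histogram_feature]
###Output
_____no_output_____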
###Code
# play rolling sound at the end of the notebook :)
# very useful for long training
from IPython.display import HTML
video_id = 'mzAfTmC3It0'
HTML(f'<iframe width="560" height="315" src="https://www.youtube.com/embed/{video_id}?rel=0&controls=1&showinfo=0&autoplay=1" frameborder="0" allowfullscreen></iframe>')
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 100 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = np.logspace(-6.5, -4.5, 10)
regularization_strengths = np.logspace(3.5, 4.5, 10)
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for learn_rate in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
_ = svm.train(X_train_feats, y_train, learning_rate=learn_rate, reg=reg,
num_iters=1500)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_accuracy = np.mean(y_train == y_train_pred)
val_accuracy = np.mean(y_val == y_val_pred)
results[(learn_rate, reg)] = (train_accuracy,val_accuracy)
if best_val < val_accuracy:
best_val = val_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?Misclassifications tend to be monochromatic and mostly make sense. For example, the two birds classified under plane against a blue sky would likely be misclassified even by humans. The other images are mostly white, which implies that the classifier heavily weighs white and shades of gray towards planes (i.e., a white plane in the clouds). Going through the other classes, we see something similar. Trucks are misclassified as cars, which even humans might do. The animals are somewhat harder to interpret, however. For birds, green tints seem to be favored, but there is no clear overall trend. Deer favor green tints, and some misclassified images are hard to distinguish from simply "animal in green grass". The frog class tends to have a yellowish bias, and dogs a brown one, which explains some of the misclassifications. Going further, we see that ship favors blue, which explains why some planes end up classified there.Overall, it's interesting to note that most misclassifications are "understandable" and sometimes even hard to make out as a human (and might require outside knowledge). Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = np.logspace(-1, 0, 3)
regularization_strengths = np.logspace(-4, -2, 3)
results = {}  # keep this search's results separate from the SVM sweep above
best_stats = None
best_val = -1 # The highest validation accuracy that we have seen so far.
np.random.seed(0) # For repeatability.
for reg in regularization_strengths:
for learn_rate in learning_rates:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2000, batch_size=400,
learning_rate=learn_rate, learning_rate_decay=0.9,
reg=reg, verbose=True)
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
print('Params. rate: %s, reg: %s.' % (learn_rate, reg))
results[(learn_rate, reg)] = (val_acc)
if best_val < val_acc:
best_val = val_acc
best_net = net
best_stats = stats
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
net = best_net
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.607
###Markdown
Bonus: Design your own features!You have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.For bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline. Bonus: Do something extra!Use the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off! Bayesian OptimizationSee nn_optimizer for the Bayesian Optimization technique used to tune the NN hyperparameters. We used this optimizer to intelligently find optimal parameters for our NNs, and then explored the space surrounding these optimal parameters. Plots for Best BBA few plots to visualize our results.
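The `nn_optimizer` module itself is not included in this notebook, so as a purely illustrative sketch of the idea (assuming the scikit-optimize package is available and reusing the `TwoLayerNet` training call from above), a Bayesian search over learning rate and regularization strength could look like the cell below. The search ranges and call counts are arbitrary placeholders, not the values we actually used.
###Code
# Illustrative sketch only -- not the nn_optimizer module referenced above.
from skopt import gp_minimize
from skopt.space import Real
from cs231n.classifiers.neural_net import TwoLayerNet

search_space = [Real(1e-2, 1e0, prior='log-uniform', name='learning_rate'),
                Real(1e-4, 1e-1, prior='log-uniform', name='reg')]

def nn_objective(params):
    lr, reg = params
    net = TwoLayerNet(X_train_feats.shape[1], 500, 10)
    net.train(X_train_feats, y_train, X_val_feats, y_val,
              num_iters=1000, batch_size=200,
              learning_rate=lr, learning_rate_decay=0.95, reg=reg, verbose=False)
    val_acc = (net.predict(X_val_feats) == y_val).mean()
    return -val_acc  # gp_minimize minimizes, so return negative validation accuracy

# result = gp_minimize(nn_objective, search_space, n_calls=20, random_state=0)
# print(result.x, -result.fun)
###Output
_____no_output_____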
###Code
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for the bonus section.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
'''learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
learning_rates = [1e-9, 2.5e-9, 5e-9]
regularization_strengths = [i*1e7 for i in range(10)]'''
learning_rates = [(1+i*0.1)*1e-9 for i in range(6)]
regularization_strengths = [(1+i*0.1)*4e7 for i in range(6)]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
#pass
for lr in learning_rates:
for reg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, lr, reg, num_iters=2000)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
results[(lr, reg)] = train_acc, val_acc
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense? Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
results = {}
best_val = -1
'''learning_rates = [1e-2, 5e-2, 1e-1, 5e-1]
regularization_strengths = [1e-3, 1e-2, 1e-1]'''
learning_rates = [(1+i*0.1)*5e-1 for i in range(6)]
regularization_strengths = [(1+i*0.1)*1e-3 for i in range(6)]
for lr in learning_rates:
for reg in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)  # re-initialize so each setting trains from scratch
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, num_iters=1500, batch_size=200, verbose=False)
y_train_pred = net.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
if val_acc > best_val:
best_val = val_acc
best_net = net
results[(lr, reg)] = train_acc, val_acc
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
###Output
0.574
###Markdown
Image features exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.All of your work for this exercise will be done in this notebook.
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load dataSimilar to previous exercises, we will load CIFAR-10 data from disk.
###Code
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
###Output
_____no_output_____
###Markdown
Extract FeaturesFor each image we will compute a Histogram of OrientedGradients (HOG) as well as a color histogram using the hue channel in HSVcolor space. We form our final feature vector for each image by concatenatingthe HOG and color histogram feature vectors.Roughly speaking, HOG should capture the texture of the image while ignoringcolor information, and the color histogram represents the color of the inputimage while ignoring texture. As a result, we expect that using both togetherought to work better than using either alone. Verifying this assumption wouldbe a good thing to try for your own interest.The `hog_feature` and `color_histogram_hsv` functions both operate on a singleimage and return a feature vector for that image. The extract_featuresfunction takes a set of images and a list of feature functions and evaluateseach feature function on each image, storing the results in a matrix whereeach column is the concatenation of all feature vectors for a single image.
###Code
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
###Output
Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images
Done extracting features for 49000 / 49000 images
###Markdown
Train SVM on featuresUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
###Code
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for (lr, reg) in [(lr, reg) for lr in learning_rates for reg in regularization_strengths]:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_pred == y_val)
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, train_acc, val_acc))
results[(lr,reg)] = (train_acc, val_acc)
if(val_acc > best_val):
best_val = val_acc
best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved: %f' % best_val)
# Evaluate your trained SVM on the test set: you should be able to get at least 0.40
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
###Output
_____no_output_____
###Markdown
Inline question 1:Describe the misclassification results that you see. Do they make sense?$\color{blue}{\textit Your Answer:}$ Neural Network on image featuresEarlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
###Code
# Preprocessing: Remove the bias dimension
# Make sure to run this cell only ONCE
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)
from cs231n.classifiers.fc_net import TwoLayerNet
from cs231n.solver import Solver
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
data = {}
data['X_train'] = X_train_feats
data['y_train'] = y_train
data['X_val'] = X_val_feats
data['y_val'] = y_val
data['X_test'] = X_test_feats
data['y_test'] = y_test
solver = Solver(net, data,
update_rule='sgd',
optim_config={
'learning_rate': 3e-1,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100, verbose = True)
solver.train()
best_net = net
print('Validation accuracy: ', solver.best_val_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.
y_test_pred = np.argmax(best_net.loss(data['X_test']), axis=1)
test_acc = (y_test_pred == data['y_test']).mean()
print(test_acc)
###Output
0.583
|
examples/PKIS2_kinase_informed.ipynb | ###Markdown
PKIS2 kinase-informed notebookIn this notebook, we create the workflow for the following model setting:- Data set: PKIS2 which uses KinomeScan assays- Molecular component type: ligand-based- Ligand feature: Morgan fingerprint- Kinase feature: scaled binding site composition- Model Architecture: tree-based (using XGBoost) and then a dense neural networkWe will consider the following metrics:- Loss: MSE- Goodness of fit: $R^2$We will test different evaluation strategies: train-test split - [ ] Random (K fold)
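The random K-fold item in the checklist above is not implemented in this notebook; as a sketch of what it could look like (assuming scikit-learn is available, and with `X` and `y` as hypothetical placeholders for a featurized design matrix and the corresponding measurements), see the cell below.
###Code
from sklearn.model_selection import KFold

# X and y are hypothetical placeholders -- the rest of this notebook works with
# KinoML dataset exports (to_xgboost / to_pytorch) rather than raw arrays.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
# for train_idx, test_idx in kf.split(X):
#     X_train, X_test = X[train_idx], X[test_idx]
#     y_train, y_test = y[train_idx], y[test_idx]
###Output
_____no_output_____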
###Code
%load_ext autoreload
%autoreload 2
import warnings
warnings.simplefilter("ignore")
import logging
logging.basicConfig(level=logging.ERROR)
import numpy as np
###Output
_____no_output_____
###Markdown
Load PKIS2 data
###Code
from kinoml.datasets.kinomescan.pkis2 import PKIS2DatasetProvider
###Output
_____no_output_____
###Markdown
First as a DatasetProvider
###Code
pkis2 = PKIS2DatasetProvider.from_source()
###Output
_____no_output_____
###Markdown
Now as a pandas dataframe
###Code
df = pkis2.to_dataframe()
df
###Output
_____no_output_____
###Markdown
A few statistics about this data
###Code
import matplotlib.pyplot as plt
f = plt.figure()
plt.hist(df['PercentageDisplacementMeasurement'])
plt.xlabel('Value in %')
plt.ylabel('Count')
plt.title('Histogram of measurements')
plt.show()
print("Measurements:", len(pkis2.measurements))
print("Systems:", len(pkis2.systems))
print("Ligands:", len(set([s.ligand for s in pkis2.systems])))
print("Proteins:", len(set([s.protein for s in pkis2.systems])))
from collections import Counter
print('Number of zeros : ',
Counter(df['PercentageDisplacementMeasurement'] == 0.0))
###Output
Number of zeros : Counter({False: 139025, True: 122845})
###Markdown
Featurize the ligandWe use the Morgan fingerprint with a radius of 2 and a vector length of 1024.
###Code
from kinoml.features.ligand import MorganFingerprintFeaturizer
morgan_featurizer = MorganFingerprintFeaturizer(nbits=1024, radius=2)
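# For reference, the same kind of fingerprint can be computed directly with RDKit.
# This is only an illustrative sketch (it assumes RDKit is installed in this
# environment); the KinoML featurizer above is what the pipeline actually uses.
# The SMILES string below is an arbitrary example ligand, not one from PKIS2.
from rdkit import Chem
from rdkit.Chem import AllChem

example_mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
example_fp = AllChem.GetMorganFingerprintAsBitVect(example_mol, radius=2, nBits=1024)
example_length = len(example_fp)  # 1024-bit vector, matching nbits above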
###Output
_____no_output_____
###Markdown
Featurize the kinase
###Code
from kinoml.features.protein import AminoAcidCompositionFeaturizer
from kinoml.features.core import ScaleFeaturizer, Concatenated, Pipeline
composition_featurizer = Pipeline([AminoAcidCompositionFeaturizer(),
ScaleFeaturizer()])
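# Conceptually, amino acid composition is just the per-residue frequency of the
# 20 standard amino acids in a protein sequence; the pipeline above additionally
# standardizes those frequencies with ScaleFeaturizer. A minimal standalone
# sketch of the raw composition idea (independent of KinoML's featurizer):
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sequence_composition(sequence):
    counts = np.array([sequence.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(sequence), 1)  # 20-dimensional frequency vector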
###Output
_____no_output_____
###Markdown
Model input featurizersFor this model, we concatenate the ligand and kinase featurizers so that each system is represented by a single input vector for the considered model.We then featurize all the systems in PKIS2.
###Code
concat_featurizers = Concatenated([morgan_featurizer,
composition_featurizer], axis=0)
pkis2.featurize(concat_featurizers)
###Output
_____no_output_____
###Markdown
XGBoost packageWe export the data to be compatible with the XGBoost library.
###Code
data = pkis2.to_xgboost()
data
###Output
_____no_output_____
###Markdown
Objective functionThe objective function is what the training algorithm optimizes. It is a custom loss function which depends on the observation model.
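For orientation, a custom objective in XGBoost is just a callable that, given the raw predictions and the training `DMatrix`, returns the per-example gradient and Hessian of the loss; the KinoML adapter below generates such a callable that also folds in the observation model. A generic mean-squared-error version (a sketch for illustration, not the adapter itself) is shown first.
###Code
import numpy as np

def mse_objective(preds, dtrain):
    """Generic squared-error objective in the form expected by xgb.train(obj=...)."""
    labels = dtrain.get_label()
    grad = 2.0 * (preds - labels)      # d/d(pred) of (pred - label)^2
    hess = np.full_like(preds, 2.0)    # second derivative is constant
    return grad, hess
###Output
_____no_output_____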
###Code
obj_function = pkis2.loss_adapter(backend="xgboost", loss="mse")
obj_function
###Output
_____no_output_____
###Markdown
Model TrainingWe define the tree-based model _per se_ and train it.
###Code
import xgboost as xgb
params = {'learning_rate': 1.0}
model = xgb.train(dtrain=data, params=params, obj=obj_function)
###Output
_____no_output_____
###Markdown
Model evaluationWe evaluate the model on the data set, which gives us a prediction for $\Delta g$. Given this prediction, we then apply the observation model to obtain predictions for the observed values.
###Code
observation_model = pkis2.observation_model(backend="xgboost")
observation_model
deltag_train = model.predict(data) # model predicts deltag
prediction = observation_model(deltag_train) # Apply observation model to predicted deltag
true = data.get_label() # true, observed values
import numpy as np
from matplotlib import pyplot as plt
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
fig, ax = plt.subplots()
ax.scatter(prediction, true)
ax.set(xlim=(0, 100), ylim=(0, 100))
ax.set_xlabel("Predicted y")
ax.set_ylabel("True y")
x = np.linspace(0, 100, 10)  # identity line spanning the full axis range
ax.plot(x, x)
ax.set_aspect('equal', adjustable='box')
plt.show()
r2 = r2_score(true, prediction)
print(f"R2: Goodness of fit measure: {r2:.2f}")
if all(elem==prediction[0] for elem in prediction):
print("All outputs are equal: ")
mse = mean_squared_error(true, prediction)
mae = mean_absolute_error(true, prediction)
rmse = np.sqrt(mse)
print(f"MSE: {mse:.2f}")
print(f"RMSE: {rmse:.2f}")
print(f"MAE: {mae:.2f}")
###Output
_____no_output_____
###Markdown
Neural NetworkLet's see how the model performs if we use a neural network instead. Let's export the data to pytorch.
###Code
datasets = pkis2.to_pytorch()
datasets
###Output
_____no_output_____
###Markdown
We define the observation model which maps $\Delta g$ to percentage displacement measurement types.
###Code
pct_displacement_model = pkis2.measurement_type.observation_model(backend="pytorch")
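# For intuition only: the observation model maps the model's (unitless) predicted
# free energy to a bounded percentage-displacement value. A toy example of such a
# mapping -- NOT the actual KinoML observation model obtained above -- could be
# something like: toy_pct = 100.0 * torch.sigmoid(-delta_g)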
import torch
from kinoml.ml.torch_models import NeuralNetworkRegression
# Use DataLoader for minibatches
loader = datasets.as_dataloader(batch_size=512)
model = NeuralNetworkRegression(input_size=datasets.systems[0].shape[0])
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_function = torch.nn.MSELoss() # Mean squared error
print(f"{model} has {sum(param.numel() for param in model.parameters())} parameters")
nb_epoch = 100
loss_timeseries = []
for epoch in range(nb_epoch):
cumulative_loss = 0
for i, (x, y) in enumerate(loader, 1):
# Clear gradients
optimizer.zero_grad()
# Obtain model prediction given model input
# x.requires_grad_()
delta_g = model(x)
# with observation model
# with torch.no_grad():
prediction = pct_displacement_model(delta_g)
loss = loss_function(prediction, y)
# Obtain loss for the predicted output
cumulative_loss += loss.item()
# Gradients w.r.t to parameters
loss.backward()
# Optimizer
optimizer.step()
loss_timeseries.append(cumulative_loss)
if epoch % 10 == 0:
print(f"epoch {epoch} : loss {loss_timeseries[-1]}")
print("Done!")
f = plt.figure()
plt.plot(loss_timeseries)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
model_input = torch.tensor(datasets.systems).type(torch.FloatTensor)
true = datasets.measurements
delta_g = model(model_input)
prediction = datasets.observation_model(delta_g).detach().numpy()
fig, ax = plt.subplots()
ax.scatter(prediction, true)
ax.set(xlim=(0, 100), ylim=(0, 100))
ax.set_xlabel("Predicted y")
ax.set_ylabel("True y")
ax.set_title(f"Percentage displacement for PKIS2 dataset")
x = np.linspace(0, 100, 10)
ax.plot(x, x)
ax.set_aspect('equal', adjustable='box')
plt.show()
r2 = r2_score(true, prediction)
print(f"R2: Goodness of fit measure: {r2:.2f}")
if all(elem==prediction[0] for elem in prediction):
print("All outputs are equal: ")
mse = mean_squared_error(true, prediction)
mae = mean_absolute_error(true, prediction)
rmse = np.sqrt(mse)
print(f"MSE: {mse:.2f}")
print(f"RMSE: {rmse:.2f}")
print(f"MAE: {mae:.2f}")
###Output
_____no_output_____ |
W1_Assignment_BirdBoxes.ipynb | ###Markdown
Predicting Bounding BoxesWelcome to Course 3, Week 1 Programming Assignment! In this week's assignment, you'll build a model to predict bounding boxes around images. - You will use transfer learning on any of the pre-trained models available in Keras. - You'll be using the [Caltech Birds - 2010](http://www.vision.caltech.edu/visipedia/CUB-200.html) dataset. How to submit your workNotice that there is not a "submit assignment" button in this notebook. To check your work and get graded on your work, you'll train the model, save it and then upload the model to Coursera for grading. - [Initial steps](0) - [0.1 Set up your Colab](0-1) - [0.2 Set up the data location](0-2) - [0.3 Choose the GPU Runtime](0-3) - [0.4 Mount your drive](0-4) - [0.5 Imports](0-5)- [1. Visualization Utilities](1) - [1.1 Bounding Boxes Utilities](1-1) - [1.2 Data and Predictions Utilities](1-2)- [2. Preprocessing and Loading the Dataset](2) - [2.1 Preprocessing Utilities](2-1) - [2.2 Visualize the prepared Data](2-2) - [2.3 Loading the Dataset](2-3)- [3. Define the Network](3) - [Exercise 1](ex-01) - [Exercise 2](ex-02) - [Exercise 3](ex-03) - [Exercise 4](ex-04) - [Exercise 5](ex-05)- [4. Training the Model](4) - [Prepare to train the model](4.1) - [Exercise 6](ex-06) - [Fit the model to the data](4.2) - [Exercise 7](ex-07)- [5. Validate the Model](5) - [5.1 Loss](5-1) - [5.2 Save your Model](5-2) - [5.3 Plot the Loss Function](5-3) - [5.4 Evaluate performance using IoU](5-4)- [6. Visualize Predictions](6)- [7. Upload your model for grading](7) 0. Initial steps 0.1 Set up your Colab- As you cannot save the changes you make to this colab, you have to make a copy of this notebook in your own drive and run that. - You can do so by going to `File -> Save a copy in Drive`. - Close this colab and open the copy which you have made in your own drive. Then continue to the next step to set up the data location. Set up the data locationA copy of the dataset that you'll be using is stored in a publicly viewable Google Drive folder. You'll want to add a shortcut to it to your own Google Drive.- Go to this google drive folder named [TF3 C3 W1 Data](https://drive.google.com/drive/folders/1xgqUw9uWzL5Kh88iPdX1TBQgnkc-wVKd?usp=sharing)- Next to the folder name "TF3 C3 W1 Data" (at the top of the page beside "Shared with me"), hover your mouse over the triangle to reveal the drop down menu. - Use the drop down menu to select `"Add shortcut to Drive"` A pop-up menu will open up. - In the pop-up menu, "My Drive" is selected by default. Click the `ADD SHORTCUT` button. This should add a shortcut to the folder `TF3 C3 W1 Data` within your own google drive at the location `content/drive`.- To verify, go to the left-side menu and click on "My Drive". Scroll through your files to look for the shortcut TF3 C3 W1 Data. Please make sure this happens, as you'll be reading the data for this notebook from this folder. 0.3 Choose the GPU Runtime- Make sure your runtime is **GPU** (_not_ CPU or TPU). And if it is an option, make sure you are using _Python 3_. You can select these settings by going to `Runtime -> Change runtime type -> Select the above mentioned settings and then press SAVE` 0.4 Mount your drivePlease run the next code cell and follow these steps to mount your Google Drive so that it can be accessed by this Colab.- Run the code cell below. 
A web link will appear below the cell.- Please click on the web link, which will open a new tab in your browser that asks you to choose your google account.- Choose your google account to log in.- The page will display "Google Drive File Stream wants to access your Google Account". Please click "Allow".- The page will now show a code (a line of text). Please copy the code and return to this Colab.- Paste the code into the textbox that is labeled "Enter your authorization code:" and hit ``- The text will now say "Mounted at /content/drive/"- Please look at the files explorer of this Colab (left side) and verify that you can navigate to `drive/MyDrive/TF3 C3 W1 Data/caltech_birds2010/0.1.1` . If the folder is not there, please redo the steps above and make sure that you're able to add the shortcut to the hosted dataset.
###Code
from google.colab import drive
drive.mount('/content/drive/', force_remount=True)
###Output
_____no_output_____
###Markdown
0.5 Imports
###Code
import os, re, time, json
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
import tensorflow_datasets as tfds
import cv2
###Output
_____no_output_____
###Markdown
Store the path to the data.- Remember to follow the steps to `set up the data location` (above) so that you'll have a shortcut to the data in your Google Drive.
###Code
data_dir = "/content/drive/My Drive/TF3 C3 W1 Data/"
###Output
_____no_output_____
###Markdown
1. Visualization Utilities 1.1 Bounding Boxes UtilitiesWe have provided you with some functions which you will use to draw bounding boxes around the birds in the `image`.- `draw_bounding_box_on_image`: Draws a single bounding box on an image.- `draw_bounding_boxes_on_image`: Draws multiple bounding boxes on an image.- `draw_bounding_boxes_on_image_array`: Draws multiple bounding boxes on an array of images.
###Code
def draw_bounding_box_on_image(image, ymin, xmin, ymax, xmax, color=(255, 0, 0), thickness=5):
"""
Adds a bounding box to an image.
    Bounding box coordinates are expected in absolute (pixel) coordinates.
Args:
      image: a numpy array of shape (height, width, 3).
ymin: ymin of bounding box.
xmin: xmin of bounding box.
ymax: ymax of bounding box.
xmax: xmax of bounding box.
color: color to draw bounding box. Default is red.
      thickness: line thickness. Default value is 5.
"""
image_width = image.shape[1]
image_height = image.shape[0]
cv2.rectangle(image, (int(xmin), int(ymin)), (int(xmax), int(ymax)), color, thickness)
def draw_bounding_boxes_on_image(image, boxes, color=[], thickness=5):
"""
Draws bounding boxes on image.
Args:
      image: a numpy array of shape (height, width, 3).
      boxes: a 2 dimensional numpy array of [N, 4]: (xmin, ymin, xmax, ymax).
             The coordinates are given in absolute (pixel) values.
      color: list of colors, one per bounding box. Default is red.
      thickness: line thickness. Default value is 5.
Raises:
ValueError: if boxes is not a [N, 4] array
"""
boxes_shape = boxes.shape
if not boxes_shape:
return
if len(boxes_shape) != 2 or boxes_shape[1] != 4:
raise ValueError('Input must be of size [N, 4]')
for i in range(boxes_shape[0]):
draw_bounding_box_on_image(image, boxes[i, 1], boxes[i, 0], boxes[i, 3],
boxes[i, 2], color[i], thickness)
def draw_bounding_boxes_on_image_array(image, boxes, color=[], thickness=5):
"""
Draws bounding boxes on image (numpy array).
Args:
image: a numpy array object.
      boxes: a 2 dimensional numpy array of [N, 4]: (xmin, ymin, xmax, ymax).
             The coordinates are given in absolute (pixel) values.
      color: list of colors, one per bounding box. Default is red.
      thickness: line thickness. Default value is 5.
Raises:
ValueError: if boxes is not a [N, 4] array
"""
draw_bounding_boxes_on_image(image, boxes, color, thickness)
return image
###Output
_____no_output_____
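###Markdown
As a quick sanity check of the drawing utilities (an illustrative sketch only), a box can be drawn on a dummy image; coordinates are pixel values in `[xmin, ymin, xmax, ymax]` order, matching how `display_digits_with_boxes` in the next cell builds them.
###Code
import numpy as np

dummy_image = np.zeros((224, 224, 3), dtype=np.uint8)
dummy_boxes = np.array([[30.0, 20.0, 200.0, 180.0]])   # one box: xmin, ymin, xmax, ymax
_ = draw_bounding_boxes_on_image_array(dummy_image, dummy_boxes, color=[(0, 255, 0)])
plt.imshow(dummy_image)
plt.show()
###Output
_____no_output_____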
###Markdown
1.2 Data and Predictions UtilitiesWe've given you some helper functions and code that are used to visualize the data and the model's predictions.- `display_digits_with_boxes`: This displays a row of "digit" images along with the model's predictions for each image.- `plot_metrics`: This plots a given metric (like loss) as it changes over multiple epochs of training.
###Code
# Matplotlib config
plt.rc('image', cmap='gray')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# utility to display a row of digits with their predictions
def display_digits_with_boxes(images, pred_bboxes, bboxes, iou, title, bboxes_normalized=False):
n = len(images)
fig = plt.figure(figsize=(20, 4))
plt.title(title)
plt.yticks([])
plt.xticks([])
for i in range(n):
ax = fig.add_subplot(1, 10, i+1)
bboxes_to_plot = []
if (len(pred_bboxes) > i):
bbox = pred_bboxes[i]
bbox = [bbox[0] * images[i].shape[1], bbox[1] * images[i].shape[0], bbox[2] * images[i].shape[1], bbox[3] * images[i].shape[0]]
bboxes_to_plot.append(bbox)
if (len(bboxes) > i):
bbox = bboxes[i]
if bboxes_normalized == True:
bbox = [bbox[0] * images[i].shape[1],bbox[1] * images[i].shape[0], bbox[2] * images[i].shape[1], bbox[3] * images[i].shape[0] ]
bboxes_to_plot.append(bbox)
img_to_draw = draw_bounding_boxes_on_image_array(image=images[i], boxes=np.asarray(bboxes_to_plot), color=[(255,0,0), (0, 255, 0)])
plt.xticks([])
plt.yticks([])
plt.imshow(img_to_draw)
# note: `iou_threshold` is a global that must be defined elsewhere in the
# notebook before this function is called with a non-empty `iou` list
if len(iou) > i:
color = "black"
if (iou[i][0] < iou_threshold):
color = "red"
ax.text(0.2, -0.3, "iou: %s" %(iou[i][0]), color=color, transform=ax.transAxes)
# utility to display training and validation curves
def plot_metrics(metric_name, title, ylim=5):
plt.title(title)
plt.ylim(0,ylim)
plt.plot(history.history[metric_name],color='blue',label=metric_name)
plt.plot(history.history['val_' + metric_name],color='green',label='val_' + metric_name)
###Output
_____no_output_____
###Markdown
2. Preprocess and Load the Dataset 2.1 Preprocessing UtilitiesWe have given you some helper functions to pre-process the image data. read_image_tfds- Resizes `image` to (224, 224)- Normalizes `image` pixel values to the [-1, 1] range that MobileNetV2 expects- Normalizes the bounding box coordinates to the [0, 1] range, as fractions of the original image width and height
###Code
def read_image_tfds(image, bbox):
image = tf.cast(image, tf.float32)
shape = tf.shape(image)
factor_x = tf.cast(shape[1], tf.float32)
factor_y = tf.cast(shape[0], tf.float32)
image = tf.image.resize(image, (224, 224,))
image = image/127.5
image -= 1
bbox_list = [bbox[0] / factor_x ,
bbox[1] / factor_y,
bbox[2] / factor_x ,
bbox[3] / factor_y]
return image, bbox_list
###Output
_____no_output_____
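###Markdown
A quick way to convince yourself of what `read_image_tfds` does: pixel value 0 maps to -1.0 and 255 maps to 1.0 (the input range MobileNetV2 expects), and the box coordinates come out as fractions of the original width and height. The check below is a sketch with made-up values, not part of the assignment, and assumes the cell above has been run.
###Code
# Hypothetical check on a tiny synthetic image with a known box.
toy_image = tf.ones((100, 200, 3)) * 255.0 # 100 high, 200 wide, all white
toy_bbox = tf.constant([20.0, 10.0, 180.0, 90.0]) # (xmin, ymin, xmax, ymax) in pixels
toy_resized, toy_norm_bbox = read_image_tfds(toy_image, toy_bbox)
print(toy_resized.shape) # (224, 224, 3)
print(tf.reduce_max(toy_resized).numpy()) # 1.0, because 255 -> 1.0
print([round(float(v), 2) for v in toy_norm_bbox]) # [0.1, 0.1, 0.9, 0.9]
###Output
_____no_output_____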
###Markdown
read_image_with_shapeThis is very similar to `read_image_tfds` except it also keeps the original image (before pre-processing) and returns this as well.- Keeps the original image.- Resizes `image` to (224, 224)- Normalizes `image` pixel values to the [-1, 1] range- Normalizes the bounding box coordinates to the [0, 1] range
###Code
def read_image_with_shape(image, bbox):
original_image = image
image, bbox_list = read_image_tfds(image, bbox)
return original_image, image, bbox_list
###Output
_____no_output_____
###Markdown
read_image_tfds_with_original_bbox- This function reads `image` from `data`- It also denormalizes the bounding boxes: the TFDS dataset stores box coordinates as fractions of the image size, and this function converts them to pixel coordinates (the format expected by the visualization and preprocessing helpers above).
###Code
def read_image_tfds_with_original_bbox(data):
image = data["image"]
bbox = data["bbox"]
shape = tf.shape(image)
factor_x = tf.cast(shape[1], tf.float32)
factor_y = tf.cast(shape[0], tf.float32)
bbox_list = [bbox[1] * factor_x ,
bbox[0] * factor_y,
bbox[3] * factor_x,
bbox[2] * factor_y]
return image, bbox_list
###Output
_____no_output_____
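###Markdown
Note the reordering happening here: the TFDS `bbox` feature stores normalized coordinates as (ymin, xmin, ymax, xmax), while the rest of the notebook works with (xmin, ymin, xmax, ymax) in pixels. The sketch below, with made-up numbers, is not part of the assignment and assumes the cell above has been run.
###Code
# Hypothetical example: a 100 (H) x 200 (W) image with a TFDS-style box.
toy_data = {"image": tf.zeros((100, 200, 3), dtype=tf.uint8),
"bbox": tf.constant([0.1, 0.25, 0.8, 0.75])} # (ymin, xmin, ymax, xmax), normalized
toy_img, toy_pixel_bbox = read_image_tfds_with_original_bbox(toy_data)
print([float(v) for v in toy_pixel_bbox]) # [50.0, 10.0, 150.0, 80.0] = (xmin, ymin, xmax, ymax)
###Output
_____no_output_____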
###Markdown
dataset_to_numpy_utilThis function converts a `dataset` into numpy arrays of images and boxes.- This will be used when visualizing the images and their bounding boxes
###Code
def dataset_to_numpy_util(dataset, batch_size=0, N=0):
# eager execution: loop through datasets normally
take_dataset = dataset.shuffle(1024)
if batch_size > 0:
take_dataset = take_dataset.batch(batch_size)
if N > 0:
take_dataset = take_dataset.take(N)
if tf.executing_eagerly():
ds_images, ds_bboxes = [], []
for images, bboxes in take_dataset:
ds_images.append(images.numpy())
ds_bboxes.append(bboxes.numpy())
return (np.array(ds_images), np.array(ds_bboxes))
###Output
_____no_output_____
###Markdown
dataset_to_numpy_with_original_bboxes_util- This function converts a `dataset` into numpy arrays of - original images - resized and normalized images - bounding boxes- This will be used for plotting the original images with true and predicted bounding boxes.
###Code
def dataset_to_numpy_with_original_bboxes_util(dataset, batch_size=0, N=0):
normalized_dataset = dataset.map(read_image_with_shape)
if batch_size > 0:
normalized_dataset = normalized_dataset.batch(batch_size)
if N > 0:
normalized_dataset = normalized_dataset.take(N)
if tf.executing_eagerly():
ds_original_images, ds_images, ds_bboxes = [], [], []
for original_images, images, bboxes in normalized_dataset:
ds_images.append(images.numpy())
ds_bboxes.append(bboxes.numpy())
ds_original_images.append(original_images.numpy())
return np.array(ds_original_images), np.array(ds_images), np.array(ds_bboxes)
###Output
_____no_output_____
###Markdown
2.2 Visualize the images and their bounding box labelsNow you'll take a random sample of images from the training and validation sets and visualize them by plotting the corresponding bounding boxes. Visualize the **training** images and their bounding box labels
###Code
def get_visualization_training_dataset():
dataset, info = tfds.load("caltech_birds2010", split="train", with_info=True, data_dir=data_dir, download=False)
print(info)
visualization_training_dataset = dataset.map(read_image_tfds_with_original_bbox,
num_parallel_calls=16)
return visualization_training_dataset
visualization_training_dataset = get_visualization_training_dataset()
(visualization_training_images, visualization_training_bboxes) = dataset_to_numpy_util(visualization_training_dataset, N=10)
display_digits_with_boxes(np.array(visualization_training_images), np.array([]), np.array(visualization_training_bboxes), np.array([]), "training images and their bboxes")
###Output
_____no_output_____
###Markdown
Visualize the **validation** images and their bounding boxes
###Code
def get_visualization_validation_dataset():
dataset = tfds.load("caltech_birds2010", split="test", data_dir=data_dir, download=False)
visualization_validation_dataset = dataset.map(read_image_tfds_with_original_bbox, num_parallel_calls=16)
return visualization_validation_dataset
visualization_validation_dataset = get_visualization_validation_dataset()
(visualization_validation_images, visualization_validation_bboxes) = dataset_to_numpy_util(visualization_validation_dataset, N=10)
display_digits_with_boxes(np.array(visualization_validation_images), np.array([]), np.array(visualization_validation_bboxes), np.array([]), "validation images and their bboxes")
###Output
_____no_output_____
###Markdown
2.3 Load and prepare the datasets for the modelThese next two functions read and prepare the datasets that you'll feed to the model.- They use `read_image_tfds` to resize and normalize each image and its bounding box label.- They perform shuffling (for the training set), batching, and repeating (see the note after the next code cell).- You'll use these functions to create `training_dataset` and `validation_dataset`, which you will give to the model that you're about to build.
###Code
BATCH_SIZE = 64
def get_training_dataset(dataset):
dataset = dataset.map(read_image_tfds, num_parallel_calls=16)
dataset = dataset.shuffle(512, reshuffle_each_iteration=True)
dataset = dataset.repeat()
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(-1)
return dataset
def get_validation_dataset(dataset):
dataset = dataset.map(read_image_tfds, num_parallel_calls=16)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.repeat()
return dataset
training_dataset = get_training_dataset(visualization_training_dataset)
validation_dataset = get_validation_dataset(visualization_validation_dataset)
###Output
_____no_output_____
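###Markdown
Because both datasets call `repeat()`, they are infinite: when you later call `model.fit` you will need to tell Keras how many batches make up one epoch via `steps_per_epoch` and `validation_steps`. A sketch of one way to compute those counts is below; it reloads the dataset `info` (since the function above only prints it) and is an illustration, not required assignment code.
###Code
# Hypothetical epoch-length calculation for the repeated datasets.
_, ds_info = tfds.load("caltech_birds2010", split="train", with_info=True,
data_dir=data_dir, download=False)
steps_per_epoch = ds_info.splits["train"].num_examples // BATCH_SIZE
validation_steps = ds_info.splits["test"].num_examples // BATCH_SIZE
print(steps_per_epoch, validation_steps)
###Output
_____no_output_____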
###Markdown
3. Define the NetworkBounding box prediction is treated as a "regression" task, in that you want the model to output numerical values.- You will be performing transfer learning with **MobileNet V2**. The model architecture is available in TensorFlow Keras.- You'll also use pretrained `'imagenet'` weights as a starting point for further training. These weights are also readily available - You will choose to retrain all layers of **MobileNet V2** along with the new layers that you will add on top for this task.**Note:** For the following exercises, please use the TensorFlow Keras Functional API (as opposed to the Sequential API). Exercise 1Please build a feature extractor using MobileNetV2.- First, create an instance of the mobilenet version 2 model - Please check out the documentation for [MobileNetV2](https://www.tensorflow.org/api_docs/python/tf/keras/applications/MobileNetV2) - Set the following parameters: - input_shape: (height, width, channel): input images have height and width of 224 by 224, and have red, green and blue channels. - include_top: you do not want to keep the "top" fully connected layer, since you will customize your model for the current task. - weights: Use the pre-trained 'imagenet' weights. - Next, make the feature extractor for your specific inputs by passing the `inputs` into your mobilenet model. - For example, if you created a model object called `some_model` and have inputs stored in `x`, you'd invoke the model and pass in your inputs like this: `some_model(x)` to get the feature extractor for your given inputs `x`.**Note**: please use mobilenet_v2 and not mobile_net or mobile_net_v3
###Code
def feature_extractor(inputs):
### YOUR CODE HERE ###
# Create a mobilenet version 2 model object
mobilenet_model = tf.keras.applications.MobileNetV2(input_shape=(224,224,3),include_top=False,weights='imagenet')
# pass the inputs into this modle object to get a feature extractor for these inputs
feature_extractor = mobilenet_model(inputs)
### END CODE HERE ###
# return the feature_extractor
return feature_extractor
###Output
_____no_output_____
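###Markdown
If you want to see what the feature extractor produces, you can probe it with a symbolic input: for a 224x224x3 input, MobileNetV2 without its top outputs a 7x7x1280 feature map, which is what the pooling layer in the next exercise collapses. This check is a sketch, not required by the assignment; it assumes the cell above has been run and will download the 'imagenet' weights the first time it executes.
###Code
# Hypothetical shape check of the MobileNetV2 feature extractor.
probe_inputs = tf.keras.Input(shape=(224, 224, 3))
probe_features = feature_extractor(probe_inputs)
print(probe_features.shape) # expected: (None, 7, 7, 1280)
###Output
_____no_output_____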
###Markdown
Exercise 2Next, you'll define the dense layers to be used by your model.You'll be using the following layers- [GlobalAveragePooling2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling2D): pools the `features`.- [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten): flattens the pooled layer.- [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense): Add two dense layers: - A dense layer with 1024 neurons and a relu activation. - A dense layer following that with 512 neurons and a relu activation. **Note**: Remember, please build the model using the Functional API syntax (as opposed to the Sequential API).
###Code
def dense_layers(features):
### YOUR CODE HERE ###
# global average pooling 2D layer.
x = tf.keras.layers.GlobalAveragePooling2D()(features)
# flatten layer
x = tf.keras.layers.Flatten()(x)
# 1024 Dense layer, with relu
x = tf.keras.layers.Dense(1024,activation='relu')(x)
# 512 Dense layer, with relu
x = tf.keras.layers.Dense(512,activation='relu')(x)
### END CODE HERE ###
return x
###Output
_____no_output_____
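###Markdown
To see how the shapes evolve through this head: global average pooling collapses the 7x7 spatial grid to a 1280-dimensional vector, the flatten is then effectively a no-op, and the two dense layers map it to 1024 and then 512 features. The sketch below (not part of the assignment) checks this on a feature-map-shaped symbolic input.
###Code
# Hypothetical shape check of the dense head.
probe_feature_map = tf.keras.Input(shape=(7, 7, 1280)) # shape the extractor produces
print(dense_layers(probe_feature_map).shape) # expected: (None, 512)
###Output
_____no_output_____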
###Markdown
Exercise 3Now you'll define a layer that outputs the bounding box predictions. - You'll use a [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layer.- Remember that you have _4 units_ in the output layer, corresponding to (xmin, ymin, xmax, ymax).- The prediction layer follows the previous dense layer, which is passed into this function as the variable `x`.- For grading purposes, please set the `name` parameter of this Dense layer to be `bounding_box`.
###Code
def bounding_box_regression(x):
### YOUR CODE HERE ###
# Dense layer named `bounding_box`
bounding_box_regression_output = tf.keras.layers.Dense(4, activation='relu', name='bounding_box')(x)
### END CODE HERE ###
return bounding_box_regression_output
###Output
_____no_output_____
###Markdown
Exercise 4Now, you'll use those functions that you have just defined above to construct the model.- feature_extractor(inputs)- dense_layers(features)- bounding_box_regression(x)Then you'll define the model object using [Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model). Set the two parameters:- inputs- outputs
###Code
def final_model(inputs):
### YOUR CODE HERE ###
# features
feature_cnn = feature_extractor(inputs)
# dense layers
last_dense_layer = dense_layers(feature_cnn)
# bounding box
bounding_box_output = bounding_box_regression(last_dense_layer)
# define the TensorFlow Keras model using the inputs and outputs to your model
model = tf.keras.Model(inputs=inputs, outputs=bounding_box_output)
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
Exercise 5Define the input layer, define the model, and then compile the model. - inputs: define an [Input](https://www.tensorflow.org/api_docs/python/tf/keras/Input) layer - Set the `shape` parameter. Check your definition of `feature_extractor` to see the expected dimensions of the input image.- model: use the `final_model` function that you just defined to create the model.- compile the model: Check the [Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) documentation for how to compile the model. - Set the `optimizer` parameter to Stochastic Gradient Descent using [SGD](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/SGD) - When using SGD, set the `momentum` to 0.9 and keep the default learning rate. - Set the loss to mean squared error, `'mse'` (the loss is a parameter of `model.compile`, not of SGD itself; the SGD documentation shows an example of compiling a model with mean squared error loss).
###Code
def define_and_compile_model():
### YOUR CODE HERE ###
# define the input layer
inputs = tf.keras.Input(shape=(224,224,3))
# create the model
model = final_model(inputs)
# compile your model
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01,momentum=0.9), loss="mse", metrics=["mae"])
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
Run the cell below to define your model and print the model summary.
###Code
# define your model
model = define_and_compile_model()
# print model layers
model.summary()
###Output
_____no_output_____
###Markdown
Your expected model summary: *(an embedded screenshot of the expected `model.summary()` output is omitted here; it shows the 224x224x3 input layer, the MobileNetV2 feature extractor, the global average pooling and flatten layers, the 1024- and 512-unit dense layers, and the 4-unit `bounding_box` output layer.)*
RYefWQ6se3BT8+KXHcfVgOCXsfhAnwy6YZYnVfIRp8d909lihpoOqI8/wsCC8tRKMNv3wY8pF3SA8honS+eggw5qLNNn+9ptKUv33ntvu8M6/t2775U5GW1+kaUQriw2k2lplRm027FYMGAuyqo0FsiU32igafRx9cPyppUQ+4qAoLZ0tx3Ls8TKoKiTaMdV2VZ9rtRDZiU7ESzlmEEuKyvO9VcmRhHC7AwKyyIFWdk0Ozlugw02aMTwQZFCXvLaT0vTz6BWCf6L5QsKA9/+cl3KMp0JlDj8xr7Y7J7OBIpM2nLK35QpUyx7TVt7x+Sl4Q+kbO6xxx6Z5aWfpbJjeBexdOddd91luwq3WEZS3rHajK3r8uKj0W4zcMlrv4suQpvGTFknHaWitIr2U99pWyx+hg9s6M/xg0LeNyia8oRnhRLPK+qKZuCrvOf78b3mlaCwYdaTd1SrWdg8ht3uQymPuxV9DKTofd8u/XhpXY6nTvgA3u3SqPo7ytc3v/nNWTIokXCVNgu4TtLu9TOp8p6v+l7znGiHd91118YS5v43PvOeZzIydhWJj0vxnvf9TSw2GTi2ehfFedB3EfAEUvTfUqRBbDjGrEWTo7wP6Deef/75bVfhrdqWM76kDxf3x+HGmBdXrIsvvrhlvWOcvvXWW2fj1bz+En0yxrzXXnttk1uXfzZFn3vdHy7K1yDtr1WhUwcIChEWN1Q2XuZY0mDG1k3jT7wAChGmZwwmxmqmKSUXOst0vLkXKhsKKyonTOj8j5X0Sz7G6n6H5TqYUGIaioKNODj4+HbaieZc0uA8BshF8YyGhWm/3Cd1FuUJJrdIntKhrryixEABS7tF+4uCtJXita58WLpx+2VtKG5TwyhYLzF4RHiXsJJQN+/YbtmlfM93mwfOi8tFp+9XXEotOD/ptVqpk9/7VWKrFgYiDLrbDfb78X7GyzNJxRYlOX1IyinvaCZcZs+e3fEKn92+53kXYP1lM/91WUCn4qV0RKBTAtQvJklxtWLSAuMGVujj3Vrn5EycTxT79LssL7zPGPuy7UQYF1DfuS8WSqFPT5tB7NFOxwedXFfHtiYwcAqd1rejX0VABERABMoSYJbWrCS7WQK47HV03OAR8DFTulnmevDuOG2OGSCjPDBBIYYSBAuEQRPvzkbemc1lxbpBk/H0TAaNfVF+N9xww4AbGtJtXKSitLVfBERABIaFgBQ6w/KkdZ9JCeQFVuz0AlhQ4TLiI853msZ4Ot4PILu9ryIXxW7TG4bzcBFilhR3r7ylKIeBge5xNAEGv7iRIQRMHUQL1tF3NXZ7vEsjVy27mtzY5bD8lXBzNLcBFFO4Z+KmOWgynp7JoLEvyi+WObh2I1gLEIpBIgIiIAIi0BkBKXQ646WjRSAjQOBM4kJVlf33339MXeOq5rfO8xkk4AJZRYh3Mn369CpJ6FwREAERqEzAW7WgBMFtDRP7QRNM63GJMWFlE9wzB1HGyzMZRPbKswiIgAiIQH0EpNCpj61SHscEJk6cWBiQtextY6FDoGf5nP4/MVZ7Y2WBKsKqcLNmzaqShM4VAREQgcoEVl999SzgPsoc3BlbrWBS+WI1JoD1HgtCsEgBsXNoX4mzMogyXp7JILJXnkVABERABOojIIVOfWyVsgiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUQkAKnVqwKlEREAEREAEREAEREAEREAEREAEREAERqI+AFDr1sVXKIiACIiACIiACIiACIiACIiACIiACIlALASl0asGqREVABERABERABERABERABERABERABESgPgJS6NTHVimLgAiIgAiIgAiIgAiIgAiIgAiIgAiIQC0EpNCpBasSFQEREAEREAEREAEREAEREAEREAEREIH6CEihUx9bpSwCIiACIiACIiACIiACIiACIiACIiACtRCQQqcWrEpUBERABERABERABERABERABERABERABOojIIVOfWyVsgiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUQkAKnVqwKlEREAEREAEREAEREAEREAEREAEREAERqI+AFDr1sVXKIiACIiACIiACIiACIiACIiACIiACIlALASl0asGqREVABERABERABERABERABERABERABESgPgJS6NTHVimLgAiIgAiIgAiIgAiIgAiIgAiIgAiIQC0EpNCpBasSFQEREAEREAEREAEREAEREAEREAEREIH6CEihUx9bpSwCIiACIiACIiACIiACIiACIiACIiACtRCQQqcWrEpUBERABERABERABERABERABERABERABOojIIVOfWyVsgiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUQkAKnVqwKlEREAEREAEREAEREAEREAEREAEREAERqI+AFDr1sVXKIiACIiACIiACIiACIiACIiACIiACIlALASl0asGqREVABERABERABERABERABERABERABESgPgJS6NTHVimLgAiIgAiIgAiIgAiIgAiIgAiIgAiIQC0EpNCpBasSFQEREAEREAEREAEREAEREAEREAEREIH6CEihUx9bpSwCIiACIiACIiACIiACIiACIiACIiACtRCQQqcWrEpUBERABERABERABERABERABERABERABOojIIVOfWyVsgiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUQkAKnVqwKlEREAEREAEREAEREAEREAEREAEREAERqI+AFDr1sVXKIiACIiACIiACIiACIiACIiACIiACIlALASl0asGqREVABERABERABERABERABERABERABESgPgJS6NTHVimLgAiIgAiIgAiIgAiIgAiIgAiIgAiIQC0EpNCpBasSFQEREAEREAEREAEREAEREAEREAEREIH6CEihUx9bpSwCIiACIiACIiACIiACIiACIiACIiACtRCoRaEzadKkMG3atEoZfvbZZ8M888xTKY05c+aEJZZYoudpVMrA3JOfeeaZMN9881VKpl/S0HMdeYz98kxS5EPPVc91hEDzJ7XDIzxS1LUUaai+9t8z0XMdeSYpPqXg2S9pqL6OlIh+eSYp8qHnquc6QqD5k/pNIzxS1LUUaaSorzNmzAgzZ84cubmEn2pR6EyePDlMnTo1YTa7S+rRRx8Niy66aHcnP39WijQqZUAnjyKQ4pmkSGNUxrSjEoEUzyRFGpVuQiePIpDimaRIY1TGtKMSgRTPJEUalW5CJ48ikOKZpEhjVMa0oxKBFM8kRRqVbkInjyKQ4pmkSGNUxrSjEoEUzyRFGpVuQiePInDllVeG6dOnj9qfYkctCp2JEyeGKVOmVMrfAgssEJ588snw3HPPdZ3OY489FhZZZJGuz+fEqmnMO++8gb+nnnqqUj5e8IIXVGLBxfshDT3X5mLQD88kRdnQc9VzbSYw8q1qG0pKVdNQOzzyPPik+trMQ+3wCI+qdY2Uqqah+jryPPik+trMQ/V1hEfVukZKVdNQfR15HnxSfW3mofo6wuOKK64Is2bNGtmR8FMtCp2E+VNSIiACIiACIiACIiACIiACIiACIiA
CIiACEQEpdCIg+ioCIiACIiACIiACIiACIiACIiACIiAC/U5ACp1+f0LKnwiIgAiIgAiIgAiIgAiIgAiIgAiIgAhEBKTQiYDoqwiIgAiIgAiIgAiIgAiIgAiIgAiIgAj0OwEpdPr9CSl/IiACIiACIiACIiACIiACIiACIiACIhARkEInAqKvIiACIiACw0lgueWWC/PPP3944oknwt/+9rfhhPD8XS+00ELhpS99afbtr3/9a/jvf/871Dx084NPgNVnll566exGqN/Uc4kIiIAIiIAIDDoBKXQG/Qkq/yIgAiIgApUJrLjiiuHjH/94YInN2bNnhyOOOKJymoOcwNSpU8PkyZOzWzjzzDPDbbfdNsi3o7yLQHjTm94U3vOe92Qkbr/99nDGGWeIigiIgAiIgAgMPIGBUOhsscUWYbvttsuFfdNNN4ULLrgg97eUOxdffPFw2GGHZZ39f/3rX+GYY45JmXzptOaZZ55w4oknhhe+8IWjzvnf//6XDUgeffTRUb9phwiIQGcE6qpr8803Xzj00EMzS5Cnn346fP7znw9sB0UmTJiQWW4sueSS2T38/e9/Dw899FCgXawqtGuf+MQnAoyeeeaZ7DPbsZDDDz88rLLKKtmlvvWtb4Xrrruuo8uSZyx8lllmmfD4449nTGBDu5wndZWvvGt1um/VVVcNhxxySHbaI488EmAzVs+h07zq+PoJpKrzCy+8cHjZy14WaDueffbZQP1Aefqf//yn9E2gcKWO0Vb8+9//ztIoezIWOscff3zA+uy5554Lxx57bLjvvvvKnq7jREAEBpRAlXZjQG9Z2R4yAgOh0Nlhhx3CNttsk/tofvjDH4avfe1rub+l3ImZrilxnnzyyXDggQemTL50WgwCmC0tEjredJIkIiAC1QjUVdde/OIXh6985SuNzB188MEdDWgaJ47hBwZ06623Xlh//fXDK1/5ytwr33///Zly/Te/+U3u72V2LrjgguHLX/5y41DaWdrbumXttdcOBxxwQHaZf/zjH5l1TlkFxhve8Iaw2WabZcqceeedtymrDFSvvvrqcM0114x6xnWVr6YMdPGFfKFUQzmFfPOb3wzXX3/9qJS22mqrsPXWWzf2w+vII48sVOxRzl/96ldnx99zzz3htNNOa5yrD/USoP/ytre9LayxxhrZpBRXu/TSS7OyWXTllHV+2WWXDTvvvHPD4stfE8UO1l/f/e53A3WvSFAG7bXXXmHSpEmBdsLkn//8Z/j5z38ezj///FKKccoteUF+//vfZ0odS0tbERCB8UUgVbsxvqjobsYjgYFQ6Ky++uphnXXWafBnFvXlL3959n3YFDpomd/5znc2Wei8+c1vbrCRQqeBQh9EoBKBuuraICp0dtttt0xpUQbozJkzw4wZM8ocOuqYXil0jj766LD88stn+SlrnYPiY5dddinF5a677mpS4nGhusrXKKgd7lhzzTXDlClTsrMYLGOZmqfcesc73hG23HLLptQvv/zycMkllzTtsy8f+9jHwsorr5x9/e1vf5tZSthv2tZDADdClG48U8qbl+9973uZEsXv859T1XmUeFh7UV9aCcrPT33qU7kTUosuumiWBpY5RfKrX/0qnHzyyeGpp54qOiTb/6IXvSiccMIJDaUQnzlXIgIiML4IpGw3xhcZ3c14JDAQCp0Y/Nvf/vaw/fbbZ7uHTaETs+A7FjvWWZJCJ4+Q9olAGgIp6hquArhN4qKDq9VHPvKRtoOQNLnvPpV4cMdA/4EHHshcHrDkYAbeBFeGk046KWCF0an0QqGDxdFHP/rRLKsoLj70oQ+NsqbJuw8sTrB4MOHcP//5z9kfs4IrrbRSWGyxxbKff/azn2WDTTu2aJuifBWlXXb/hz/84cAkCnLllVeG6dOn556ap9BhUI4CKM+FRgqdXIy17fzABz4QXve61xWm36lCp5s6T1v3uc99LlAfTLDg+8UvfhGwZnv9618fVlhhBfspc3/6zGc+0/jOBxRRKFzNYox9BDTGVYt2Z4kllmBXJnfeeWc45ZRT7GvhFkufjTbaKPv9xz/+sazFCknpBxEYTAJ1tBuDSUK5HhYCUuiUfNL94nKVl91+GATk5Uv7RGC8ERjWuoZC561vfWvA0uSKK64If/zjHxuPlo7TW97ylrD77rs39nXrytALhc573/ve8MY3vjHL+09+8pNw6qmnNu6j6ANWB8RBMiHOzBe+8IXAalAmcNlkk00Cig+zHrDfira9Ll+sanXcccc1rDmOOuqoTHGXl988hQ7HXXzxxVkZic+RQicmUu93yjHWKCa4YtuqZewro9CpWudf+9rXhg9+8IOWhczFy8c8ZCIKV0cUOybTpk1rcr2K65oPZoxSHAWiWX7hvkW9pD62Eqy8mfxCiHHFNRV7sBUx/SYCg0WgjnZjsAgot8NGYMwVOgQXxmyW2ZVuY72ksNAhKB6zp3QImO1ptyRrkUIH94lXvOIVWRBMZqzpUJQRZq64PjwQBgKPPfZYmVNHHdPrQUDKexl1cx3uoIPIs4JtlTLmL8sMIEEckYcffjjrbGKFkCepWKSoJ+SPmVHqG9YDf/nLX8YkHkkeF+rJinPN/wmc++CDD5auJ5ZWp/XVzvPbFGn0uq75+zGmBOCFaZ5bjD/ef+60fHE8g8NWS3kfdNBBjYEZbg98b9UeWhBh6syf/vSnrGyOtUKH9oJ4RjbwZXafWf52ggsJgYMRArPiKjJnzpzc05Zaaqnwkpe8pJRbR6/LF3FWdtppp+w+aC+IpVMkRQod6jiD6tj1pVOFTor6St6r1BPO75c2lLx0IqbQoc7iCkecGhR05lrYTqGTos6zmMW2226bZZu2AFe+OCYWVjbUHxPqI4pjE69wJcYOK9H5NFi1itWrTHD5435bCcpWgiObdU9ZN8tWaeo3ERCB/iFQR7vRP3ennIjAaAJjptAhaCSrVdFJMMGE94YbbsiUKczwIgSQbLe6SLcKHTrvuGoxG+vzweCcgfqsWbMKrx0rdFiZZs8998zMhUkXoQOLm8G5556bdfKzne4f/px0PDDTxxzfzrND6Aj/4Q9/CHQuyE9Z6cUgoMq9MGtHY4swGCWQZtHAj870Jz/5yQarz372s7mDSlwEdtxxx8wsm4GiCab/dA4JmNhOaYfri3V2v/jFL2YDgV133bWxz9JEcXfhhRc2XEqqsLA0bZuinrCSB7E9WHLYl3MYsxoRM6SYvNchDF6Z+aTDTOebsvyud70rM4238k5nnCCWX//618MTTzxRmI0q9dUSTZGGpcW227pGGUdZEQsuVwRbjwe//jgCeG6wwQbZLgYrnEM5oawaU2aZCbyLe0xRXSKBFOXL581/pg3fY489GruYOc9rx1BuMAhjltzqKvnH6oeyycDfpO6gyN7dimtiTdButa74HN5Z3urA8t7Nttvy1c218s7x7la8h6m/RRIrdKjLKE8Q2ttrr7226dQyCp0q9TVlPellG9oErcIX+ie//vWvMwWltQm4Ltk7rp1Cp8yl29V5+lv010zsvW/f2aJM9VZxLH
KBKz0y//zzhy996UsNhWtcJqmLvG+sHeQcFFi+DWFfnuy7776BgOaI3K7yCGmfCAwmgTrbjcEkolwPA4HaFToM7DDFN4VNO6jf//73w3e+852Wh3Wj0EExwIyxLUtbdIE77rgjnH322aMGWF6hw7koCvIGaPyGZQgKAQbPXug80IloJ6RNx55BbxnpxSCgyr0w6Ge5UBNcFX75y1/a16YtLgt0TBEUAcS38LNzDAjpyG+66aYNN4GmBJ7/wrMgYGIrCwN89y3YNgMSVlezAUqcJorI8847L9tdhYWlm6qeYHr+vve9r8m03q5hWxSYzGCyqkhqwQoH5YWJH+TZPtuiGCPWSp55fNX6yjVSpGF5tW23de2MM87IYkZYOn7bbpUrP9OEJYtX5Ph0+HzjjTeGb3zjG/HurG6kbofjizATz4w8wgCS9jZWVFE+uF+LLROnQd02axl+q1uh41e8wWLU3DDifPnvTEygMDVp5ZZkx5Tddlu+yqbf6jgGxawwhjID8QPrvPNihQ6ueFj4IChzYemXbG+n0KlaX1PUE/Le6zaUPNQlqRU67eo8E1gobxHaBOL6xEp8LJypQyZM2vzud7/LvrLEOS6AJpTPn/70p9lXyqtfjc2O4Trvf//7Wyq2OdYro7COpm8hEQERGHwCdbYbg09HdzBeCdSu0ME3mo69CcoKrFiYZV5ttdVGdezrUujsvffeYcMNN7RsZB1Oltmlw/mqV72qKWgfCiXy4SVW6PAbA2MUEZjaY3FjM1/8xj0S+NSLH/hjps/gDEUDgx7Sx3LFZppwn8Csv91sMen3YhBQ9V4wm4YZ8oMf/CCzasq+RP+Y5ef5IN533g7zyj32Ub6wPoEbQRTxozW57777MkVSkbuUV+jYOWxJE5c4gjjivoSLSJFCp9vnmqKeoGCk84urggn3jOUDgzTKF4MmE5SOd999t31Nso0VOiRKB5t6wiCPwZIPbokC9fTTTx917ar1lQRTpBFnrNu6hiLXFMAoIS3oLOl3otCx/KAEY6UgyiKBT327gbtLHA8iRfmyaxdtvRsSZe7Tn/5006EoLbG28wGUibWDaw/1Kk/ZXrdCh4Hfuuuum+Uzr31puoHnv6AYI7YIwnPgvlNJt+UrxfVRZvuAtChgWinAY4UO1k0EwKVMIuecc064+eabG1lrp9CpWl+9Qscu2mk96Yc21PJexza1QqddncdClMkbKxNY3lAueCcgKG9p/8x9kedFgHJTBE+cODEcccQRDRRcj2MQr4xtHPD8hzKWdvQ/6IeYYKGL1bhEBERgsAnU2W4MNhnlfjwTqF2hw2w9gzyElyXWGMR7QPKWlKtDoYOfNANdBuQIqyww02MzRXQ6eJmjVEFQBsQxAGKFDh0SOiZmGsyA6t3vfneTLzd+4ShtTFAuYPVx0003hR/96EejYl7QwWBW09wQioJLWnq27cUgoOq9YFHD8usIz4EOWBwDhOeCaxsDQQTTa2+1FHcWmdX76le/2ujwcQ7udcwQWhpYX91yyy38NEpihQ6dSlyrUN6YoCxhVpIybPursiDtFPXEDzQpn7g0+QHVIossksUwwEwdQZnIdf0sevZDhX/UdW+hQz7OOuusTBlHstST/fbbrzGI5nc68D6eVor6miKNPAwp6hpWX8SJMOlUoYPC8rTTTmu4EKIcwlXGBMsxK5u2L0X5srTytihdUb6aYK1B++UFV1Pu1SRe4jpWznJc3QodgqFOmjQpy9JVV10VLrroIste4ZZ7sNWt8hRXhSeW+CFF+SpxmdxD4mfYjn2s0EGhgsudWeOiDKLcUceRVgqdFPU1Vuh0U0/6oQ3NfTiJdqZU6MTlJa/Ok+2NN944e9dbv4bJEYKE0x9jkgEXTIRygnKfwOQmsSLayiTBnelfoSjiPNo8FIImWO6gKG4lWAnSHzWBDavUSURABAabQJ3txmCTUe7HM4FaFTrx4C7PhDsejNSh0PF+3Lz881ZBiDsncV5jhQ5xWfygjEKC3yYuJOamU+T+0KpAdbOcZi8HAd3eC8oFOlNmWYA7FEv7evGuDSjZGLTa4IDjcMXCJQvhdwYPbGPxM795llN2fKzQYdCMb30KafVcU9QTOqcov4znpZdeGi677LJRWacjjOUEZRVh9hQriVQS3wuKS5RsXpiVpZ6Ya028NHKK+poiDZ9n+5yirlVR6KD0ROlrs9SWL9o0s0aLY2PEzyRu20ijm3bYro1VAy4TtoIOFot8j2NWTZ06NYvrxHlFcS4YiPlljG0AZ9dKvWVQaBZDZRXoWBmZlRkKZhTNXhhkWv3y+1Fc+/bL/2afU5QvS6vT7VprrZUp0DiPcob1UivJU+gwOMdlxiZPfBvaSqGTor56hU439aRf2tBWzKv+lkqhU7bOW35xgaCNYhImT7AopF3i/ezFu0Ux8YA7MYJ71Gte85rsM33G66+/PgtynO2Y+w/r6Dgt+822KJj8u6nMOXautiIgAv1LoM52o3/vWjkbdgK1KnTWX3/9bDYeyHRmGZCbKa2BZwCKmba96OtQ6Hiz+rwOuOXFD+hnzpwZZsyYYT9l1jsELzWJV2Kw/X7gzizUCSecYD+N2jKwY2YSNxizIMGVyZbQJaAhg/R20stBgOWtm3vxATjzBv7eqiAOhsh1/QABpQQBYb0YU9x8iLODoPDBGihP/PPHHY/BXpF7Vt75tq9TFinqCZ1bHwMABZlZoJEvY8FnghSbNRpBT9sFIeecshIrD1Dc5AVg9oMvVhVidSGTFPU1RRqWH79NUdeqKHSK2i+WFSfgMYIFGpZoJinKl6UVb2m/WbkGty+E+kKbhRVkLCgPiZ+FEL8JxVMsuDJhJWFSt0KHvJqFQF4gX8uH36KIthhAt956a2aB5n/37qR+f54izf/O5xTlK06z7HcCb++zzz7Z4QSrJ95JK8lT6HC8X3WIdhQFGOLba9wFWWXIJEV99W1KN/WkX9pQY1LHNoVCp5M6zz1QV3i+TJoVCe0Gljm0Wz5G3tZbb50tdsB5uD5jHbfeeus1FDtYdqI8ZnIAF2ITr0i0fXlbH9us7Dl56WifCIhA/xCou93onztVTkRghECtCp1tttkmczHicrgeMUDOEz+wr0Ohg0uHuZnEs9c+P75TiSsVHXCT2EKnaAUXrxnOC7KJmxkDL+L5TJgwwZLP3bayJvEn9GoQUPVeUFzRCUfoxKFoMYVfHDg5z5IEtzmLS+J5tPrMDDmWAnQOY/EKnVblJD6P71VYpKgn3oUtL39F+8rUt6Jz8/bHCp2iuARbbrllYECIxG4rKeprijTy7i9FXaui0MlTbJJPv9x03HalKF95LNjnXWz4XlSeUCjiTmFuF3kWeZwfW0rWrdCpaqFDgFbaIS9+0Oz34wKJu20rSVG+WqXf6re11147HHDAAdkh3VrocDKxeLACNGtBi9XVSqGTor56hU439aRf2tBWz6jqb75sdvqOs2uXrfMcT1tHHTOlKfuIm4ZrE2UMlyuLn8NvrPLJu94s2Xx/yt7dvKex8EVQrhKfjfT95FcZaxtZ6GQI9U8Exh2BO
tuNcQdLNzRuCNSq0PHWKgRf9TMonqA3xS8aEPjjfayFePDij7PPWMmYBVAriwRiutCpQ5hhxnLIJFbo7L///llgZ/vdtj5YcNwpZrCL2bG5mtg5RVusTujctJNeDAJS3AuBerHgsICJmD9jqYP4Z5ynGCOWTezq0I6T/e4DK9o+tl6hc+6552bBmv3vRZ+rskhRT+JOdlFe4/2tAlLHx5b5DgssqxBmXTGRt865P3+jjTYK3DcSrzCSor6mSMPn1z6nqGtVFDpYDWI9GItXkMVL8KYoX/H1+M6KVsSTMqHuMuOdZ9WGFaJv//1KNnY+2zgwb90KHe+qVubdQx79+yqvjcbF09zGqA8m/a7QwWUPHiYod0zBbvv8tshCh2M4FwURQmwznncrhU6K+uoVOt3Uk35pQzNoNf2rqtDppM5zC34VLL7nWcH4dz3HEHMNyzfEKxn5TuByLHQQ3/fDBdIswfiN+2wXD4dJGL9wBYonH/OQdCQiIAKDR6DOdmPwaCjHw0KgVoWO7yC1sjZhFSyCWCFlOtW+A+Bf6kUPzZvIt1LoeLeF2CQ8VuhgzYPCJhZvtu4VOsxWYmLuZ6q4BqbhBIs2ixEaInO5yhssxNfje4pBZl66RftS3ou3ivJxiXBvM7eg2P2NfMUDxGuuuSbgotZO8MPHBShPyeAVOjar3C69FCxS1JM4nhCDxzJCgMpWK9mUScMf4xU6MOb55rH2Myixe0eK+poiDX9f9jlFXaui0ClyVWql0ElRvuz+beufH/twL6XO5LWJ/M6MOspbE5TleW5ZsWVe3Qodr3i47bbbsrbU8li09TzzlM3+PN+m9LtCh5XG/MpkrC40e/ZsfztNn1spdOKlqHn3cTzur0j8fk1RX71Cp5t60i9taBPkxF+qKHQ6rfNk3V+vyA2OdyjlDmUu4hXSscVedsDcf7hO49rIqpJIrIzE6jte6S870P2LyygB0lmJUSICIjDYBOpsNwabjHI/ngnUqtDxgwxWHCDgZZ74lUbqUOj4mAZ5ygHLk1cuxEvYxgqdopf/xnNXdCBGCeI7+yzRjvuJSbykq+1nFSxcJJB+VeikvBdib1isBgaDxIEhrhAdQRNmdvOUDsRdwcoHKROfwtIr2vrBVxmTbdJJwSJFPdl8883Drrvumt2aL3dF91rXfq/Q4RpFy8f6/MZtQ4r6miKNPEaDqNBJUb48i3jlOFzmcHeIgyD7c2KXq3jFOjs2Xm60boWOnxwoCtRsebOt54k1EooP6lye+Dal3xU6uKDgPpZnMZl3b60UOhzvA9eiRMc9tkihk6K+VlXo+Dapl21oHutU+7yCpROXq27qPHn27+giqymO888Oyxp7/8cWexyLeGtevvvYW60mEjjWxPfVmFCjH5JnXWjHaysCIjAYBOpsNwaDgHI5jARqVeiss846Adck5Omnn86CIpslisGmE8ng2eLJlFHobLXVVo0gt6yMRDyGVuJnYQm8d+qpp+Ye7lcviZfgjBU6RQN+b+Xjgxr7zmI8gPWZoVNhAUbLKnSITWGrqoyF2XDKe2E1FFjiQoUQFBHWPGOkVewlHzS5k85plnDOPz/4Knq+8WkpWKSoJ5MnT85cQcgfdQ23EB9cMs53Xd9jhU6RJYZfeSyeuU1RX1OkkccoRV0bawudFOXLWGBJCVtm1ZEHHnggc021mXI7Lm+L242thHXhhReGWbNmjTqM2GJ++eG6FTp+Jo/BHO1v/I6KM+nP4bdWkwS+TSmj0ElRvuL8dvIdd1SLaXL11VeHCy64oPD0dgqdmBPWkbb6VWyhk6K+eqVANxY6dbShFlvNQ6RfkBco3h9T1+duFDrd1nmUuLhgWltRtPIi97rvvvsG3NURr0xDuYiS0WJv8Xv8vmCfd4P05/Nbkfjg3XmxsIrO034REIH+JlBnu9Hfd67cDTOBWhU6sRKEziGdRC+xr2MZhc66667bWFKVlzczpK1mVljhyBQEWIHQaY2Xt44HonHnO76X2IKHe6Ljgum4BewjACbpIF4JVaSoYflcOlzWASo6LkvQ/fODhhSWKi7p3I+p7wWLJmbLEFzzcLtYcskls+9FAz9+9B14BmEEqm43GMsSLfjnOZZV6KRgEZetbuoJA2UGzCatZkPtmDq2cT26+eabA9ZoXuic435jAa1xl/v2t7/dOCRFfU2RRiND7oMvI93WtbFW6KQoXyBgaXMUHja4ou2lzOEyWkZ88Hs/C+/PpS3HSsekboUO98KKhaYQLwrWbPmxrQ/ii5sGivT4ncKxvrzE7xRLy2/98d2WL59ep5+9xVIrZTrptlPocMzhhx8eVlllFT42SazQSVFf/fugG4VOHW3opEmTAta8Xny/wO8fi8/0L5ZffvnsUmUmQarWee86HccltPulv0O5Z2lzJD7OW07zOxM5Dz74IB8zYTKIiQOLTdhKwWrnsC2jYPbH67MIiMDgEKir3RgcAsrpsBGoVaEDTG92jcUAnWdWJUAYaBCE0RQg7Cuj0IkHja3i4pAmihKsb2zpZny0cZ2weA8MsMinrYSF6wD5IraHSTwo4lxmjvxM20477ZStNmPn+JWZ1lxzzWx5X37DJJgODK4KJgRtZvlf7s2krELHz07R0aFzkze4sHSrblPfCx1+Ov6xwInOcNGAkZgPPFdTgBGDh0FTfO/MCjPLSCBeypd/Zv6afjBVVqGTikWKeuJfYJRhBoQsCR4LAxcUaJTpdtZt8bntvsd1E2sh3GuIsWLirdjYF1uVpaivKdKw/Pptiro21god8l+1fNE24jJqg6ZHHnkkGxA9/PDDHk/Lz95SiAPPO++8cMMNNzTO8fHHbGfdCh2u461D8hT1lhe/ZXUe3AlNcNcisC9cTHAHZUBrAfnLKHRSlC+7fjfbOI4OiquHHnooN6kyCp2YkyUUK3RS1NeqCh3ylroNHWSFToo673nCN67z9MlwM2eZYZPLLrssYM1jglIJZbCJj5vIu586w5LzSDsXSEvDv6foZ1DOUVBLREAExgeBOtqN8UFGdzFeCdSu0InNrnl53n///ZlbCEHpzF/fAJdR6NAJOOqooxoriXAuwV0toB2KI1ZT4FomvtPOPpQprLzFDC2DfaxCTC6//PJwySWX2NdsGyt02IlSh9UY5syZk8UGwGTbJO6w0qlnRshmt8kbpte4LCDrr79+FujXzmdbVqHjlQqc98QTT2QrPDCYRljJiKVCU0nqe+F5EjST2DleUAIwSGolsXIAZQ5KDAYhlC18aelUL7bYYlkyrZR/3Sh0UrFIUU/IC/dgg25uGOUp5QjlJHxZfYdrwRxWflDainPZ33xH2c6hHFJPqJ8o73jRmhQFS69aX0k/RRqWT9t2WtcYcGy//fZN7RzlEuWiyY033ti0mhC8sDCw9ivFQLVq+fLWNZbvdm5WBElmGWITWKDkNrcr9rMyFkpoFAkofCiXXsZCoeMH3azqRJ0o467oY7+RZ87hvcL7jTpIWSF4u0kZhU6n5cvSTrnF0pHygrSydiij0CENYufZql98R+L3I/uq1tcU9SR1G+rLFveI
jJWFDm7sTHhYn4Nr+/LIdz/5wbsCFymTFHUepRCTNTbpQtq0+ax8xkQLMehsIo3fqEPk2SsR89oN6hnKU9oSVrgyKXqf2O+29SuatlqB1Y7XVgREYLAI1NFuDBYB5XbYCNSu0AHoxnOtAVgZxL/UDTR+9cxuMkOHlFHocBwdTjrefvDKfhNmhswCh328+DneVlKw4+Itg18GIrHrTqzQYdBlZvpxGqyugEtJvGwms1A77rhjfHjTdxQ8xqKsQocEDj744LDGGms0pWVf4nhAtr/KNvW9xNZN5K0ocLTPN7Pg++yzT2OJXP9b3ufUCh2ukYpFinqCUpHYAHHHPY/FWCh0fNyMOA9YXmEJZUpN/3vV+kpaKdLwebLPndQ12qeimF2WXt7WKzJSDFS5RpXylTe4y8u335dn5UZ8FlY1xEqpjHgOZY7v9hivzC3T7nAdBsq4i2JZ1E54x1EO8sp6fG4n5Ss+N8X32KUZ6wVTLvr0yyp0Yrdq0shT6FStr6nqSco2lGDC5MvLddddF3gP1S0op9pNiPg8xM8kVZ33bnz+evFnyhhLlmMlFwtlA6tpv0pofAxKHtocb1kdH8N36i1c7B2JpTUxdCQiIALji0DKdmN8kdHdjEcCY6LQARyz8gSQZTaGzjwKk3vvvTcwO73eeusFOpFIke979mP0D6saFCRY+hBzxWZ3mWXFfYnBpBcGV3TAuV6sXEJBQywPLHPyOq+kf9xxx2XnPfbYY1nwXjpq1imw69CpYGUHsxay/Wy5JqsxbLfddqMGNCiBrrrqqmxmig49wowZMXnKCPe+6aabZrPczFh5RVcd8VRS3wtKLFxvTHgedChjxZr9Hm8ZNOyyyy6jrHzsOAZSWO4wO5r3bDjOzyTzrCmfZSQlixT1hDK5++67ZxYCeUpHZkGZlcRqC/fDlOItdOhYY5Ww1157NYJe27VQVmJF511U7DfbVqmvKdOwtGzbSV2DPwOGvOdg6cVbyj6xamjHEL+ccl58JY7xSwrfcsstWfvE/li6LV8oYbBk7ESK6hDtE9YYsXKdeombIO5hDLrgQFvItm7x7l4oX4jTkfceyMsH7DfZZJPsfuL3CtaSBH9Gqe4nGPLSsX2dlC87J+UWywlcxSymCYNsLOxi8QN12hSUb3kCE78sNccUWUVUqfMp60mqNpTJBq/wwyWI9xxWXHUL94CCIy6TRdeNn0nKOo8il1UYLX5PnAcsay666KLAghFFgoXpfvvtl1lDW3BtjqWd5HzaDupbO/FtJf2Co+fGFSpb19ulrd9FQAT6i0CqdqO/7kq5EYHRBMZMoWOXprPKyg8oRewl6gMnYvKbN0Nj56fYMljA4oaBBUofLGkw8bX8dHINXAUYxOJ+gHuBN2EuSgerEs5jQENnhPNw2xpE6bd7QVmIcghlHwMouPLXSnGQintKFinqCR15BmWUNQIQU+cYNFPW6xokxwodFBPci9UT6kfZemLPJUV9TZGG5Wc8bFOUr6occINkGWtc0FDw9TKGBXUFN14bcHYTkBhlBK5FtD3UNQanRcrjquzqPh/lHQN6hPaCZcW7eT92m89+qa9V21BciS3APyxarbLZLatBOo86zzsJJvS9qPOzZ8/O+k9l74P2YqWVVsosMHHXJHh3PHlXlBaKdVzfzQU7z4qw6FztFwERGFwCVdqNwb1r5XyYCIy5QieGi0kccRVsFomZQR93IT5e30VgGAkMSj3JU+gM4/MatHselPJVJ1esCFgBESEmG0qMYRYfS4fl1FPGYRsGrigtUOiYoBDDGqSM252do21aAhtuuGHYe++9s0Tzlj9PezWlJgIiIAIiIAJjQ2BMFDqYybIaCksY+1lYVrci1g2BAxF+oxNd1jR9bBCNj6vkBWfs9M6w6sCVTc+nU3Llju9VPUlZNrB6w10FweUKCx1JfxDoVfnqj7svlwtWZWIWn7LL8snDLCgkLJgxQWzLWJ8OM6/43r0bH7+VXUEtTkff0xEwq0BSZOJwLCx30+VeKYmACIiACIhAPoExUegQIBjlDf7jmG/zh3sKnUUfHJOYGqljeuTf9vDtJdAjy3tWlf333782d52qeRv083tVT1KWDdzdpNDpz5LYq/LVnzSUKxGol4AP0ox1DpNV9H0kIiACIiACIiACIpCSwJgqdIoyTmeHpcIJ3iuph8DEiRMz65oqqWOhQ7yjsv7qVa41jOfagLvo3uuqJynLBjFIpNApeoK93d+r8tXbu9bVRaA3BAhATiwi2m1c+FjdSiICIiACIiACIiACqQmMiUKHFaxWW221LAAmAZEnTJiQBQOmk0NQO1aXIrCdRASGmcB4qCe4adgqbazcRtBJSX8QGA/lqz9IKhciIAIiIAIiIAIiIAIi0B8ExkShE98qAZCZtZKIgAgUE1A9KWajX6oTUPmqzlApiIAIiIAIiIAIiIAIiEAvCfREodPLG9a1RUAEREAEREAEREAEREAEREAEREAERGDQCUihM+hPUPkXAREQAREQAREQAREQAREQAREQAREYOgJS6AzdI9cNi4AIiIAIiIAIiIAIiIAIiIAIiIAIDDoBKXQG/Qkq/yIgAiIgAiIgAiIgAiIgAiIgAiIgAkNHQAqdoXvkumEREAEREAEREAEREAEREAEREAEREIFBJyCFzqA/QeVfBERABERABERABERABERABERABERg6AhIoTN0j1w3LAIiIAIiIAIiIAIiIAIiIAIiIAIiMOgEpNAZ9Ceo/IuACIiACIiACIiACIiACIiACIiACAwdASl0hu6R64ZFQAREQAREQAREQAREQAREQAREQAQGnYAUOoP+BJV/ERABERABERABERABERABERABERCBoSMghc7QPXLdsAiIgAiIgAiIgAiIgAiIgAiIgAiIwKATkEJn0J+g8i8CIiACIiACIiACIiACIiACIiACIjB0BKTQGbpHrhsWAREQAREQAREQAREQAREQAREQAREYdAJS6Az6E1T+RUAEREAEREAEREAEREAEREAEREAEho6AFDpD98h1wyIgAiIgAiIgAiIgAiIgAiIgAiIgAoNOQAqdQX+Cyr8IiIAIiIAIiIAIiIAIiIAIiIAIiMDQEZBCZ+geuW5YBERABERABERABERABERABERABERg0AlIoTPoT1D5FwEREAEREAEREAEREAEREAEREAERGDoCUugM3SPXDYuACIiACIiACIiACIiACIiACIiACAw6ASl0Bv0JKv8iIAIiIAIiIAIiIAIiIAIiIAIiIAJDR0AKnaF75LphERABERABERABERABERABERABERCBQScghc6gP0HlXwREQAREQAREQAREQAREQAREQAREYOgISKEzdI9cNywCIiACIiACIiACIiACIiACIiACIjDoBKTQGfQnqPyLgAiIgAiIgAiIgAiIgAiIgAiIgAgMHQEpdIbukeuGRUAEREAEREAEREAEREAEREAEREAEBp2AFDqD/gSVfxEQAREQAREQAREQAREQAREQAREQgaEjIIXO0D1y3bAIiIAIiIAIiIAIiIAIiIAIiIA
IiMCgE5BCZ9CfoPIvAiIgAiIgAiIgAiIgAiIgAiIgAiIwdASk0Bm6R64bFgEREAEREAEREAEREAEREAEREAERGHQCtSh0Jk2aFKZNm1aJzbPPPhvmmWeeSmnMmTMnLLHEEj1Po1IG5p78zDPPhPnmm69SMv2Shp7ryGPsl2eSIh96rnquIwSaP6kdHuGRoq6lSEP1tf+eiZ7ryDNJ8SkFz35JQ/V1pET0yzNJkQ89Vz3XEQLNn9RvGuGRoq6lSCNFfZ0xY0aYOXPmyM0l/FSLQmfy5Mlh6tSpCbPZXVKPPvpoWHTRRbs7+fmzUqRRKQM6eRSBFM8kRRqjMqYdlQikeCYp0qh0Ezp5FIEUzyRFGqMyph2VCKR4JinSqHQTOnkUgRTPJEUaozKmHZUIpHgmKdKodBM6eRSBFM8kRRqjMqYdlQikeCYp0qh0Ezp5FIErr7wyTJ8+fdT+FDtqUehMnDgxTJkypVL+FlhggfDkk0+G5557rut0HnvssbDIIot0fT4nVk1j3nnnDfw99dRTlfLxghe8oBILLt4Paei5NheDfngmKcqGnqueazOBkW9V21BSqpqG2uGR58En1ddmHmqHR3hUrWukVDUN1deR58En1ddmHqqvIzyq1jVSqpqG6uvI8+CT6mszD9XXER5XXHFFmDVr1siOhJ9qUegkzJ+SEgEREAEREAEREAEREAEREAEREAEREAERiAhIoRMB0VcREAEREAEREAEREAEREAEREAEREAER6HcCUuj0+xNS/kRABERABERABERABERABERABERABEQgIiCFTgREX0VABERABERABERABERABERABERABESg3wlIodPvT0j5EwEREAEREAEREAEREAEREAEREAEREIGIgBQ6ERB97T2BhRZaKLz0pS/NMvLXv/41/Pe//+19ppSDnhB4+ctfHl784hdnq8Q98MADPcmDLioCw0hA7fAwPvXxfc+sPrP00ktnN/m3v/0tPPHEE+P7hnV3IiACIiACQ0FACp2heMyDdZNTp04NkydPzjJ95plnhttuu22wbkC5TUbg+OOPD0suuWR49tlnw1FHHRVQ8ElEQATqJ6B2uH7GusLYEnjTm94U3vOe92QXvf3228MZZ5wxthnQ1URABERABESgBgI9U+i88pWvDBtvvHFYaqmlwiKLLBJYpx65//77wymnnNJ0q1tssUXYbrvtmvbZl5tuuilccMEF9lXbASew6qqrhkMOOSS7i0ceeSQcfvjh4Zlnnim8K8qNlZ3Cg6IfUA54UfnyNPrr89Zbbx123HHHLFN33313+OIXv9hfGVRuRGAcEui0HR6HCHRLjsCECRMyq1mU6/PPP3/4+9//Hh566KHwr3/9yx3V/uPCCy8cXvaylzWU9KQze/bs8J///Kf9yc8fwft+mWWWCS984QvDv//97ywvZU/GQodJAqzPnnvuuXDssceG++67r+zpOk4ERGBACVRpNwb0lpXtISPQE4XO5ptvHnbZZZfcgfiDDz4YjjzyyKbHsMMOO4RtttmmaZ99+eEPfxi+9rWv2dcx277iFa8ICy64YNYp+O1vf5tZEHRycdxIVlxxxeyUhx9+uKNOSSfXGaRj55lnnvCJT3wiLLfcclm2v/nNb4brr7++5S3svvvu4a1vfWvLY/yPTz31VDjggAP8rtBv5StF2UiRRhOkHn150YtelHXAGQggJ554Yrjnnnt6lBtdVgTGP4Gy7fBWW20VULiaoHjn3V00yD/44IPDq1/96uxw6vBpp51mp2pbMwHcjN72treFNdZYo9HvuvTSS8PVV19deGWUOOutt15Yf/31AxNwecIEHBNqv/nNb/J+buxbdtllw84779ywvG38MPcDEyxY4X73u98N//jHP/xPTZ95B+y1115h0qRJWd/LfvznP/8Zfv7zn4fzzz8/PP3007a7cEu5JS/I73//+0ypU3iwfhABERhoAqnajYGGoMwPBYExV+gstthi4bjjjstmVyDMyxxLDPNl/vOf/xzOOuusJvirr756WGeddRr7VllllUBsDaRXCh0sBWyQ+clPfjKzLGpksMSHt7zlLWGPPfbIjvzpT38avvzlL5c4a3wfsuaaa4YpU6ZkN0kn7bDDDmtpncOB++yzT9hggw06AvPe97636fh+K18pykaKNJog9fCLt9JBecoMq0QERKAeAmXb4Xe84x1hyy23bMrE5ZdfHi655JKmffblYx/7WFh55ZWzr6rHRqXeLZNGtJ8809iS9Xvf+16mRCnKwW677RY222yzop+b9s+cOTPMmDGjaZ99QYmH1S2KwlaClc6nPvWp3MmtRRddNEsDy5wi+dWvfhVOPvnkLN5a0THsZ5LghBNOaCiF+My5EhEQgfFFIGW7Mb7I6G7GI4ExV+gw4/O+970vY/nkk0+GY445JmCV04m8/e1vD9tvv312ihQ6nZDr72M//OEPB5QryJVXXhmmT5/eNsOxQqdodtgSIsAyblytpNflK4UyJkUarRiN5W+LL754+NznPtcYEBBLRwGSx/IJ6FrDRKBsO5yn0GFQjiI+z4VGCp2xLUUf+MAHwute97rCi3aq0GGShXYXNyesaLG6McF96aSTThplPYlbFG23TX5xPNY8v/jFL8K8884bXv/614cVVljBksncnz7zmc80vvMBRdTRRx/dsNxlHwGNcdUiD0sssQS7MrnzzjtHuezbb36Lpc9GG22U7frxj38sazEPR59FYBwQqKPdGAdYdAvjmMCYK3T8bHu3L9JeD7gpD7LQSVsrWNUKyy2bRSw7aPcKnVSzvr0uXymUMSnSSPuEq6XGDC9xPZDrrrsufOtb36qWoM4WAREYRaCTdjhPoUOCF198cbjiiitGpS2Fzigkte449dRTM2sUuwjxamz1SPaVUejgznzXXXdlz/OPf/yjJZW9p3nH4PJskue+9NrXvjZ88IMftEMyFy8f8xCrHVygUeyYTJs2rcn1CgufQw891H4OPpjxfPPNlykQzfILi2+Oxeq7lWDlbRM7//vf/wLXfPTRR1udot9EQAQGiEAd7cYA3b6yOoQExlyh4zuB3Q7Mqgy4URgQEO8lL3lJtuUlzmxPq8C7eeWinxQ6zH5hisw9/OUvfwlYPnUjFnfl8ccfz6ymOmXSzTXtHPz7d9ppp+wr90AsnTLSbwodZiRxK8SqBGFVpscee6zMrTSOSaGMSZGGZShF+aLeMZtKWjBhtrcTwa2OZ41gheUHCZ2ko2NFQASKCXTSDvt3uU+R+smgmnhlXjpV6PCepi1l0M47GuvKbqTqey1F+9dNvqueYwod2OEKR5waJkqWX375LOl2Ch3eYbi1r6azAAAbnklEQVQncX6RHHTQQQ1lDM+b737RARaz2HbbbbPT2Y9Lddw/4b2Aq5XJV77ylUyJZN9xkX7jG9+YfSXGzsc//vGmNFi1itWrTHD5435bCe8jXHfNuocJAvqjEhEQgfFBoI52Y3yQ0V2MVwK1K3QIPudjnNA58hK7yPzpT3/KTHf9MfHnbhQ6mB6vtdZaWVC+OA90NOi0sG
JWXpBAXv5Yj7BCgolPA8WHxQCy3wl0jDuZCQNs69iwj84q6ZrEHNhPUGCsmPKEvBBYmuW9TXnAcdwLq08wC4ZZc574Z0Lnh0CC+MrT0TM/d2atrrnmmsztyXfQ8tJLsc+b+Xei6OsHhQ5+unQoCTi50korNRgaF57tH/7wh8yqhHIRS4qykSINn69uyxcr1qGMw5wexSDm85i3Y11DmTehjBIUlZhZZYSZ5c9+9rONQwm+2qmrZuNkfRABEcgl0Ek7HCt0eAeiPEEIUHvttdc2XaOMQof3D+7UDOD9ew2XHtrOWbNmFQ68U77Xum3/mm64x1/23HPP8Otf/zrghmTvcFyXyip0ymTfTxxwPO52/h3Hs6S/ZhLHr2M/SiOUTyYscoErPcKKWl/60pcalkZx34BgzVjaWL+Fc+jLUdbayb777hve8IY3ZId1ay3e7hr6XQREYOwJ1NlujP3d6IoiUI5A7QodryUtk6W8Va7i87pR6DAY9ObGcZr2nRf7Oeec0zQbSGfhzDPPtENKbbHK+NCHPtQ4Nu7YNH5o8eG8884LN9xww6gjMC8mDlGr+6EDzCwVK0fE4p8JCjSvyImPvfHGG8M3vvGNeHfS7/AlKLQpzHyHrt2F+kGhQ6eQzmE7Ia4E5YgVObykKBsp0rA8VSlfseIFM/yiFVKY0WXGGFeAMuKt4sqsgFYmTR0jAiLw/wQ6bYdjhQ5uVlj4IFhSMNBmYsCknUKHSRIsPHCHaSV33HFHOPvss0dZAKV6r1Vp/1rlux9+S63QYZIKKxwEpRHPz1tmMdGBBY39TlyfePKLFUN5D5jQV/vd736XfWWJcybTTOgnsIgEQnn1q2LaMeTj/e9/f0OJZfvjrVdGxf21+Fh9FwERGBwCdbYbg0NBOR02ArUrdBjsvuY1r2lwfdWrXpW5O7EDP+d4uUtmV1gxoZVUVeigxMBCYM6cOdlKB+TPTG+5LlY63s+bfVgZMJNkwlKeJtxD7LONS8l3vvMdOyRjYLNB7KQTw1KiCB2cn/3sZ9ln/48Zzvvuu8/vyvJLB8dbO3DMH+f6t6MQwWfdWw8xCL777rub0vAdX/uB/BODBpchrJnoLCFYH2E+X6d/OSuW+UCIdPxbmXlbntl6hQ7PtNUS9gwu4NROOi1fXqFDwEgrX3RsecY8E8+TeDDeIovyV7VspEgDLgsuuGDWge62fMUKHdJEuchMMWWMwZKtUMdvlHFm88sIblawRLCm+/rXv17mNB0jAiJQgkCn7XCs0KF+EgCXdwjCxMjNN9/cuHI7hc7ee+8dNtxww8bxKIVYFpt2m36Df6/xbv3+97/fOJYPKd5rVdu/pgz14ZfUCh0f24x366c//emmu8bK6thjj22UCSxvKBdmMUSfiuXsLT4a74iPfvSjDaXQxIkTwxFHHNFIk+tZX8svP9444PkPlEX/jo1/5zvWtLhvmXzkIx/p2BXYztVWBESgfwjU2W70z10qJyLQTKB2hU7z5ULwnUBcer797W/Hh7T93umAmwQJvMcsDCa7xPDwgo8+Fha2NDoKFqxrcEUqEm8tMJbLlhOEkECFCJ0iBrW+04zLC37qZhWBkgP3FD9TGnd8cc3C/cViFLDSFKb3JkWWQvZ71S2ddUy1TQ488MAmH3nbn7f1Cp283+N9zBq3swjptHwRfG2HHXbIlAw/+tGPRsVjouPIdSlnSFHQUMurnznsdkn7btOoWr5ihQ51iLJlCktMYadOndrowHcyM+pXJiFQJ7EWJCIgAmkIdNoO+3c5OeC9ssceewTaHgSlPO8eG7y3UugwocJEBa6aCJMkWGOYNQeKAQbcNgnCYD2O05PivVa1/csy38f/Uip04vKChRbvtlg23njj8M53vrPx/qP/xTLhPGsU9MQzRCgnp59+evjJT37SSIJgyVj9mFjfgPcMcXdQHnIefRQUgiZY7hCLr5UQn+kLX/hC4xDYlHUBbpykDyIgAn1HoM52o+9uVhkSgecJDI1Cp90Tj1/u7VZZ6oVChzx+/vOfb1h7XHrppeGyyy4bdWt0dpgpY/CMMEOG64uJ7/higYOywWa97Bg6yygqkHbBE+2cbrfENqKjhpAfzKXLSqcKHWb7WO60lXSq0GmVlv3mlRHt/PW7VcbYtdh2k0aK8hUrdOKYB+TNrz6A9c7+++8/SgnGcbEQNNtcOvJWVImP13cREIHyBDpth/MUOgzOcZkxxQzKXIsD10qh411GGaDz/onfSbECIXbNrfpeS9H+lafdmyNTKXSwZKKPRHuPYO3Md5sUiu8OFwieqY+L5I/BApjnec899/jdTe8xJqVwNUeYcDPLbyy1rr/++izIsZ184oknjkrLfrMtEyxf/epX7Wsoc07jYH0QARHoWwK+/5u63ejbm1bGhp7AUCp0cH+hY8GfmYdTEpgBNGEFBFyQiqQXCh06MD4uz8knn9yYwSSfPsjyu971rsZsZryCg+/4Es+FoIOx7LbbblmgZPbfcsstWcyC+JhU3/0KRgTSxc++rHiFDsqguEPo00F5QKeRWDatpKpCh8CgzDjjImDPBJcqW6kD9yMUc0XiX0ZjaaGTonzFCh0Ui7GbG3y8dQ0dfQYE7WTrrbcOO+64Y3bYAw880BR3od25+l0ERKA1gU7b4TyFDlfwqw7hMoUFK9JKoYMSf911182OK3on8SOuueayiWv2jBkzsnP4V/W9lqL9a2SmTz+kUOjQf8IKGNdshPcq77PYfd4QoCjj+aKQKxLSwDKH2Eh+FSzf5vPexj1rvfXWayh2sLZFkYTrFn0yE69ItH152zPOOCNX+Zh3rPaJgAgMBoG6243BoKBcDhuBoVHoMLDGvJeKThwPi2lS9MBjq5b4uF4odDbddNPMdDnOS7vvzGD5eD6+45tnQUF6fvla/N5bxaZpd/12v6+99tqZSxzHVbHQQQGHIq6qdKPQYaUrVgojBsSECRNaZgGlE7OBRdIrhU6K8hUrdIpiGTAzygwps/HETGjnBgcrWegUlRjtF4HqBDpth4sUOihcUOTaO5Z3JXHcWil0aAPMTbiVRahX/MTvparvtRTtX/WnUG8KKRQ63q2O3Mb9C38HKO9xjTK3Kn4jqDWuTbzr6ZNZ/Bx+YzVI+l68FxD/LmQf7roo9XAtR3CZ+uUvf5ml7ydJyljbyEInQ6h/IjDuCNTZbow7WLqhcUNgKBQ6KHP222+/bGan7JNr1yHohUIn7kiVvZcf/OAH4dxzz20c7ju+zHDmBaHecssts3hHnNTORaiRcJcfvAsOSRDvyK+U0SpZb6HTK4XOiiuumJmT+6DZrfLczl3Iv4zG0kInRfnyCh064NS7PGEG1azj2rk32vnebY2YPFioSURABNIQ6LQdLlLokBvacBRECCsW4YbVSqFzwgknNNxxYovSLJHn/xGLBcULgkUIQZhNqr7XUrR/lpd+3VZV6LCiFStbmRAzDisXLGzyxK+Cxe95ljN+AoVjzjrrrHDrrbfyMStDlCWT22+/vdGP8wq95ZZbrmEJxrHcZ7t4OEzC+IkVFE8saCARAREYbAJ+c
oI7SdluDDYZ5X48ExgKhc5GG22UrVJlDxLTXToMLJFOUFYLGEzwPfP970eFzp577hk22WST7DYICll2lR+CEPpVo3zHl2XNmRGNZSwVOssss0zT6hhl4txYfnut0GEWGqsgPwOJYgm3AVY6M/cuXjDmctWvCp0U5atOhQ6ueGbmTyBwVkuRiIAIpCHQaTvcSqETL0VNG8nxWMcisfIdSwtcc5BWCh3vChynUfW9lqL9y26gj/9VUej4iQZukcDGTGxhaVMk/npFrnS8Q7HoMlc6P4EUx02y69D/YYUqVpVEYmUkizq0W5kzLqPTpk0LrKwmEQERGGwCdbYbg01GuR/PBIZCoYOCgGXsEFY+oHNpA217uJjfMntkZuL9qNDZfPPNw6677pplGRcVghl3I1U7vt1cs9U5sGdFE7PYwB2Hmb8y0muFzmqrrdYUeyleqtfugVWwttlmm+xrvyp0UpSvOhU6J510UsPU/oILLghXX3214dVWBESgIoFO2+FWCh2y4gPXspIigXSLFDoMzlkNEIlj42Q7n//nXa6YdcU6xKTqey1F+2d56detV7C0cm2L889kBLGRLCYccdFwcSoKgmznn3LKKWGBBRbIvhZZA/Ojf3ZY1pBPBCUPLlaxxH0EVv5khTIEy1DKibltxefa943nrr5FrEGE/iATBkWWRnaOtiIgAv1PoM52o//vXjkcVgIDqdDZaqutws4775w9szKuF75Twec777xz1POOZ2s6UejQ4bjvvvtGpdlqh5/torPLYLWdTJ48OfMh5ziWg8af3AcQbHe+/e47T/1goUO+DjnkkIYvPQN1BuxlpA6FTiflyw8CUBayXGqeeOuSThQ6ZctGfM1ela+6FDoEmfbuFQRaJeCqRAREIB2BTtrhdgqdeJYUS1izgI2ta7yLFsFxTz311Nybot7jXoPEy2RXfa+ler/6jFtsNb+PoPi0672QbhQ6LAHM87HJLgLS0xabdUzRfaD8QeFm5xWtysn5++67b2DhAMRPVjHJw2QPykaTPEsf+kI8P8Sfb+fkbX3w7m5dm/PS1T4REIHeEqiz3ejtnenqIlBMYCAVOqyGwQwMwssbC5xWMys+XkeRooalk9dZZ50GqaLj7AAC9y211FLZ1zy/cDuuaOtXaii7Yo8fLJNuqxmvouuyv2rHt1Xa3f7m/ejxY8efvYzUodDppHx55U+RombZZZfNZhytY1t0nN1vN2XDzrVtN2mkKF8+jZQxdOjs0+lHnnjiiWw2td0MrLHQVgREoByBTtrhdgodrogV6SqrrDLq4rFChwka2lIEFx4US7jVeCFW2ZFHHtnYhcvxTTfd1Phe9b3m2y4S7fb92sjQ3A+TJk0KuPJ4Ic9l3aX9eSk+d6rQWX311bO21hQq9LeIh4Q7cRk55phjGqttxjGP7Hzei0yKsbQ5Eh/nrbL4nTKAu7zJQgstlCmYLIZdKwsvO4ct98EzRy688MIwa9as7LP+iYAIDD6ButqNwSejOxivBAZSoRN37Fr53PPgfCcG6xxm/7wCiLg0BEQ0c2LOaafQYYlz3G2QMr7k2YHuHy5gKKJM2l3PjvONFObOrD6VZ3FERwWT4qWXXnpU8NiqHV/LS8ptHL+BVU8eeuihtpeoQ6HTSflac801syVcySgKBjqmfpnuxRdfPPudNE3aKXS6LRuWPttu06havvygKKVCx8/AKn6Of9L6LALpCHTSDpdR6LCKESvdxRIrdFB6Y31j72DiqJx55pmN+CysloQLl62Exbvv0EMPDY8//ngj6RTvtartXyMzz38YZIUOrOnnmKLkkUceyZQgDz/8cHybhd89Tw4677zzwg033NA4nueNOzKrj5pcdtllAWseE5RKxMQx8cGQUQbxbmDJeYR+Hf0qFE+txL/jeU+VXWmxVZr6TQREoH8I1NFu9M/dKSciMJrAQCp06ASwMs4KK6zQuCMC/1pAO9yQsJqxGfx49Qo6Jihh2PJip+LH0k7Bsv322wdmM01IiyU3rYPJDBYzfEVCBxUfdLYIeWU1kDlz5jSCNNPx8coBjkNBgNLAOlnsY9lOlARcG9cUuGDuDidmOeMOdYqOL9dNLYcddliWb9ItO8tWh0Knk/LF82Cmz2YweY6Y1GN1hay//vph4YUXzj7bv3YKnW7LhqXPtts0qpavOhQ68KM+moUTs77UNYkIiEB6AmXb4TIKHXKHG6p/V7MvVuiwz7td8Z13H8ud07bi9mMWsfx2+eWXh0suuYSPDUnxXqva/jUy8/yHXip0JkyYkCnJ7N1EluJ3kbeCoh/hYxKhRIn7Ru3crAiS7NtmlEJYaVnbTR7uueeerK+D+x2TYqak4zf6bij2/GQO52IRbdY0HEfZoM/FPnPBYz9p865oJ361NMoY+ZaIgAiMHwJ1tBvjh47uZDwSGEiFDg8ChQWKCq/Y8A+ImSFbfQGTXFx48GcvEmZ0+N0C87ZT6BDoD8WKrcwRp8vqWcwotpItttgi7LLLLoWHxLNZdiC+4vh/x50z+91vB0mhE7s6MWtmSjl/T/5zHQod0u+kfDG7uOOOO/psjfqMgodZaKSdQodjui0bnGvSbRpVylcdCp3NNtsssLoN0ok7nnHQVgREoDyBsu1wWYVOvIQsOclT6NB28E631Y6Kckz7yQA8XtgghUKHa1Zp/+I8E0yYfHm57rrrspW8/L46PqOcYjn4shI/kzyFTru08vpN3o2v1fm861mynGDXsVA2sMjyq0nGx6Dk4fo2qRb/bt9RcMHF+k/E6CGGjkQERGB8EUjZbowvMrqb8UhgzBU62223Xdh2220zlnFQw04BM2PHQJqAxksuuWTDXPupp57K3FxsOXLSpZPIsqSrrrpq02U4hoDGrJpA4FqbSTzuuOPCvffe23Rs/AVlEh02lkWn4bDVHDgOBVGZVai4HgNWTNOZUfMzWbhTYV6cJ3RGWNUBl5/5559/1CHMdDHzdMcddwTM17345VmLVgvyQXVvueWWcPbZZ/skavnMjB3WF+ZLT+eO5eVbiZ9p6zaAcFH6ZcsXz4xVNijbZnFlabJ06lVXXZXNOB588MHZbmZCWaa3nXRbNny63abRbflCwYnlGUyIdTNlyhSfncZnBmRcg058K3N3Ot9YQDE4QU4//fSsTDcS0gcREIGkBMq2w36gzvvmwAMPzM0HbYFflpqDiqwieKey8hAxwPy7kHNYCOCaa67JLHPyFP0p32vdtn/k04ufcGA/LkFMLo1FQHfuAQVHzNHnz3+On8lBBx2UWUb5Y9p9Luo30e9ihc7ll18+Nwksay666KJsFdLcA+buxPp4v/32y1ZKs+DaHEt/j/PpL/HOaSe+b8NEy9FzV9TKK0/t0tHvIiAC/U8gVbvR/3eqHA47gTFX6PQaOIoX4gQsssgiATctZvzpKA6q0FlDAcI9sSwslkG4nmGyPIj3hWk9HUmEe2A520HpbKHQ4zmgPKSTiek5LnSDLL0uX3452jKWTYPMWnkXgX4h0Ot2GEUu8d9wp2HShaWseR+M9bugavt3/PHHZ5NN9lxbreBl
x4znLQp/+itMwPFcmfiaPXt22xWzPBOsqFninr4cwZHpw/nJO39s/JnJLyYIzLI6z6IoPkffRUAEBp9AlXZj8O9edzAMBIZOoTMMD3XQ79HHcJBFxqA/ze7zTywjrH2wzmFmGxdHH5+h+5R1pgiIQDsCaofbEWr9O0oLFDomKKOwBrH4arZf27EjsOGGG4a99947u2De8udjlxNdSQREQAREQATSEZBCJx1LpZSIAB1hc30jULQP3JjoEkpmQAgQNBPLJ1w6MKuXiIAIjA0BtcPVOG+wwQYBlysTYsP4oMO2X9uxI4Blzsorr5xdkMkBAitLREAEREAERGDQCUihM+hPUPkXAREQAREQARHoKwI+SDPWObgP+9Wb+iqzyowIiIAIiIAIiMDAEpBCZ2AfnTIuAiIgAiIgAiLQjwRY8ptYRChziNfH6lYSERABERABERABEUhNQAqd1ESVngiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTEAKnZoBK3kREAEREAEREAEREAEREAEREAEREAERSE1ACp3URJWeCIiACIiACIiACIiACIiACIiACIiACNRMQAqdmgEreREQAREQAREQAREQAREQAREQAREQARFITUAKndRElZ4IiIAIiIAIiIAIiIAIiIAIiIAIiIAI1ExACp2aASt5ERABERABERABERABERABERABERABEUhNQAqd1ESVngiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTEAKnZoBK3kREAEREAEREAEREAEREAEREAEREAERSE1ACp3URJWeCIiACIiACIiACIiACIiACIiACIiACNRMQAqdmgEreREQAREQAREQAREQAREQAREQAREQARFITUAKndRElZ4IiIAIiIAIiIAIiIAIiIAIiIAIiIAI1ExACp2aASt5ERABERABERABERABERABERABERABEUhNQAqd1ESVngiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTEAKnZoBK3kREAEREAEREAEREAEREAEREAEREAERSE1ACp3URJWeCIiACIiACIiACIiACIiACIiACIiACNRMQAqdmgEreREQAREQAREQAREQAREQAREQAREQARFITUAKndRElZ4IiIAIiIAIiIAIiIAIiIAIiIAIiIAI1ExACp2aASt5ERABERABERABERABERABERABERABEUhNQAqd1ESVngiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTEAKnZoBK3kREAEREAEREAEREAEREAEREAEREAERSE1ACp3URJWeCIiACIiACIiACIiACIiACIiACIiACNRMQAqdmgEreREQAREQAREQAREQAREQAREQAREQARFITUAKndRElZ4IiIAIiIAIiIAIiIAIiIAIiIAIiIAI1ExACp2aASt5ERABERABERABERABERABERABERABEUhNQAqd1ESVngiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTEAKnZoBK3kREAEREAEREAEREAEREAEREAEREAERSE1ACp3URJWeCIiACIiACIiACIiACIiACIiACIiACNRMQAqdmgEreREQAREQAREQAREQAREQAREQAREQARFITUAKndRElZ4IiIAIiIAIiIAIiIAIiIAIiIAIiIAI1ExACp2aASt5ERABERABERABERABERABERABERABEUhNQAqd1ESVngiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTKAWhc6kSZPCtGnTKmX92WefDfPMM0+lNObMmROWWGKJnqdRKQNzT37mmWfCfPPNVymZfklDz3XkMfbLM0mRDz1XPdcRAs2f1A6P8EhR11Kkofraf89Ez3XkmaT4lIJnv6Sh+jpSIvrlmaTIh56rnusIgeZP6jeN8EhR11KkkaK+zpgxI8ycOXPk5hJ+qkWhM3ny5DB16tSE2ewuqUcffTQsuuii3Z38/Fkp0qiUAZ08ikCKZ5IijVEZ045KBFI8kxRpVLoJnTyKQIpnkiKNURnTjkoEUjyTFGlUugmdPIpAimeSIo1RGdOOSgRSPJMUaVS6CZ08ikCKZ5IijVEZ045KBFI8kxRpVLoJnTyKwJVXXhmmT58+an+KHf8HAAD//yA0vZQAAEAASURBVOydCbx1U/nHl6EyRUWUkAz1GguZhTJESkUJhUjF6zU0SIQU0YRUhigk/DNPmV5Cypw5JEIyR/IiXsT//S6ec5+z7t7n7HPO2ueee+/v+XzuPefsYe21v3vvtdf6rWc9a7qVVlrplZDZFllkkTBp0qSeUp1pppnC1KlTwyuvdJ+9KVOmhNlnn72nfPSaxgwzzBD4e+GFF3rKx3TTTdcTCw4+CGnoujbfBoNwTXLcG7quuq7NBIZ+9VqGklKvaagcHroefNPz2sxD5fAQj16fNVLqNQ09r0PXg296Xpt56Hkd4tHrs0ZKvaah53XoevBNz2szDz2vQzzOO++8MHny5KEFGb9NV4egkzF/SkoEREAEREAEREAEREAEREAEREAEREAERCAhIEEnAaKfIiACIiACIiACIiACIiACIiACIiACIjDoBCToDPoVUv5EQAREQAREQAREQAREQAREQAREQAREICEgQScBop8iIAIiIAIiIAIiIAIiIAIiIAIiIAIiMOgEJOgM+hVS/kRABERABERABERABERABERABERABEQgISBBJwEy3n/ON9984XWve1147rnnwiOPPDLecfT9/ImO//a3vz0eF/5cB5kIiIAI9JvAbLPNFt761rfGwz788MPh+eef73cWdDwRyEpA79esOJWYCIiACIjAgBCQoDMgF2IQsrHggguGPffcM05v/thjj4Xdd999ELI1rvKw6qqrhm222Sae8zXXXBOOPPLIcXX+OlkREIHBILDzzjuHpZdeOmbmqKOOCldfffVgZEy5EIEuCej92iU47SYCIiACIjDQBAZC0Pnwhz8cPvShD0VQF110Ubj44osHGtpYzdw3v/nNsOiii8bTO+GEE8Ill1xSeqrTTz9907pXXnkl8CfrjQA9iD/4wQ8CvePw3H///cM999zTW6LaWwREQAQ6ILDYYouFr3/963GPJ598MvBueOmllzpIQZuORQKzzDJLmHvuuQPv/wceeCC88MILlU9z1llnjR5fc801V/QC/te//hUeffTR8PTTT1dOI90wrYe8/PLL6SZNv/V+bcKhHyIgAhUIvP71r4+e829605sC5Rie85RfdHx3UgZWOFQtm0w33XSxo76TxNuVpT6tTsthv699z5GGpTVePwdC0Nlss83C2muvHa/BBRdcEE455ZTxej1G7Lzf//73h+233z4e/9///nf0zimrwK+22mph6623bsor21I544+hQn/961/Dbbfd1rSNflQjsP7664dPfepTceO///3vUdSptqe2EgEREIHeCFCx+va3vx0Yfosdf/zx4dJLLx2WKOXUBhts0FjOO2CvvfYqbaDvuOOO4T3veU/cnnfD4Ycf3thXX+olwDDej3zkI+F973tfo2J/1llnBTrQqhjv/HXWWSfMO++8Ucx
hn//973/hH//4RzjzzDNL3/U0flZcccWw0korhYUXXrjwUPfff3/47W9/G+68887C9WUL55hjjvC9730vzDzzzHETGiC77LJLePbZZ8t2icv1fm2JRytFYMwRmHHGGcMqq6wScB6g3MAQYui0aCVcfOxjH4tl5gILLNAo9zwcysA//OEP4eyzzy597/ntey2HSaubNA444IAoxPu8tPt+zjnnxLK93XbdlsM+3Rxp+PTG63cJOuP1yifnvc8++4T5558/Lm3nnbPWWmuFzTffPElh+M9rr702kNYzzzwzfKWWlBJ4wxveEH784x8HekMxvt9xxx2l22uFCIiACOQisMwyy4RJkybF5P7zn/+E3XbbrdA759Of/nRYb731mg577rnnhtNPP71pmf341re+FRZaaKH4829/+1v0RLR1+qyHAMOoEd24pvTSevvd734XzjjjDL+o8DtCzqabblq4joUIeYhzN91007Bt2I/9q1jVBoSltd1224Xll1/efsbPr371q+Gpp55qWpb+0Ps1JaLfIjA2CfCsr7766lHIefOb3zzsJL/4xS+2FHR8u2jYzm4BHjvf/e53o8eOW9z4mqMc7iWNAw88MOBd1IlVda7othz2ecmRhk9vvH6XoDNer7w7b3rO9thjj7iEytlXvvKV8N///tdt0fw1FXRwPaSwRAVPDXfqgw46KNALJ6tOYMsttwxrrLFG3OHPf/6zerOro9OWIiACPRCgUbzEEkvEFM4///xw6qmnFqZWJOjw3kAAKnp/SNApxFjbwp122im8973vLU2/iqDjPXdJiF5tvHKoJ7z73e8OM8wwQ0yfXu799tsvrvMHTAUdBMIHH3wwdvLgAfaOd7yjsTlDjA8++OBSb5/GhtO+ENuJGE+pVRF02Efv15ScfovA2CKw5JJLBgQbwheUWSeCDiMPGGJFmwavwAkTJjQ6XUn/n//8ZywDKRu95SiHe02D9h1DZVvZG9/4xqbVtNvajbLotRzmgDnSaMr4OP4hQWccX3w79S984QvRHZHf119/fTjssMNsVeGnF3SY+WSHHXaI7oiIOlTSNtlkk/C2t72tsS/eJXiZyKoTIJYRcSsw3Dp33XXXtj2P1VPXliIgAiIwnACzWuGebd4ce++9d2yAD98yhCJBh+1OO+20cN555w3bRYLOMCS1LuA9Tg+1GR0vNmsZy6oIOt/4xjcaw+QQcxjiROwcLK2IM/TguOOOi+vsH4IO9YUbb7wx3hP33XefrYr3GLETvbdvlSHGnBPi0Vve8pZGWvalqqCj96sR06cIjE0CeI/yjjJDdJ46dWpjiCbL2wk6Ftv15ptvDk888YQlFT/xoKftxDBWsx/96Ecx3IT95jNHOZwjDZ+nou9bbLFFWHPNNeMqwm7QMdNqOFqOcjhHGkXnMl6X9V3Qwe2Lcdg8HMRbwTqNocMYf8YRkhaKKRWVboxKKz1EKJNMy0rvUVUjD4z7m3POOWOlacqUKbHBzWcnxrHhgapLRYkCp5/Gefz85z9vVPwOPfTQcMMNN7TMQpGg43dg2nN6wBizalZl2JCuq9EKsbJLcGTuL6zdMLihPfVNBERABLojQJyVjTfeOO7M+4hYOmVWJujQg4kQkAaL7FTQoWeVdzyen/SOdjttOr2puKsTW+Whhx4qHD5Wdo4sH+l3dKu8tVpnjQDYMRSOWcoQ6GxodTtBJxX3TjzxxPD73/++cUiCG/OOMmPYAYKKv+508lBpJw9lRofQsssuG1ezL79bNSS818/dd98dFllkkUbSVQUd6n56vzaw6YsIjDkCJujQtrryyiujoEyHsw0n5oTbCTrtoPBu+clPftIYnUD8V4Yqeeu1HCatHGn4PKXfKaMZlmXxyKoMf81RDudIIz2X8fy7b4IOQY8JSOV7VRBQLr/88ui2ViUoMm7gG220UfQC8cN7cO+mB4gKR1Glb/bZZ48VU9yDqdTRy4TgwEwe3h0PgYmx4LjOlRmq7LrrrhuDONvN77dFXMLLhV7KskoJMy3gxUIPlx/XyfbkgQCBf/nLX3yytX33w604CEEF28060U7QIR3Erh/+8IeNgu6uu+4K3//+91k1zHRdhyGJC7bddtuw8sorx+8adlXMSEtFQATyEaBBbMOtmOUQIbnMUkGHBr29E9PGP2lUEXQQ9T/xiU/EzgD/bmQ4zuOPPx4mT55cOvsigeQJ3osRx+fFF1+M8VsQMEgXw9uRWTQZRlb2fma7QXpHk59u7HOf+1zsLaaDxs7Vx4RoJ+gQEJRrgcGfodi+buCntbf8/eIXvwjEzuvE6AX/7Gc/29iFnmGudZG9613visPDuZ70Iv/mN79pGnpVVdAhbb1fiwhrmQiMDQIMicJ75sILLwzM1Ij5+HD87lXQIQ28cqxdS5B52m/eei2HSStHGj5P6fdVV101bLPNNnExZf3uu+/e0lEiRzmcI430PMb779oFHV68vKzXfM2Vqx3wokBMiDdU1hB9zBW8KB3EkJ/+9KfDeoPoafJiAm69ZTMu0ENEL1aR1w9T11HZoLevnX3ta18r9PghKOSXvvSlJtfnNC0eKHrUqgQsTPft9Lef8YFztmE+rdKpIuiwP+fJDBcYlWtm0eLczHRdjUTxp6/o4vlFhVomAiIgAnUQ4F39s5/9LIoZpP+rX/0q9myWHSsVdBhmhYcPRmObdwkCilk7QQdPGLwzGA7Tyq677rpw9NFHN3mCsL0fOkycFy/kpOkVDQ+ybQbtHW35yvHZiaBDY4fZqTBi4H3nO99pZIFgxASyTI2Zs5jxpRPbcMMNw8c//vG4C8IT94D38rG0uD+9hxHexHTQ4Q1m1omgo/erUdOnCIwPArkFHTod6LimbMJOPvnkKCC1o9lJOVyWVo40LG1i7FibmNkGOacyy1EO50ijLH/jeXntgk4aVI8Zj26//fbYsF988cWjO7O/AEWCju8pYls8cvBgobcIFzqbCpV199xzT5zm2QsHqaDDdqxnam2UWypwPuYLbsX0MKaGhxGeNWYM02JsJZUPhsYg9FiQvyJBB+8e4hN4ryDye9+0ceX0CC611FJNPHDlu/XWW+1wtXx++ctfDiussEJM+5prrglHHnlk2+NUFXQ+8IEPhM9//vON9BAk/JA0XdcGmsIvKNh77rlnY13RPdVYqS8iIAIi0AMB3oF4r5ohwLQaKpMKOnh3UhGk4wM75phjwp/+9CdLrq2HzlZbbRVnJLEdEIUQEhCFCMCL4GNW5NruBR3bjvc7M2qRJwIEW8UbN3yEgHRGpEF8R9u55PjspBHA+4Y6GvbHP/4xHHvssfE7jIhhgxduakxvzzT3nRhTB+MtjVEX2nfffQt3951PBOskaCd1v24FHb1fCzFroQiMWQI5BR3el4je1sFPmxLRu9UIDwPbSTls+6SfOdIgTUJ++DK3XUdOjnI4RxopD/0OoXZBhx6Vd77znZE1FTQqfOb9gghCsFcfqC8VdFBA999//0YlkaE7uPWaCx0JE6sFdzHz3qH37oorrmhc31TQwVuEoVWIMRgxX3AftkpFmTcEFQ
cTj9iX2DPmymwHYz2FBi7faS8Twf8QQzD2+/Wvf91U4WVoGOM7TSnF42ivvfZq6uW04+T6hD+uiRiuiSjM7ayqoJMGTaQSeO+998bkdV3bX1fiRzCu1YwCvMrLwrbXpwiIgAhUJYBoggeq2cSJE1vGdEsFHQQVvHEtkCRiEO8ve0e28tChLkBnh82aRC8h3kIM48J4XyAwEDsPK4rTkwo6dPrwnrdh2Awlw4PDjOE6l112mf2Mn4P4jm7KYI8/OmkE0Dihwwzz9TIfPJPhXIhkFhiU33jOVLX0nsPLi+HqqVGHY1pghDnEOGI7cX/1Iujo/ZpS1m8RGNsEehF0GCXC0CrKIN5XlI3WQQA1hgOfdNJJlQB2Ug6XJZgjDdL2cWyI4UrHe1ks1xzlcI40ypiM9+W1Cjool1TozPD+wAvEGxWBHXfcsbHIVxxYyNjBD37wg3E9lTjS8+O4bUffu2e9N7YuFXSKYgP4igFKK+7EVBy8UaEwD5xW07n6few7lQfGWloBUOaaTF5RSxGZMMQshojVZf6cymYnSY9dVdBJrz+Va2LBYLqu7a8rQ9IQL82qTCNo2+pTBERABDohsNxyywVEHIx3H96braxI0KHCy/BmE2Z8md9K0CFWCx6bGAIQnSe+04blaeM/7Un0gg75Z8hXmobvlEljyAzqO5pzz2WdNALwEDavKDqoGAZOZxPxFeg8w1Oa+hhBtG0ChCqzVNm54OlDhx91HoyJMvhtApxtx6f3FvKij6+3sV0nQ670foWYTATGD4FeBB06V3lHFBmd88SDrWqdlMNlaeZIgzKQ87JRI3jU4llbZjnK4RxplOVvvC+vVdAhfgpxVDBe0ih/qdcKAgcNVas4pIKOrwRSWSCYoTfzymHYFAoqhuCD+7dZKuggmODa641gjnjcmFHxS6ep867BVBSZuamVS7qlxeeSSy7ZFAOFWD/W+8h6Ow++0wNmPZF1z27kA3oVBbIkP6lVFXRSF35/LrquJ5QG9/S8EUGLGkd+G30XAREQgV4JEFB46623jskQm2SnnXZqmWSRoMMOeMsSZBHzsVd8mc8wKD9Dkh/6e8stt4RDDjkk7p/+Y0iYDY9OZ+Lwgk5ZGr43Ei9evHnNBvUdbfnL8dlJI8C/ewj0SUcYnjHWqcUU5cQi8l5ZzCLmO/HK8ky9D29khsFhdKJRF8EzKzXEIq4tRr2L+8h6kHsRdEjPn6MXH1knEwERGFsE6hJ0Hnzwwdh+ZNblKtZJOVyWXo40CLfhO27ojGEUTJHlKIdzpFGUNy17lUCtgo6fApUghXiDFJkf9pMKOrhd05PTidHDxxAqepCwVNApm8kJbwgUS/YnSJQNDbNj+15EllEJ4bx4ABCb+Cyb+pyAzkzP3qmlPDrdv932/fTQOeKIIwIBLTFd1wsCcSBamXoQW9HROhEQgZwEfLy7bj10yA+CC50m5o1qseBaCTo+KGPqOePP0Qs/TEWLl46ZF3SKvHDZztdJ0v0H9R1t55fjs5NGAKKa9dyeeeaZcei3TWmPIMfweepAiIA2u1hVDx0vAnFeZfUcOvoYqm358HUI9utF0NH7FYIyERg/BHoRdHivzTrrrIEpvmlT4tFqM0JCEEcCBO80LlsR3U7K4aL9WZYjDe+kQIgP3sNFlqMczpFGUd60bIhArYKOH1ZDAGJ6YIrMz4bkX+y8xMt66orS8cu4Uc3d2gs6iDUEsioyemgsoCOuv6iu3niQ6X0y7xm/ju9UbqjoMDafgMne0gqMX9fquw9G2Gq7btd5F3TPvlV6VT100hg69K4SBFrXtTnIZBlrgk7ivWaG+IaAKBMBERCB3ATSxjGzEqYetf6YZR46bMO+CEQYHR30/LUSdPB2JU4O5j054wL3j04RhBcsnY3DCzoIEHjwpLbeeusF8o0x/Jd3vtmgvqMtfzk+O2kE+M6em266KQZITmPYkCdmpVp22WVj9m688cYmT+eiPDOjFTNbmTHNOZ4y1J9S89f0jjvuiF7Rfpv0nu1kyJXer56kvovA2CfQi6BTRMd7nLC+rCMh3beTcjjd1373mgbtYuLW2egQG1Zr6fvPHOVwjjR8nvR9OIFaBR0fPO/uu++ON8/wLDRPb+1FBRQ9evfMLr744jgzlf0u+2RWDAIiIt5guQQd0qJCw2xXPMhzzz03i4YZ7sAotd7Dx4tbKLmMuaxiCENVh3VVSS/dxle8r7766nDUUUelmwz7XVXQKZvlStc1RMGv3XUlmDjCohmebAQWl4mACIhAbgLpbBfESmnlQt5K0EnLLoZXsT1Do7F0yJWPT9BK0PFDptI0fIXxjDPOCHj6pNZK0BnUd3R6Dr387qQR4Htv/TFTscx3CrWaDp40/FTh/EakoY6XxitkHUaHhs2m9fjjjweGdHkjOKkNAWM54iFD2emMS4fn+/34nt6jer+mhPRbBMYWgdyCDnS8QwJllJ9YoIxeJ+VwXWlstNFGYYMNNojJ01amHDcniPSYOcrhHGmk+dLvZgK1CjrcLNw0GDFr/NRoPhs+SJIXdNiGGROY0htLgyDGhRX+5RR0/OEIAEkFFbc74gXhwWOWBpdad911w2c+85m4GqGHgI2DYH7qcAQGelHbWVVBx7vHM7MY4pH1wum6tqMcwpprrhnjKbElwweJaWH82u+tLURABESgOgGGoDAU1rxUGYKM90SZtRJ02IeYecSlwehgYeh0maCz5557BqaRxtLYOHHha//8O4UJFvDsMOtV0BnUd7SdX47PThoSnrUdG6GE2a/oNMPSGIitrh2dYH42UuqEeG0XBUG24/lGgC2r8vnAAw/ETrVW2+r92oqO1onA2CNQh6DjRWqEESbUsfKxjGAn5XAdaVBuU/ZakGfezwcffHDZoZqE9dKNClb4crjOsrzg0ONyUa2Czsorrxy23XbbCJYgi8SuMa8Zo427FzeWuVungg5DnJgtCWs1tj5uUPKvLkHHHw6vEyqwNkU700vz0Jr54UeIG8T4scB+ts1IfPqZQxALEA0s9lBZfqoIOhQUjLG3gL7mdm9p6roaifJPH1wUl3caWzIREAERqIuA98q46KKLAsFwy6ydoOPfLaRBJdfeB6l3jfcUvf7668Nhhx1WeFg/lbaf7YiNexV06nhH412yzjrrNJ0Lw8+pQI+EddKQSGMGUndj6Bxxcsyom/kgyGWdbgzJ4hpbXCWEIeoHzzzzjCVV+OmH4hVuULKwVcxG20XvVyOhTxEYHwTqEHTWX3/9xoQ8UKScazVUmW06KYfZvsh6SSOdXTqNTZYeL0c5nCONNF/63UygVkGHKS59kCVmkWKMtbfFF188Tklpy1JBx1fSEBpwZ2snOFha9tkPQYdjMb06btsYM2Thwmbm88Cy1G3Ztuv3J72yXBebJp3Zt26++eaW2Wgn6NDDS6BEIqib8TDjXm2m62okyj+pPHPfYCeddFKYPHly+cZaIwIiIAI9EvAem+0axe0EHbKCJ+qiiy46LFepoMMMlVSMMYbfICwxNNlbKh6kU8X6d0o3Q67qeEdPmDAhMJTHG9PbVh1y7
ffL8b2TRkA6S+Vtt93WFNON/BAkmUDTGI0YOrVSjxs8mOkooq6B4aHMu61sAom40Wv/6CgzD22/3L4TQ8dmZmPZ/vvvH6ZMmRKHXbUTi/R+NYr6FIHxQSC3oEMHBW1cczpIZ1guo9pJOVxHGjgU0IGB0Z6m3C4b9so2OcrhHGmQF1k5gVoFHbxvCKzH2HyMXhm8cayiRsRwKm4LLLBAI4epoMO+9MpZzw6CEJUhS8N25MGiF2iNNdaIMyb4HjBfUaOXqdugyARkZIwks2PgceSN/BGQb7HFFouLi2Z78C7MVHrozbrhhht8MvE7+cUdmODLCCx1m+8dTd3Yi45dJOhw/oxnn3/++aNaPc888zR2LQpmqOva+rr6xgv3LC8NH5OpAVdfREAERCATgTSODuUOs18UWRVBZ6mlloqeuen+qaBDHBTe8xagkYDFxHOzSubMM88cK510EmG8P+kw8e/hXgUd0s39jh7Ngg48uP7GHMGGYfMWxwYPLMQqq5uls4axP/sypN6GoxOjASGFelQO6zYost6vOegrDREYXQQ6EXQI6k8HB+EzrrrqqmHehAgUhNFgJIpZURlo6/znSAo6jIbBO9LK7aqBnH3+0+/dlsM+nRxp+PTG4/daBR2AplHAaZRSWcOY9i0NLJwKOmznAyHyGzEHIYSKJt4g9CRRcbLxgGlQxVyCjrmMMWTq1ltvjQIVQhIPCEKOHZ88nnzyyeHCCy/ka8PYjpmerHLDittvvz26MFMxRRBB3KKiRMW2qtrbOECXX3ylk0obQ+NaDQfzgg6HxBuJczd3ep8NessYm3n//ff7xfG7ruswJI0FfjYX7jUfHLyxkb6IgAiIQGYCeMHyDsJaxUSpIuiQBhME+E4blqWCDst8xwK/ibFC2YdnB501vpPg3HPPDczK4S2HoJP7He3frZbXfnno0GGGSGaeMRyfRog33zFGXcTHJGK7VVddNca9sX0IOMxMldSBqPNYXYbh2gS+Zmi1Nzq5/NS+rGvnOcO77t577/XJlH7vthGg92spUq0QgTFDgGGjdI6bUSaakMEyX/5RhhHnhXAZmG/n0LGAkE37lTKQ9g6ep1b+sT3bMGsU7y1vOcrhHGlYnrwXLssQ6dM827ZVP7sth336OdLw6Y3H77ULOjTyqaihjFaxIkEHl1vcam0a1Hbp1C3otDs+3jmIP0XjKHFzY+x2WrEqSrNfgg7HRmhCGMOOOeaYqErHHwX/fEFXsLqxiFmzTjzxxKZe1MbKaV90XT2Noe9UwLl/7B4hdg4xdGQiIAIiUDcBhsriqYJRgcVLI419x7qqgg7vbeoA3ooEHTpe6Eyw95Df3n/n/UqjPx16nUPQ4Tg539Fphxbp5+gRJZ12hjjFe6SqFV0T9v3kJz8ZPvrRj5Ymw71x7LHHhiuuuGLYNkWCzrCNkgU0qhjeVcW6aQTo/VqFrLYRgdFPYKuttgqrr7565RNhBAkxzrCq7Ry2pQxk5AjePKnlKIdzpEG+cBRAdOJdi/mgxXFBl/+6KYfTQ+VII01zvP2uXdABKIooY63XXnvtpt4iFE0bvmTj588+++xw1llnFV4HKoabbLJJ9GQp2oAhXXju0APmp3dGTeVBJR+oq5MmTSraPVYSaUSXDXFB2cSraL755mu4hvuESJtAksQ64XuZcYzNN988ilwWu8Zvi3cMPZPXXXddw5vJr6/j+2qrrdYYi85sVwQ6LKrEc2y/reUF8YopbtmXqdbvvPPOppg5tl3Rp65rMxUfNZ97GvfMsmvRvKd+iYAIiEBvBOiE2W+//Rres7/85S+jy3maqu/p4501ceLEdJP4m/cuvYBeqCnzOqTHc4sttoizRvqeVBLCK+Tiiy+OnjlF5aGfdpxgzryLU/NlKwLE0UcfnW4Sf+d6R9MRxfvSjF5ghqEXeazaNrk+OQfEkZRjWfpl14TtqZ8R3NmmELc0EPyIV8RQ7SLbYYcdondV0bqyZTQ47r777rLVTcv90CmCbhMLwg/Da9r4tR/+HtD7tYiQlonA2CCQjgJod1a898w7kPARzHyIJyKCSpkRxB9vUdo+RZajHM6RBnljZmaGW9nQ5rL3ZNF5tFrWTTmcppcjjTTN8fa7L4KOQaVnhIeE8fIM06GnrciLxbYv+2Q8PWnggo0oRFr8MT67H8bDxVAxhCJc4RhWxJhwBI1OzoeKFukQt4ApXUkHIYqhZFRe+2nkZe+9947Xh+OWzVZRZ550XUMMTk2MAe4trJPeyjqvjdIWAREYPwQY4kRjHON9xLTiRSJKXUSoKxBDjs4TGuq4wZOPfuaBc+v1Hc0wpLnmmquBqdUMXo2NBvgLPbtML881of5WJbDxIJ0OHWh6vw7SFVFeRGDwCRAOA1Fn9tlnD7PNNlscNoqYTZuvVXiKwT8z5XAsEeiroDOWwI3Fc0GJJkg1hpcNlXhZfwngHoqbKHbLLbeEQw45pL8Z0NFEQAREYBoBH0un3bSmAjacAEIOgo4ZYhTelniFyEaGgN6vI8NdRxUBERABEaiXgASdevmOutSZlYReLNyWGTYl6y8BPHMWWmiheFBcP/vlddbfs9TRREAEBp0AgoQFMybYrQ8gOeh5H4T8pUOTq8wgOQj5Hst50Pt1LF9dnZsIiIAIjF8CEnTG77XXmYuACIiACIiACNRAwAdpxjsHj1eGjclEQAREQAREQAREICcBCTo5aSotERABERABERCBcU+A6bqJRYSYwxBmZreSiYAIiIAIiIAIiEBuAhJ0chNVeiIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQMwEJOjUDVvIiIAIiIAIiIAIiIAIiIAIiIAIiIAIikJuABJ3cRJWeCIiACIiACIiACIiACIiACIiACIiACNRMQIJOzYCVvAiIgAiIgAiIgAiIgAiIgAiIgAiIgAjkJiBBJzdRpScCIiACIiACIiACIiACIiACIiACIiACNROQoFMzYCUvAiIgAiIgAiIgAiIgAiIgAiIgAiIgArkJSNDJTVTpiYAIiIAIiIAIiIAIiIAIiIAIiIAIiEDNBCTo1AxYyYuACIiACIiACIiACIiACIiACIiACIhAbgISdHITVXoiIAIiIAIiIAIiIAIiIAIiIAIiIAIiUDMBCTo1A1byIiACIiACIiACIiACIiACIiACIiACIpCbgASd3ESVngiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTECCTs2AlbwIiIAIiIAIiIAIiIAIiIAIiIAIiIAI5CYgQSc3UaUnAiIgAiIgAiIgAiIgAiIgAiIgAiIgAjUTkKBTM2AlLwIiIAIiIAIiIAIiIAIiIAIiIAIiIAK5CUjQyU1U6YmACIiACIiACIiACIiACIiACIiACIhAzQQk6NQMWMmLgAiIgAiIgAiIgAiIgAiIgAiIgAiIQG4CEnRyE1V6IiACIiACIiACIiACIiACIiACIiACIlAzAQk6NQNW8iIgAiIgAiIgAiIgAiIgAiIgAiIgAiKQm4AEndxElZ4IiIAIiIAIiIAIiIAIiIAIiIAIiIAI1ExAgk7NgJW8CIiA
CIiACIiACIiACIiACIiACIiACOQmIEEnN1GlJwIiIAIiIAIiIAIiIAIiIAIiIAIiIAI1E5CgUzNgJS8CIiACIiACIiACIiACIiACIiACIiACuQlI0MlNVOmJgAiIgAiIgAiIgAiIgAiIgAiIgAiIQM0EJOjUDFjJi4AIiIAIiIAIiIAIiIAIiIAIiIAIiEBuAhJ0chNVeiIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQMwEJOjUDVvIiIAIiIAIiIAIiIAIiIAIiIAIiIAIikJuABJ3cRJWeCIiACIiACIiACIiACIiACIiACIiACNRMQIJOzYCVvAiIgAiIgAiIgAiIgAiIgAiIgAiIgAjkJiBBJzdRpScCIiACIiACIiACIiACIiACIiACIiACNROQoFMzYCUvAiIgAiIgAiIgAiIgAiIgAiIgAiIgArkJSNDJTVTpiYAIiIAIiIAIiIAIiIAIiIAIiIAIiEDNBGoRdCZMmBB23XXXnrL+8ssvh+mnn76nNJ544okw55xzjngaPWVg2s4vvfRSmHHGGXtKZlDS0HUduoyDck1y5EPXVdd1iEDzN5XDQzxyPGs50tDzOnjXRNd16Jrk+JaD56Ckoed16I4YlGuSIx+6rrquQwSav6neNMQjx7OWI40cz+uZZ54ZzjnnnKGTy/itFkFn6aWXDjvvvHPGbHaX1FNPPRXmmGOO7nZ+ba8cafSUAe08jECOa5IjjWEZ04KeCOS4JjnS6OkktPMwAjmuSY40hmVMC3oikOOa5Eijp5PQzsMI5LgmOdIYljEt6IlAjmuSI42eTkI7DyOQ45rkSGNYxrSgJwI5rkmONHo6Ce08jMD5558fTj311GHLcyyoRdBZZJFFwqRJk3rK30wzzRSmTp0aXnnlla7TmTJlSph99tm73p8de01jhhlmCPy98MILPeVjuumm64kFBx+ENHRdm2+DQbgmOe4NXVdd12YCQ796LUNJqdc0VA4PXQ++6Xlt5qFyeIhHr88aKfWahp7XoevBNz2vzTz0vA7x6PVZI6Ve09DzOnQ9+KbntZmHntchHuedd16YPHny0IKM32oRdDLmT0mJgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAgkBCToJED0UwREQAREQAREQAREQAREQAREQAREQAQGnYAEnUG/QsqfCIiACIiACIiACIiACIiACIiACIiACCQEJOgkQPRTBERABERABERABERABERABERABERABAadgASdQb9Cyp8IiIAIiIAIiIAIiIAIiIAIiIAIiIAIJAQk6CRA9HPkCcw222zhrW99a8zIww8/HJ5//vmRz9Q4ywFR+t/+9rfHs37kkUfCc889N84I6HRFYHwTUDk8vq//oJ293kmDdkWUHxEQAREQgUEhIEFnUK6E8tEgsPPOO4ell146/j7qqKPC1Vdf3VinL/0hsOqqq4ZtttkmHuyaa64JRx55ZH8OrKOIgAgMBAGVwwNxGZSJ1wjonaRbQQREQAREQASKCQyEoPPhD384fOhDH4o5vOiii8LFF19cnFstHfMEFltssfD1r389nueTTz4ZvvnNb4aXXnqp8Lynn376YctffvnlYcu0oHMC9Ib+4Ac/CPTSv/LKK2H//fcP99xzT+cJaQ8REIFRR6CTcnjUndw4yXDR+7Hs1Pvx3px11lmj5+1cc80VXve614V//etf4dFHHw1PP/10Wbaaluud1IRDP0RABNoQqFoGtiv/SGfOOeeM5deb3vSm8N///jeWXZRhZe2TNlmLq33+qGfz187e+MY3hrnnnjtQjpJv8vDYY4/FPLXb16+fZZZZYjrk4YEHHggvvPCCX932+3TTTRfmnXfe8PrXvz4888wzMR9td9IGtRIYCEFns802C2uvvXY80QsuuCCccsoptZ60Eh8iMOOMM4ZVVlklIKrNMccccQUPNqJKu0JuKJU83yhYvv3tb4f55psvJnj88ceHSy+9tDTxH/3oR+Etb3lL0/opU6YEhgjx99BDD4Urr7wyPPvss03b6Ec1Auuvv3741Kc+FTf++9//HkWdantqKxEQgdFKoGo5TPmwwQYbNE6Tiu1ee+1V2kDfcccdw3ve8564/W233RYOP/zwxr76kp/AT3/604CI0s6ojOON1c4YgvuRj3wkvO997wtU5rGzzjor0AlXZhx/xRVXDCuttFJYeOGFCze7//77w29/+9tw5513Fq73C/VO8jT0XQREoIzAWmutFTbffPOy1U3LTzjhhHDJJZc0LePHIossEttHyy+/fEAASY13HuXfueee23FYgo033jiWp5Ym6VAOltk73vGOWB+30Qt+O9pqjGQ444wzwr///W+/atj31VZbLayzzjpRjDFB6X//+1/4xz/+Ec4888zAu7mVIShtueWWYcKECU1M/vOf/4RbbrklnHjiieHFF19slYTW1URAgk5NYAc92Te84Q1h9dVXj0LOm9/85mHZ/eIXv9h3QWeZZZYJkyZNinmhcNhtt91aqt9VKqwIPL/5zW/CDTfcMOwctaA1Ae6RH//4x41Cm+933HFH6520VgREYFQTqFoOf/rTnw7rrbde07lSsT399NObltmPb33rW2GhhRaKP//2t79FD0Bbp8/8BH7xi18EOmzaGZXv7bbbrnSzBRdcMAp33Bcm5NjGv/vd72Ijwn6nn5tuumlsPKTLi36fc845sUFRtM6W6Z1kJPQpAiLQigDiM6JJFTv11FPD+eef37TpzDPPHH7+8583LSv7QXtlv/32C4wqqGJ0WtN5bYIK+/zhD38Ixx13XOHudITQye63L9oQz6Hvfve7pd4yCDmUyWWGQEVHy0033VS4CZ3+5APPnDKjjUDbrFOPn7L0tLw6AQk61VmNmS2XXHLJgGDDcJoyGwlB56tf/WpYYoklYpYoXClkW5kXdKiUErh39tlnL9zl8ssvD7/+9a8L12lhOQGU+DXWWCNu8Oc//1m96uWotEYExgSBquVwkaBDhRIhns/UJOikROr97QUdKtdTp04tPCAeOnvuuWfhup122im8973vLVzHwk4FHRo+Dz74YHTRp1FDr7MZww0OPvjgtj3EeicZMX2KgAiUEUgFnVZDO2lr/OlPf2pKKhV0EDuYpAXvf4YZMSyZTzM6KRg10G5kA6LMHnvsEd71rnfZrvGzTNDhGD/84Q8DnjFmeDP+5S9/CTPMMENYdtllwwILLGCrYmiE733ve43f9uX9739/2H777e1nFFzwyuG83v3ud8e0WEn+EadY5w0xf5999mmMoGAdLBjuRTnOkDQzOtAPPfRQ+6nPPhGQoNMn0IN0GHpVqYyb8QBT2aMAM+u3oMOsVgcccECjB3DvvfeOFT/LT9GnF3ROOumkMHny5MA4+3nmmSessMIKYd11121StIkJQ6Erq05g0UUXjXGM2AO3zF133TU89dRT1RPQliIgAqOGQCflcJGgw4medtpp4bzzzht2zhJ0hiGpdYEXdGhAEK+mUzvssMMCXjFmxGuwGShZVkXQYejDjTfeGO+J++67z5KK73piJ/phEVWG9uqd1EC
oLyIgAiUEvKDTzcQeJujgdXPhhRcGOoW9KI6AwTDi+eefv5GDgw46qK0gTXgRwoykViboLLXUUmGXXXZpbJ4OzUIgQqhB2DGjnp4OvfrGN77RGPKMwI/oQ+wcjGFcfthtUV7wEiINM88UT1A6cswDlzYl21b1WLI09dkbgb4LOgSUwl3riSeeaFQwOo2hww3MeG7SQh2kktGNoTiiLKJ8orzSe1TVyAPuZzzUVHgY2kNDl89OjGPDA5WUh8sXGJ2k08m2JuhwTGLMUPmmt8yGO5FWvwUdX/jCAXfEdlYk6Ph9GLNPbzMiD3bXXXeF73//+36Twu+57i9eCLirE8OHeD7wrmrcUwQ94x6n8OXe5K/Tsam93l88Iwhhpr6XjTWuel7aTgREYHAJdFIOlwk69IRSmUtdrjsVdPAgpfyjskhP4PPPP98VuF7KYQ7YaxnaVaYz7JRT0IE/w+mI00BnizVi2gk6DOfmXcb+ZbbDDjs0GiPcM/xu1cutd1IZSS0XAREwAv5d5sUHW9/uk3IG8RiRmc7MIqPd9J3vfKexqmjoVmPltC/E/MT7hTIRL1bq9DZ8qUhEYd+Pf/zjYcMNN4zJUC7STkvbibRjGWplxlAxRHSztKOGODe///3vbXVsa1DPN2O0A20n/w7/whe+EOMJsQ1iEV6dPh/MistMhGYMveadIesfgb4JOqiSBN71QWy5mVE9CTZVJSgyw3E22mijKD74seE8GNy83KRFlT6G4SAQ4J5G4xplErddXOb8sCN6sBg/+M9//rP0CpBXPD/Ir/dosR0Ql66//vrYS1lWKUFg2GSTTaIq6uPXsD15IDAW7nR1GcGsCGyI6mwKqo+bwHH7Leh4N3+CkyEctLN2gg770/tHD6FZqzgw3d5fBA4m0BhGIYbowlhVKr2IQxgvBGZvo8Avuy/YDhWcAtyCh7LMDEGI8akIcK08jXLfX9tuu21YeeWVYzY07Mquhj5FYOwR6KQcTgUdKoH2TkwrjJCqIuhQXn7iE5+IFUf/bmQ4zuOPPx69MIuCV5J+znI4dxlK/vptOQSdz33uc+Gvf/1rjEFn7y3c7qsKOlXOGS+dz372s41N6enlWrcyvZNa0dE6ERCBXgWdqgQPOeSQRjuSDvJf/epXpbv6Iay0cRgGZXX9MkGH9+HHPvaxRpoIK6khEOFNaUYeyIsZ+5MOxrv0K1/5StPkBXjnpMGWeX9ce+21cR9mJeQ8OQ6WttHoPGdGYmvvsA0iPu98Wf8I1C7ocIF5Wa+55pqVzqpolivEGypriCiopmWGGEIjP+0NQp30nhkormUzLqBI0gNV5PXDWEYqGwtO87poZ1/72tcKPX5wSfvSl77U5LacpsUDh7JJxPJ+2UgKOtwjP/vZzxqeNGlhVMagiqDztre9LSridt8UDQfo9f7yyjXjTr2Qk+a9rNBmO2YDQUizvKb72u9W8YDquL98hRsPNF4GMhEQgbFFoNNyOBV0EJqpRGP04FHB8z2b7QQdPGHwzqBXtJVdd9114eijj27qPWT7XOVwHWVoq/Opa10q6FB206lFzJxeLLegQ+8znRgYohH3gO8ZLsqr3klFVLRMBETACKSCDu0KOuTpeOjEW97SK/qk7UCsGHMw4B1IG6PIfAwbnAbwqCHAcDtBB68XvF8wykdEIc7B2zvf+c7YbrVltHcZkWBGu4KZBjFmFfReRczgVRQUnxkMzz777LgP06QTEsOM9poFTqbe4Gcntm3I65e//OWWHdi2rT7zEKhd0PE3MVmmMnH77bdHlXDxxRdvCvTE+iJBx6uLbINHDh4suHbj8mYPBOvuueeeOL0yoohZKuiwnPX0POGhQgWOhr8Zrmj0MKaGhxGeNWYM07r55ptj5YMhKQg9FuSvSNChMOGh8F5B5Pe+aePK6RFkrCSVWrOf/OQn4dZbb7WftX6OpKADex/Ei4p/KsoVnXwVQYf98Mqx3t7LLrssznrl0+v1/vINCUuX+wovGkRAgkpS6GG8SBiOkMahoaLN9eYewfDywesMDtwbxAWyIGxlgk5d9xfB23zQzKJ7O2Za/0RABEYtgU7L4VTQYZw/wRstUOQxxxzTFGiynaCz1VZbxZkXDSCiEJVPRCGCNvp34ymnnBLrCrYtnznK4brKUJ/Pfn33go4/JvUnhgCbt2c78cTvy/fcgg6NGt5tGHWhfffdN35v9U/vpFZ0tE4ERMALOtBAYKAezicd9gRnp71JB3+35uN5kQZeLEzdnRqeq7RxCNNB25N2IMelLWDt17LOXtou+++/f+O9iucN71bOA8Nrhlg+VobS9iBmmi/XqbPT3sb++Mc/hmOPPTZ+533HEDDyldqll14ajj/++LiY6dt33333xiaU2Ta6Y/31148OF42V7gt1glbBqN2m+pqBQO2CDt4uqIcYFTQqfOb9gghC8CYfZC8VdNKbGdWRiordTKS7yiqrRAXTPBvovbviiitYFS0VdGgsM7QKMQbDnQyXM3sgyrwQ/MPHvoxTtIcqJjTtHw8n4ghDb/wDxXo//If9mHXJR1ZnaBjjI817CI+jvfbaq6mX046T+3MkBR0q63g+mU2cOLFpbKYtTz+rCjowNK8qBDKEE7Mc91fakEBs5P6y4X8M5WIogxnTqCMseeO+4f7CfIHvt6HwZcYp7ouiadjrur+IY3HggQc2skKFvtWwxMaG+iICIjBqCHRaDqeCDuUg3rh4T2CI0ZS99o5sJehQF6CSi7CNMYsHvYDWE0k5TaWU2HlYUZyeHOVwXWVozHSf/5UJOj4b1MXoufa9uX590fecgk56z7Xq4fZ50TvJ09B3ERCBlEAq6KTr+c27ibo4k6p047XjhRLae9Tz7Z3lj+dn5vPDsnybskzQIZ01p41wIdaseQLhTIAgz/sSRwALZcL5HHHEETHshz8+Hjk4P2C+jb3FFlvEtFlOmwLBi3Ac9ttmqiLgMp6TZtZGo22NpxGdOBybtg0dM2Z47ljgZVumz/oI1Cro0IimQmd25JFHBoJTeePmQV008zcbyxjD/cEPfjCuphJHekWKn+/du+222wLRxs1SQScd/8d2aYMaF7T0AefGNQ+cKtNq2/H5pALClHbmqeHd2fx25JUeKkQmDGW2FwXZp93q+0gKOsstt1yggMBgjpteFasq6PjxofRM+nsyx/3lGxLkn6EGXnDkXHzBXRRI0nuyUTAWBT5rxaTO+4uXCI0DsyqR/G1bfYqACIwOAp2Ww0WCDhVL3L1NmEHYJu4W1krQ8XECKP8oL9MyNG38p0Nzey2H6yxDR+IO8IIO7yUaGrPOOmujDmJ5Iq4g16aoXmXb+M9cgg4dFHT4UefBmCiD39YR4o+Zftc7KSWi3yIgAp5AKujgmch7yeLA+G3POeeccOaZZ/pFbb/7YZ9sXBQ3juW8t3if4XBAHihrbfIc3y5oJeiQDsOe2J7OjSLD6593Iu3f1OjENg9XC1aM4wBeN5Yv2kUbb7xxI/Ax7U7an5g/VzxmCRuCEX5hySWXjN9pu+PV44Mrq60Q0fTtX62CDjFB7MLzkubip1
4rCBxcdLvZUkHHVwK5wQgq6828chg2RZwdjIqJn+YtFXQQTHDt9YZLHB43Zjw4VDC8eddgKpsM5akyNIg0uOl97BHECK/k2nmwLaqp9UT2a1ahkRR0CCi89dZbc+oxaDVjRKtYVUEHgYhpzLH03shxf/mGBO6WuF2mtummm8ZAySzHewwvMm9pY4UYSoxfTUVFv4//Xvf9hRhb1EjzedB3ERCB0Uug03K4SNDh7P1sF368vi9rGY7qK36+jC4rQ0kbt3UbHp1Wwnsth+suQ8l/P42ODGK6MTMVXp14ftJRhMco7yMTUsgT2xx11FGVspdD0KHeR6cFw5Ex8kaHF55ZVU3vpKqktJ0IjD8CtGl4p+ERg9e8zchEp8MGG2wQh/daBzsiBeUaHb5VjCFIjC4xjxm8ZfBipxzzxnq8Y+ydxYQ3TDtuVlXQobOBdyTthDLj2EzIQ9vCztW29WUlecCpAe8Zc1A47rjjAoKS97D1nd/wYkIiDFEKJwzfvsfTEzEescyPgPAdOpYXfdZHoFZBxyukVCz8tGr+lHgwmHkJSwUd3K7pyenE6OGjMsONh6WCTtm4PuvRYn/GINrQMDu270VkGQ8Q54W7MmITn8zcVWQEdMZlrlNLeXS6f9XtR1LQ8d4pdXvo4Kro48HkuL98Q6LI+4tr4J8F73Jp14d7HIHQ9x4gPtHw4b669957Y3wo7s0iq/P+Um9oEXEtE4GxRaDTcrhM0KHySqeJVZap4DHUtZWgw/vWhhoXeTAaaS/8pOVor+VwnWWo5X9QPunAoj5mrvrUZfCSTTvcivKbQ9DxDQeO0Wk9R++koiujZSIgAlUJMBMtwrZZ1VEXvN94X+HtiFFPpywlpEhqfspxhh4h7vg6fBVBJy2rOQYTAxD2gPYSQ64sXAjraCvgWeOP42fiwhMJAQtvHIw2BqFQeAfQsW4z9pZ56JAu7Ws6VwgTgiFmERuX9wnCvJk8dIxEfz5rFXT8cBYCEPsL7U8PLx7UPsy/2AkeXOTt4Pct++6DNnlBh5uRiN9FhppoAR1RGwma5Y3GNm5p5j3j1/GdB4KHg3GECAfe0gqMX9fquw9g1Wq7XteNpKDjh7txHttvv32limVVDx2umcXQQak/+OCDI65c95dvSFBY0nOc2nrrrRdoAGFlU3+vvvrq0TvLGkJpGrw4UPd58fjCmu3qvL8ImEbBbMbLCyFTJgIiMHYIdFoOlwk6EKEMRyDCEKQZhtVK0PGB61t5pdIpgvCC4c1BRdSs13K4zjLU8jhIn76HlXxVLdd7FXR8I4fjMjUuPcjUn6qa3klVSWk7ERCBIgLUsxFY5p133riauKi0KVoZggXDlEwIxxOGd1A64oM0aCdSVpoXDx6ptA+9VRF0/CyA7Fvk9ZJO7PLLX/4yXHXVVY1DUbabNw6zUxEgmbYughCeOjbShDg5xMvBmJTFRq34zh7WETrF2uy+Y4U4PTA14/wVb9No1P9Zq6DjAy7dfffdTdOe+VMrE3QYhuXdty6++OI4M5Xft+g76iMNd2v05hJ0OBYPAbNdEYiZMY1FxkPOQ+I9fLy4RcOcgMhVDGHIHrYq23e7zUgKOhSofmYLCszHHnus7alUFXR8Y4EAaAhuWK77yzckmGqeHubUqgg67INLJT0HSy+9dONFkKZV5AVU5/2VTomIR11Rb0SaT/0WAREYPQQ6LYdbCTppmUFllu0ZGo1RsfVDrujhw60cayXo+KGraRq9lsN1lqHxxAbsX9qbypArhl61s14EHR+LgeMwVIE6XtWhxZa39P7SO8nI6FMERKAqAe+RQpvDz+SUpkF7gfiYNnyK9iUOB7Q1i4wJTAiGbFY0+xXCigk+xDLDKwZj6nMLJuzL27LhyIhTtKEsb2mnsQ8XYvnhM+2ALhOY0pAQlgZtWUY8MHs1lnYKESQ6ndHX9tVnfgK1Cjp+3B0Kpm+0+1PxkcK9hw7bEGWbaZuxNAhiXFjhX05Bxx+OChEVVMako1b64TLMXsXUcmbrrrtu+MxnPhN/IvRQMAySjaSgQ4HG0CfzjmLoG7127ayKoEMBx7R8FqOIgpKZNMxy3F+9NiQsL/6TWAfzzz9/LCBRx83DiG14kXD/+BhPdd5fa06LsI84izGMkRhHnfSmxh31TwREYKAJdFoOtxJ0OFEfMJFKL8NKywQdKoVMRY2lsXHiwtf++SFX9BLi2WHWazlcZxlqeRykT64H712zotkXbZ3/9A2MVsPj/D58T2cjpU6I13aVIMhpWnonpUT0WwREoFMC3iuzbHZj0qSsRBSxGZupgyOAt2qnpIJOJ3mjXWIz2fo2SirA+DT9+w+vGMppM//etGWMQMGbBgcIDFHIx7P172HaUQyxSi1tq6211lpxNme2gxHH5VPWHwK1Cjorr7xy2HbbbeOZoD4Suya9uDS0ealb5O5U0PHDZTqpPHh8dQk6/hiot1Rg7YFPHyg8Lhh3iDFtOt/TwFU+vX5/H0lBh3P1CjLDigjc1c6qCDq+wCY9vHXoFTTLcX/5grRXDx3LV/rJcCw/HSCukLhEmtV5f/kgp7hr+kaAHV+fIiACo59AJ+VwO0En7dWj4miB1VPvGj9Ei8COhx12WCFMKqA2/Wo6xXWv5XAdZShDg/C49Mbw87JeXb9d3d9TPvm7AABAAElEQVSJWUQsCLP03WjL089uBB3c+LnGNpyYxgRDFaxnNz1Gu996J7UjpPUiIALtCOBBQoc8ZkOD033oaGa7RRddtLGqivid1tkbO1f4QtuGIWC0j+m0sHKzbHZkkqStTZsbS50G0vivtMMZBm0eQexDpzHtITPvQAED6v3mTcQ2Rd5CtGt5j2JpHuJC/auVQK2CTlphSBuhnBkuZ3jomKWCjq+k4R2w2267NYId2z7tPvsh6JAHplfHbRvDewL3NTOfB5a1Ulptn35+jrSg48eAtgqg7Zm0E3QogBHZzHOqqMDOcX/5NOoSdDhvXNMRDjEi2TNbllmd9xcFP+ljJ510Upg8ebIdVp8iIAJjiEAn5XA7QQcseBL6irChSgUdZqhcf/3142qG3yAs4c7tLa1wMmz58ssvb2zSazlcRxnKZA8MB/JGnqsOufb75f6Op6XNMkXa/KbjrZ11KujQYCJtawxQ0eedUjaBRLvjs17vpCqUtI0IiEAZAbxFGWJlYgkhPf7v//6vaXPKLGZ0sqm5WclMy8SxbGd0XlisnbJtiVmDJz6GR87JJ58cv9N+NOcHRhhY3NY0blzceNo/zgEPGgsDkm6XetgwtbmPi0k6BElm8haM4Pi0nbz3ZOrlg/jjZwUjJikivbW3vIdPTFT/aidQq6CDukgwJgs6Ra8M3jhWUSNKOBW3BRZYoHGiqaDDvvTK2UOHVwKVIUvDduThoRcINzfS8D1gvqLGQ9JtUGQCMj7++ONxGry04kP+UHEt2riPEG559A8EDwoKqLnV2TZ8kt81pw1z4SFGtOiHjbSgk8ZvoOeQqVZbWZGgQ0T4eeaZJ6y00koB9z+7b0inKChZjvur14YEeePlwjVnqB6NndS4rxA+behYU
fT4Ou4v34ji2eG6UCGXiYAIjD0CnZTDVQQdZuDAMze1VNAhYCPveSvfiAGAS7vFVqFcp4JpM2Hx/qTDxL+Hc5TDucvQkRJ0qOPQAcZUtBaLwa4BQ9g/+clPNoJLs7xqJwrbdiLocL14b1kl/8knn4xiDPWobk3vpG7JaT8RGB8EmFacabYJ2MuwqHT2PmK9ED/HOiqhYl4xRoi2A+8DC+7P8twiRVnMGssDn/6dxO/UO4h3JuU5IU7Mzj777IA3jzfq7vb+hAchUEyQwZuWjgdrL/lAx5YGwjxtXDO/DfvhnWPCFyEZEMvUVjBa/fmsVdDhFBg3TUXLjAtMZQ1bbrnlGoqirU8FHZb7QIj8RsxBCKHBjysY6iMVp7KgirkEHQuuy5AppmFFoEJIYrgYDW47PnlEab3wwgv52jC2Q0W1yg0rmOoN8YeK6ZxzzhnFLR4uHlLOs6gy3Eiwhy+44CEgmCGu2cPMMi+Y8XAiINQdrRzvK84dq1JwekGHiv9zzz3X8GCJibh/VGyPO+44t2Toa6/3V46GxAc+8IHw+c9/PmaKQha3fHowcdmnAkt8Cbs+3Cs0bmzsq51JHfeXn1WGe94HKbfj6lMERGDsEKhaDlcRdKDCBAG+04ZlqaDDMj/sit/EWKHMoZeUzhqEerNzzz03nH766fYzfuYoh3OXoSMl6FDPoF6EMakCdSXe6Uwzy7vEPD1Zz3uEXuD777+fn01GvQChzbxrWOn35bevK1Cf8XGN/JAGtsXaDbPiHcPUu2Wmd1IZGS0XARGAgJ/Bj3YBdWpEZNoylIvUqb2lQYRZl44wYVm7sgvPmLLhwuyfWhVBh3zg6Wr1f9LAw4YRB7Q/GeViQg3rCOVBmZ12iK+66qqBoapmcLnnnntiCBDar9YuhRGd36TvjeMzHboXwXhHI9KzzIZCs0+RB5BPS9/rIVC7oMMNR0UND5AqViTo0KOEmuqV0lZppbNk5BZ0Wh2bdQg0iD+pKsw6xhfyUKWVItalVqegQzwWxnhWNTyrEBnqtBVWWCGq0RwD4Q9F2dwOi47rBZ2i9SxDFDn++OOb4s2k2/Z6f+VoSHhBJ82f/w0Ppi0s8uxiu5z3F5V47mO7VxlDSwwdmQiIwNglULUcriropFOeQq5I0OE9TQeGCRFlhHm/0ujHA8VbjnKY9HKWoWmHFukXzVLI8pzmBZ1W6dIRwjBa8lRkCFy8A6pael2LBJ12aRV5n9o+eicZCX2KgAiUEfCCTtk2thwhmzLHC9OsKxJ0bJ+yz6KRGWXbsryKoMN2fig0v8uM9gFTljNhQJHhyfPRj360aFVcxv7HHntsUzgHvzHvaPLcaigZIg88vfesT0Pf6yNQu6BD1lH2GJ+39tprN/X0UJnAbYsLb+Pni1zF7PSpGG6yySbRk8WW+U88ZmjoMkbdT6uM5wyCBPlAlZw0aZLfrfHdYpRwUxcNLeGhwqsIJdJcwxs7T/tC2gT0JcYI38uMBvLmm28eRS5mM0oNhZWeyeuuu67hzZRu0+vv1CulXXr04LXqNWu3f5X1iH8cx8aBUjBdddVVpbtyTdOCBQGHHkn7I85MWvEvS7Db+8tPd0swZ+6B1Px0reSJGDjeGEe74YYbRk8zIuoXGeo/sZeoNLeyXPeXzzPPFq72PBsyERCBsUugajnsK5m8syZOnFgIhfeun1KVjcq8/eglZEY9KuTs5w3PWOIc4JlTVA7lKIfteLnKUD8tLmnT+8kw9CJvGDt2jk9iEsEwfT9a2uQDb5oTTzwxvittefoJByrn6bVIt7Pf6XUlRgTeVZ3YAQccEO6+++7CXfROKsSihSIgAo4Ak9MwqzCijPcudJvEdif16csuu6zwfUI7D0+XTowy9cADD6y8i49jRrsRcb3M8KLhnCzmTrodXjGMDEmH2Kbb0dYmUD/e/97oRCcGaJkYZNsykoQhvYSJoK5ghgMDeSCUSKv2r22vz/wE+iLoWLZ5sLgZGS9P0CfUzCIvFtu+7JPx9KSBCzaiEGnxh+tXP4xKDqIDQhEuyUx3hzvfY4891tH5UEkiHeIW0IgnHYQoXOWovI5Ho/JHJRCDA9PZFlXe62QzkvcX9wSxk7i3rMDl3uLPi5RVzr+X+wuhkcCTNoywVa9plbxoGxEQgdFDYKTLYeoKlINUqhkSxHBf3gf9fhf0UoZytXFdn2uuuRoXvtUMXo2NMn6hfkEFnKFWiGXEHnr44Ycjy27qXhmz1nFSeid1jEw7iMC4JkCZR1uRujRtNYy2Ih2+/Wov5r4A1Mkp13mv8G5EiKHt2W44WJoPvG0YfksatMU7DVJPuBP2Jx2GtBGHjbRkI0egr4LOyJ2mjjyaCPgYDkcccUT0VBpN+R8LefVTLhZNTzgWzlHnIAIiUE5A5XA5myprqHAj6JghRuHliLejrHMCeid1zkx7iIAIiIAIjA8CEnRGwXUuCqzYabbx+GGoGR5Ng25UhC2IJoG50rGtg57/sZA/egFwqcQYajdaezPGwrXQOYjASBBQOdwb9dVWWy3G/rNUcGX3AYNtuT6rEdA7qRonbSUCIiACIjD+CEjQGQXXnCCNTAnXq2233XbjdihXr+y0vwiIgAiIgAhUJeCDNOOdw/DhdOaRqmlpOxEQAREQAREQAREoIyBBp4zMAC1fZJFFSgM5V80mHjpMfacxjlWJaTsREAEREAER6I7AEkssEQMCI+YQt6ZsJqnuUtdeIiACIiACIiACIvAqAQk6uhNEQAREQAREQAREQAREQAREQAREQAREYJQRkKAzyi6YsisCIiACIiACIiACIiACIiACIiACIiACEnR0D4iACIiACIiACIiACIiACIiACIiACIjAKCMgQWeUXTBlVwREQAREQAREQAREQAREQAREQAREQAQk6OgeEAEREAEREAEREAEREAEREAEREAEREIFRRkCCzii7YMquCIiACIiACIiACIiACIiACIiACIiACEjQ0T0gAiIgAiIgAiIgAiIgAiIgAiIgAiIgAqOMgASdUXbBlF0REAEREAEREAEREAEREAEREAEREAERkKCje0AEREAEREAEREAEREAEREAEREAEREAERhkBCTqj7IIpuyIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgQUf3gAiIgAiIgAiIgAiIgAiIgAiIgAiIgAiMMgISdEbZBVN2RUAEREAEREAEREAEREAEREAEREAERECCju4BERABERABERABERABERABERABERABERhlBCTojLILpuyKgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAQd3QMiIAIiIAIiIAIiIAIiIAIiIAIiIAIiMMoISNAZZRdM2RUBERABERABERABERABERABERABERABCTq6B0RABERABERABERABERABERABERABERglBGQoDPKLpiyKwIiIAIiIAIiIAIiIAIiIAIiIAIiIAISdHQPiIAIiIAIiIAIiIAIiIAIiIAIiIAIiMAoIyBBZ5RdMGVXBERABERABERABERABERABERABERABCTo6B4QAREQAREQAREQAREQAREQ
AREQAREQgVFGQILOKLtgyq4IiIAIiIAIiIAIiIAIiIAIiIAIiIAISNDRPSACIiACIiACIiACIiACIiACIiACIiACo4yABJ1RdsGUXREQAREQAREQAREQAREQAREQAREQARGQoKN7QAREQAREQAREQAREQAREQAREQAREQARGGQEJOqPsgim7IiACIiACIiACIiACIiACIiACIiACIiBBR/eACIiACIiACIiACIiACIiACIiACIiACIwyAhJ0RtkFU3ZFQAREQAREQAREQAREQAREQAREQAREQIKO7gEREAEREAEREAEREAEREAEREAEREAERGGUEJOiMsgum7IqACIiACIiACIiACIiACIiACIiACIiABB3dAyIgAiIgAiIgAiIgAiIgAiIgAiIgAiIwygjUIuhMmDAh7Lrrrj2hePnll8P000/fUxpPPPFEmHPOOUc8jZ4yMG3nl156Kcw444w9JTMoaei6Dl3GQbkmOfKh66rrOkSg+ZvK4SEeOZ61HGnoeR28a6LrOnRNcnzLwXNQ0tDzOnRHDMo1yZEPXVdd1yECzd9UbxrikeNZy5FGjuf1zDPPDOecc87QyWX8Vougs/TSS4edd945Yza7S+qpp54Kc8wxR3c7v7ZXjjR6yoB2HkYgxzXJkcawjGlBTwRyXJMcafR0Etp5GIEc1yRHGsMypgU9EchxTXKk0dNJaOdhBHJckxxpDMuYFvREIMc1yZFGTyehnYcRyHFNcqQxLGNa0BOBHNckRxo9nYR2Hkbg/PPPD6eeeuqw5TkW1CLoLLLIImHSpEk95W+mmWYKU6dODa+88krX6UyZMiXMPvvsXe/Pjr2mMcMMMwT+XnjhhZ7yMd100/XEgoMPQhq6rs23wSBckxz3hq6rrmszgaFfvZahpNRrGiqHh64H3/S8NvNQOTzEo9dnjZR6TUPP69D14Jue12Yeel6HePT6rJFSr2noeR26HnzT89rMQ8/rEI/zzjsvTJ48eWhBxm+1CDoZ86ekREAEREAEREAEREAEREAEREAEREAEREAEEgISdBIg+ikCIiACIiACIiACIiACIiACIiACIiACg05Ags6gXyHlTwREQAREQAREQAREQAREQAREQAREQAQSAhJ0EiD6KQIiIAIiIAIiIAIiIAIiIAIiIAIiIAKDTkCCzqBfIeVPBERABERABERABERABERABERABERABBICEnQSIPopAmORwHzzzRde97rXheeeey488sgjY/EUB/qcmPXg7W9/e8wj/LkOMhEYZAKzzTZbeOtb3xqz+PDDD4fnn39+kLOrvNVI4G1ve1uYeeaZ42ydDz74YI1HUtIiIAIiIAIiIAKdEpCg0ykxbS8Co4zAggsuGPbcc884bf1jjz0Wdt9991F2BqM/u6uuumrYZptt4olcc8014cgjjxz9J6UzGNMEdt5557D00kvHczzqqKPC1VdfPabPVydXTuAHP/hBmGuuucLLL78c9t5774DAJxMBERABERABERgMArULOp/61KfC8ssvH8/29NNPDzRmZOUEPvzhD4cPfehDcYOLLrooXHzxxeUbv7ZmxhlnDN/4xjeiB8aLL74YfvSjHwU+x7q9+c1vDrvttlsUKp5++umw3377jfVT7ur8vvnNb4ZFF1007nvCCSeESy65pDSd6aefvmndK6+8EviT9UYADx0aRXg9wHP//fcP99xzT2+Jam8RqInAYostFr7+9a/H1J988slAGfLSSy/VdDQl228CvpyvUsZvsMEGYaONNorZvPXWW8NPfvKTfmdZxxMBERCBjglQ1s0555zR2/RNb3pT+O9//xseffTR8K9//avjd9p0000X5p133vD6178+PPPMMzGNdhnyZW27bRHMW9mss84azwNxHY97zoFzof0jE4HaBZ3tttuuIeiccsop4YILLhD1FgQ222yzsPbaa8ctYAWzdoYr9M9//vPGZjvuuGMstBoLxugXhrCYiDN16tQwceLEMXqm3Z/W+9///rD99tvHBP79739H75yyhtlqq60Wtt5666aDsS0vDP4YKvTXv/413HbbbU3b6Ec1Auuvv35A4Mb+/ve/R1Gn2p7aSgT6R4AK6Le//e3AME3s+OOPD5deeumwDHA/09A3o6zYa6+9SiuXvJfe8573xM0pQw4//HDbVZ99JLDxxhuHj3zkI40j0nH029/+tvG76Msb3vCGKEi/8Y1vjKsPOuggvQeKQGmZCIjAQBBYZJFFwiqrrBLbn7PMMsuwPPG+ouw799xz2w6Bp9zbcsstw4QJE4JP6z//+U+45ZZbwoknnljaif7Tn/40IMS0MwQivGJTY98VV1wxrLTSSmHhhRdOV8ff999/fyzD77zzzsL1Wjg+CEjQGbDrLEGn+gWRoNOe1T777BPmn3/+uGE775y11lorbL755m0TvfbaawNp8QKSVSdAo+jHP/5xo0LA9zvuuKN6AtpSBPpAYJlllgmTJk2KR6LCihdkkQj86U9/Oqy33npNOaJyjCdukX3rW98KCy20UFz1t7/9LQoERdtpWX0EEOkQ63yv8R/+8Idw3HHHtT2o99LR9WuLSxuIgAiMEIG0k7tVNnjH0TGMJ2qRzTHHHNFbFc+cMqMeh3DzwgsvDNvkF7/4RWAURTtjVAUOEKltuummYZ111kkXF/4+55xzwplnnlm4TgvHPgEJOgN2jbsRdHD/o8eMQoNC4Wtf+1phwTJgp9pzdiTotEaImr/HHnvEjWiQfeUrX2npuZUKOrhzMqyt6GWEiyf3HD0DsuoE6OVZY4014g5//vOf5aVQHZ227BOBr371q2GJJZaIRzv//PPDqaeeWnjkIkEHd3YEID5Tk6CTEunvb0Qc3gfvete7mg5cVdDhXfDDH/6wIQYRS0cBkptQ6ocIiMAAEEgFHeq/xP3Cy5z2EkOK+TRDoCZURTrkiSFWdIqatyrbkwaxKN/xjnfEoVyWxg033BAOPfRQ+9n49IIOgg+jCYqMDlJiXaaWCjoIUJS7bE++yIcZw2cPPvhgeU8akHH2KUFnwC54N4LOgJ1C37IjQac16i984QvR5ZStrr/++nDYYYe13MELOsxos8MOO8TKOxV5XhybbLJJYLYTM3ol8DKRVSdALCPikWD/+9//wq677hqeeuqp6gloSxGokQCzWh1wwAExLhmHadVoLxJ02Oe0004L5513Hl+bTIJOE46+/2AoN/WL1KoKOuxHXCUaQxix2PDUlImACIjAIBEwQQevmwsvvDBcfvnlTUIKMXUYAmze6+S9aBgpQ4SJT2rmJ7Sgo5POC/M6RQxi29TTxws6COqEL+jEEHSom994443xvXrfffc1dkdwIuaq96zXcP4GnnH3ZUQEHW5CGuO4sj3wwAOlY+7LrgaBRQluxQOFWjqS06lyLiikjLFEAUY9rWqcA258TzzxROMhH0lBh0JwwWkzIj377LPhoYceKnSzLzs3Gv2cC8o1nh39sDJBh/N45zvfGc8DJTtV3dvlbZDur3Z5LVtPbyxxlRjmg9FzQA9CKysSdPz2BGHDw4RxyWZVhg2
RF64V93sv90cvzxp5oLzhRQ6TKVOmRCGFz06M55z7nB4fyq6y3payNDkHgiOTD6zdMLiydLRcBOogQGwVYqxg3N8MzymzMkEH7z0qtqn7eaeCTq5yuJf3Gufe6zNfxq+fy9/ylrfEYQWUfXhPUU+xIQSdCDo+zhrXeZdddunnaehYIiACItCWAPUsOs8QN+g4KzI6Kb/zne80VuGJikeqN98pSgxKPGh8nY+ZS5nB1Izhxgw79taroEPbinKbtm6Z0fm67LLLxtW8d/ndabunLG0tHz0E+iro8MBwc66wwgqxkmSYEADOOuuscNVVV9miYZ80yD7xiU/ExiRpmOFi9vjjj4fJkyeXzt5DhYwHkTSee+652Oto+/tPKq82C833v//9wANsNvvss8fK7QwzzBCFgu9973uxcUtvFfuYob4S7PGf//ynLRr2SU8Zs1lRyTKjgoWKTMCtKkGRCT7pg3NZOgy5YjxoWpm29XwSmJWKGUYBxD6M0USthhFGIcgMW1yzVgUD+3Eu/ppwLpdddlkU2vyMXa1mV4oH7fBfKujgMvm5z30uLLDAAo3zgAMBOI899tiWMV+6vb88Syq4NNaLYsvw8mDIE/cPxjhXGNVlfrgVx6DiTf5aWTtBh30RRXC7t2FYd911V+BZKTKGbTAzCudu27MdDQp6GwgkVyTG5nzWeEbWXXfd+EzRsEuNsgfvJbwKyu5zZqjCO4kpnP19zvY87wQU/ctf/pImXfp72223DSuvvHJcr2FXpZi0YgQI+OFW7TwwUkGHd6s9Yzzbv//975vOoIqg0205zIF8Wdzrey33M98EYgR+7LTTTuG9731vPDIiMsHyLUB1J4IOHly+vKceQuePTAREQARGG4FDDjmk0X678sorw69+9avGKdCByXrrFE3fh9Sx8ba2NhM7IrrwnvPWq6Dj0yr7Tjvrs5/9bGM1nkO0i2Xji0BfBZ12aMsCOiHIoDja1Mtl6Vx33XXh6KOPHiZm+EoIjbAvfvGLhUkgxNi4ytTV3KfBzii/ZRHHERHYP/VU4cHnoVtzzTULj58ubDXL1ZFHHtkQB9L92s1y5VXnf/zjH01CTppWWWUPBRw3PxNs0v3S363OJd226m8v6LAPQkGRyMU6PEOYarXI3bGX+4s8UKm1Qv+mm24KP/vZzzhkw7in2MZ6RBG8GJfbTmBpJNDFFz+jEvehDfNplVQVQYf9v/SlL8Wo+3xHDGQWLYRVM8QbGlcIk9wnZca1IJBc2vOQ41njmHDnxbbgNK+zdkbcKa5LarjTcr7kqcw4d3plzjjjjLJNmpb7ly8eQgh9MhEYaQK8nyi7EDMwKrdUcsssFXQYZmWzJ9EZQpnje0fbCTq9lMPkMcd7jXTqeOZJd6TMz3RIR9N3v/vdOHSqG0GHc+A9yrXCymZAiyv1TwREQAQGlAD1VDzXrbOR9xcde2Zzzz13HH5sv3k3Ur/HeFf6mSBtG9qXX/7yl5s6B1NBhzofHbtFHb+WTqefG264Yfj4xz8edyMPtJdbdep3mr62Hx0ERkTQobLH9Mc0uJZaaqlGYxhk+++/fxRLPL6tttoqrL766o1F7E8wViqL7373uxuVCzYomhrdNxBzCToci4Yc58GYSSqBPr4IvZP0UnrzFSuW80DffvvtMZ3FF1+86TxY30oEoZffxAsKJAtiyX6dCDpsj3EOBAbjmtCTZ6ozQ0twn0/jfODeR6FhhpiCJwwNfM6F4TXeWp2L366T76mgw75cE5gyjI3gj36MLPljnGxqvd5fDEGiMWH2f//3f9G7yX779Ln/8CSCdZ3GSwVPOMyP+211zKqCzgc+8IHw+c9/vpEUgoQfuvSxj30setPZBtwbeLAgYOGtYw0J1t9zzz3xmfeCkH9eLY1OnzX2w3MMzxozhkTefPPN8UXHkCeEHgsoVyTo8HwRT8R74JHf+6aNYabRS9llDRuOQUPn1ltvtcOVfnJf+uB3Rccu3VkrRKAmAry/8Dw1Q4BJxVZbx2cq6OAFiPce7xDsmGOOCX/605/id/61E3R8Ocn2nb7nvaDD/lin77W6nvlXc9P//3hMcU3xrKQMpTyjM4p3upXDZZ02ZbnlOlP2YXgV//rXvy7bVMtFQAREYCAJ+HiGZBBvHKYgN2Pa8913391+RhHc4uP4DtPGBq99Sb3hvaDjt6VejHcjsSgRk3oRYHxsM+qn++67rz+Uvo8TAn0XdLjZDjzwwMYsGDSqaNCYOEGDi157MxpeVEJsqMqdd94ZexFx78YYAsH+NO4xGo3p+H3fQMwl6CBc4NFDfjHc83beeedGwMCinne8dojtglFZpfJrXjycJwFSvSdAVRGEShvxUsw6FXRobHMuNvwFcQjXe7Pf/OY34bLLLrOf8ROPE64dhmcD19Rcr6k8UsCYRwrbVD0Xtq1qqaDDtaURYb3KiFIID36MK72TeCWZ5bi/SMuPpUUEoxKN6JiKeHhx/O53v7PD1/bJvTRhwoSYPkHhTj755LbHqiroMPSIe92MIX733ntv/MnziChrjTqGZPFCsxchGyGAwcu8d/Cqu+KKKyy5+Ax4t/5unjUS840WnlOeEe4RbzRqmKaZIRrpCxUPNJhg7EfDxTdQGRrG9M7mqYfHEc+F90rwx7LviJ08L2Z4a7Uaomnb6VME6iRA5wgebWYTJ05sihdgy+0zFXQQVPBANa9NxCCeB3vmWgk6OcrhVNDp5r1W1zNvzPr96WfV80MKfNnYqaDj02TorK979Pv8dDwREAER6IYA7UY6nzHqfrR5rF3JsrTT2t6HtNFoR1DH5d1G+4jOCDM8d4g/Z1Ym6Nh6PmkH4hFLfblTS9/bqadRp+lp+9FLoO+CTlHjxaud9CIhBtjwB+Lm0OOP8fBQEfGNQ5anN3TqKl6HoJOOpyQfNA7JH8Z5bLfddo3AwogfVG7NGDKF54S3973vfdG7xpZVFUF6EXQQH3CNT5n6Ch8ChB9Okp5Lypv8p6JQ1XOxc6/ymQo6RZVLhDam8bPYDmnlNcf9RV4p3Lm+JmLRmGFWKRR+OzYeQuSFe6Nu44Vj3idls86keagq6KTXHzGQWDAYMYw++MEPxu+IqzApGlrme+NTzyn/vJJQp89aPPi0f55Bq+mXbXv/ieiCJ5V5qhHj6+yzz/abxO/kld4Q7jOsyMMwrnD/8KjjJW9WNLuCrdOnCPSLwHLLLReotGK8F/Dya2VFgg5x4RBjrQPGlw2tBJ0c5bAXdLp5r9X5zLfiWNc66kW8xxHO6Q2Gv3lS+vd7+k5slx+CZtvQOs2o0o6W1ouACAwaAT/snbwVxXzz29BJx9B7DI/0JZdcMn6nXXPppZfG2JlxwbR/aX3OCzq8lxCNZp111kbd0vZjMhrK6KL6sm2TfuIIgaMA9VCMkQn8ts75dHv9HtsE+iro0AuNoJMaFSlmy7EeewLL2pAUP3QEdzjc4ooMjwgb8pTG4vENREShHDF0aMThbeQtFVaoNPGAYSuuuGKjQOBho1BIPQJoPFIY2DCOqiJIetxOPHTKmDJVHgGPMbwn8KIwW2mllR
oMKZxQtovOBQ8kCyJb9VzsGFU+U0GHnkJEndR8j2I61XaO+8uOh4DCUBrzTrHlfPYjbo4/HmKEBd0ueln5be17VUEnHZrhZ2ryjTYq+wTV9mbPOEMUibOD8QLDTdXMP68s6/RZs3S8GyqCJWVMqyEkth+fvLB9bBu8Bn3vjZ0H226xxRYND0HPgnVl5mNg+UZv2fZaLgJ1E/AzGFG5JJBuKysSdNjeeyvipWgzifiygfc773mzHOWwF3S6ea/V/czbufbjE9EY7lYnInD7RRdd1Dh0L4LOBhtsEIPdkxizSNKAkImACIjAaCDAUCo82C12Dm0CPKbTjlZfziGI067y7Ti8aij7iJ/JcHuztD6HNzujAq6++uoYw5Pj0AFIpzftLBNj2J9tjjrqKEuq5SftRTzELdg96VLvZxSLbHwS6Kug411+U9w+0N4vf/nLxoxXe+yxR2NIQ+op4tPwFcL0OL6BmEvQScdJWl5MjeU45N2GVPnpYHm48R4oMj9UpqoI0ougU+T9QL58flOeH/3oR8MnP/nJmP1W5+JnTKl6LkVMypalgk5ZZHevtHM9fIDgHPeXz59vFNly7gXEhH4WtN47pW4PnSOOOCIQkBwjcJwNn7Tzb/cJH156vDQx/7zyu9NnjX0w3+vPb1543K+4tSI28WmegKz3RkDnzTbbzC+q9L3KfS4PnUootVGfCfjhod166JBlRAREWPNus9hSrQSdHOWwF3S6ea/V+cz3+VLGAJkEysRw/0fcoZw160XQkYeOUdSnCIjAaCLAu4l3DR4yGJ2J1JUJgZGabzdYHRXHAYbaY4hAxOuk4xQhxSz10LHlRZ+03Ti+db5SR8VLNu0gL9rXD29mfZW6Z1E6WjZ2CPRV0Gk17AE3bVMqfQOUhrB5ebTq/abxZdN903DGO8TMNxB5MHv10OkmDT8UhUDKvgCwfPLpZxCq+oD2IugwfTYeTamtt956Megly9Oplb3HC0FgvTrt06GRTrwVrOq5+P3bfU8FHYa4EW8lNaaIJog0ljZUctxf6fEQlnB3NyPuCrF9+mm+wl6VfVUPnTSGDi85ggUTPLjMg67dueNNY8P+cjyvHI+eE4Z8cZ8UGS9PPAUYA03AZG/py9Kva/X9j3/8Yzj22GNbbRIDlPLSN+OFjtAkE4GRJOCHDJMPZq9rVbEs89CxfRGIMIRT3u+tBJ0c5bAXdLp5r9X5zEcQffpHeYcntPVAe49ny4J/P3Q65Mq//9OYh5a+PkVABERgkAggmhACwcSTqVOnxnZiOtLC8uw7OFhGiAw8dDDfyc1EH+aFyjrK3k5iInqvH/avUh9kRisT7Nnn2muvDXh9U6eVjV8CfRV0WgVr8kNEvKCDCsqQLKyVoOOHCKXu3DkaiL2mwbCMNV+brvzuu+9umg7P3379FnTKgvS2EnR8xTeNf+LPhVmwCCyGVRUV/P7tvqeCDl5aCDapea+ZVNDJcX/5480111yxQEdkM8MriALfD9mxdXV90hizBlVVN86qgk7ZLFcMFfTi3sUXXxxngWt3joxPJoCp9SD3+qz54zH8jdmuCMTMNJRFxoudQHbmTcc2XoClF6fqTC4IQ+2GdREY3Q9TwCuvqIeoKK9aJgJ1ESD+F541ZlR+H3vsMfs57LOVoJPe44gKbM9QSyx9R+coh72g0817rc5nfhi8GhesscYaAdHFzM/cYssIBmqCD8Pr8FjEqHv5gJ62vf9kKJ65+Y9EZ4XPi76LgAiIQDsC1E3xzLchqNQ16Xyk3llmaWxW2476IKEVbNrxtCOEkQnprMC2b9Fn6uHDkCvq7GXmPYfYhiFj1LuL2j5laWj52CTQV0Gn1cufYLHmyuaHXPHgMM0vlsbG8ZfED7lKp2lmBg16CHEBL/OuoTefILZmNLgYH27WayPTj8dEEfYVZzsGnz7yelURpBcPnW4qvl7sofJHY7jIuhk+VpRO2bJU0ClrGCOkIahh6ZCrHPeX5Y8KMo0ggganlno5petz//ZThyMw0DvezqoKOv5ZwyMK8ch6Bg499NA4pTfHKgqW3S4PrO/1WSs7Bi9OGpSMXaZXhGfeLC2b1l133fCZz3wmrk7vGdun209/PzLMjAaS8es2Te0nAr0SoPxiyKTFAGP4MD1/ZdZK0GEfHzySijNDMcsEnRzlcK+CTp3PfBnDOpangk4nx6D8vuGGG1ru4utqaWyeljtqpQiIgAj0mQDvHTzAbYZh2oCIJq3ebWQR8Qfv89TS96KvN5M29WM+qxr5471rVjSrsK1LZ4ilLYkzhIIgG6Hx/dlXQadsRgSGalBJsDH33kXYexpcf/31TaKLv3R4QOD6hqWeQKnnAA0oeqW8peJAbkHHD/vh2MQFSR96Aq3ycNoQs0EVdJZffvk4gxf8aNCjSFv8E2NK44BhJTZWteq52P5VPtNrVjZ21XtvpcPdctxfllc/7I8GOgGazUOJbaoGJ7b0evn0vQvkhXs+vUZp+v7FxAsCD6vU8JZjOKPNYmPDKWw7hjiZoNUq5pVtX/RZl6Djj0WZQIPTXvJpwHY/rIx7nOGDePLkMB809qabbmp6medIX2mIQLcEfCBxgujSYC+zdoKOL4NIA088KzdSD50c5XCvgk4dz/wcc8zRmFzAOPIOatUzbNt1+7n66qs3TaPbSToEf2cYVZnROeaHs1PvIvC1TAREQAQGjQCdE7RPFl100UbWWgkmjY2mfWFfhBbzZGRdUbB9H1qim86/hRdeOMb1sWMz/Bivm9RoS/CetHYyDgeUxeYplG6v3+OPQF8FHfASkOrRRx9tIp26kPl4GsyEw7TmGC5lrMPlzRsNSBqSZgyPuPzyy+1nfABQVe1BwFuHhqi3NIBqbkEnfWiLZmTCDRoPHbOqIki/PXRSIaWoly4df1r1XOzcq3ym+Ug9s0iDa447v3l/cV/44TM57i+O46f85beJin7YGffvAQccEFDV6zZeQtxjNp12u4o6+Wkn6PCC23rrrcMKK6zQyH768vGNKgQk4gm1E5Iaib32pR+CDodienWGWWDMRkdcCTOfB5aVxeSw7Tv59PHCTjrppDB58uROdte2IlAbAe/Z1yrgPRloJ+iwDW7uvjLNMiwVdHKUw77s6cbztI5nfsKECXFGlVfP+tX/6TvIr8vxHdHM4kSUpcd7af7554+r8cg5+eST43fKwbSjyafhO6YYQkxHQavt/b76LgIiIAL9IkAdmJmpbIpxjsusq8RyrWreG519aGc+9NBDjd1xRkBUMW/vVqNIGjslXyhDbQgrq4ocDvAqZ7mJSwhH1CPLJvVIDqGf44RA3wUdYq6gelrwWipRNPrMKyWNycJU0PQC2TTBDF3BXc7GCyJm0NOOYILhWUDDLPXA8UEX6YGisWsVkQUWWCBWuvzsPLkFHfJPsCviFGCoq3jjmDiFJwtiFXkxqyqC9FvQIX/enR7PBXgS8R1DaOEamIjCsqrnwrZVLRV0uCe4t3zvp5+Rg3T333//RrwAfue4v7iHuV/s/iFAMMIN9xfLuH+tgk1BzO9+xNPxvd5FYhfn761I0
EEQo1eWyj+NrnnmmaexSzoFPCu4vzk/E0/xUkJAs/vcdqbRQY8DwwO4N/w18w0rGHYTxJzj4DH1+OOPxwB2aXlA/ui5WWyxxWKWirwH/cuccoUhZEXDEcjvmtOG9XE/Ipy1Mi8+c24I3NwTMhEYBAJpHJ2iDhjLZxVBZ6mlloreqLaPfaaCTo5yuFdBh7zlfuZHQtAxxq0+uw2K7Huj02GqrY6ndSIgAiLQLwLU7yjLLY4kx+1GbEFIoZ5o5oMhcwzKQxOM8IQn5IKvz1F3pUOTwPNpbLKZZpopzhZsk/lwjKJOFNq2dPSbaMTkIYg51G1lIuAJ9F3Q4eAMb2CoATc0vf24JZshcuCS7M03TFmOhwOzK6FW0ij0jcxzzz03nH766X73+N0HJWYBDThEFR5K8mBxA2zH3IIO6TL+kUqnGQ8+AhWGh0catLVIBCG/eBP5/PKdhrEZhYefnQTxjB5LE7ByVHxTd3rSxvWaYzGMxeePfBWdi+W3289U0CEdRJ2rrroqelwQr8Fm2WJd2ohgGdbL/cU9SKPHhu4g1CBo+EKdHmoq0Fw7DJEDAaxu840J7geG+bUaNuQFHfJGby1DrGyYhM/vlClT4jDJInd7P8SNfRBzEELwzOO+YGwyeSsLdp5L0DERl3uS8oLnnXNBPEbIseOTR3qoL7zwQr42jO0YQ20vUlYgWlJ2IBAhdCHA8iwg2HKeMG5lflheqxniWqWhdSJQJwE/S1+rSnAVQYd8EmPNd1SwrKgs7qUcJs0c77Xcz7wvg8kjVreHzqtHaf2/G0GHYaoMa7b32H777Rfuvffe1gfSWhEQARHoM4F0RASHbzc0idmRfRxV9qGsoxOYOqkZ7U9EFZZZmA/Wpc4ILKP+aIGYiWVJHZh6Ip3dxIalTDVjSDJlalqnRlBCWPLW7lwIkqyy2RMbH99HRNApQ3vppZeG448/fthqHhwaSvZgDNvgtQU0tLiRi4Z4sC+BiK0ykqbBw+bTr0PQoTFJpXWZZZZJD1/4u0gEoXGZFjqFOycLJ06c2GjM56j4kvya07wSmPGqiCmFE0zpecWKziWu6OFfKujQcLchRmmyRJ0nTlPRdIK93F9+xi+OydSBeMOk5ocysK5omFq6T47f/oXC1On0qpZZKuiUbUcEfuIBpV4vtj1CLUOzfO+IrSv6TGev43rQA4H14qFjgk7RMf0yyg229SKorUcQJOaNf/HauvSznaCD+MdxLC28yRC2ZSIwSATo4KB3E0OYRrC2zgCfz6qCTjr8ljSKBJ1eymHSzPVey/nMp5045POSSy6JM3byfaSsG0FnnXXWCYj1WFFP8kidi44rAiIgAp5AkaDj1xd9px6IeJMa7yXKS/OyT9fzG5EHsTutE/v6d9F+toyOaIbf825IrUjQSbdJf5fFE0230++xRaB2QYeGHdNGYzRimeKN8YJeBMBr4KyzzhrWQ+5RI2TgZcPsNH5ftqEhzxTJeOYUVTwtHXrlqahag4rlbI+SecQRR8RhM6xjWToUgt58vIc4Nl4YkyZNsmSbPhGUytJgQ/ZnGBBudjYekuU80LjzUSBYzKCzzz47cmG9GYIFDcEy4cK285/wYfylNVj99KxlwoKPa3TFFVeEo48+2ifZ+I5yzOwgFKAM/UJMY1p2vIS4VhZvpSymQSOhLr4wRThDm2CKxwh5pFLvry/JUtgye0erqaG7ub8o4LknzFq5oJNHhtRx/2PcQ7hrInzVaX7KdgQ2xgCXPSN+W8sT9wxTF7MvU3LTi1EUsM2295805DbZZJPoyeKX23c8ZvDcocfaX5tczxoiGp5v9KLYkE07Np9cAwK/EsOG72XG/bT55ptHIbbouaP8wtvmuuuua3jcFaXlnynOfZ999im9FkX7a5kI9IMAHQ/0FJrHqJ910h/fi9Q8A3QaFBllH50pvsOkzDutm3LYjpnzvZbrmff1H/KJWz5Dr9NeWDuHfn36uA2UfzQmWhl1FUR2GxpPfYnyTiYCIiACg0aAOh+e8p0Y3tfE2ywyvLEZPoXXv/dYp36MZw7D8YvqkIQpoB1UJgbxPuC4dJBSxy4yH4ezaH3RMtpFtMNk44tA7YJOEU4a/ossskhseOMxQeOmrJGZ7k/FAs8MHlgaw+yPG1vV/WmQ4TVCrADc1rjpizx60uPm/s15EJeEvDC0BXXYBJfcx+pHejSYGTqHsGLXwgfELPNcqSNvXFtilXB9EevwnKhqvd5fVY/Tr+1oTOFtZgEwu51KvJf88rxznzM0EuGS+50/3Fb7YTTOaJwiFBGrinuU8ccIVZ08c7AkHe4vYiORDkIU5Q+iaSuj3KFBZMO81IPSipbWjTQBhjLbLHfc30wrbuV6P/I2KOVwL888nJixk44Hs1Yzddo2g/jpvTfLerIHMd/KkwiIgAjkIkDIAIZK4bVDcGQ8Fat0ylJvRBRiqBWdFsRkpIOUd2snddBc56F0xiaBERF0xiZKnZUnQIGH+yIVYkzj7T2d/n7HMw3vIIyXCI0zWX8J+KmEi6a+7G9udDQRaE/Ax9KRR0Z7XukWCDkIOmYIYvtM88qjA2s0GZ01eKLinUOPMsMIFJ9hNF1B5VUEREAERGCsE5CgM9avcI3nhwsing4MNfJBgFGhGdpGQEiMdYgIeGf4RkK3Wcs5hXS3eRht+zHbDF4iDOlj2JSsvwTwzMFdF6Mx1C/vpP6epY42lgggSFgw47vuuqsjT8exxKHbc0mHsFaZabDbY9W93+KLLx4nsWBoHUMMZCIgAiIgAiIgAoNDQILO4FyLUZcTggwj3tBrh+sgfwTEpRHAMBuzww8/vBFbhBguDIfqxc4///xw6qmn9pKE9hUBERABERCB2gj4IM1459CpwTtSJgIiIAIiIAIiIAI5CUjQyUlznKVlgk7ZaVOJZRp5PGrMdtxxxxhA2X5383neeefFQLbd7Kt9REAEREAERKBuAkwYQCwi3oMMdS2awaTuPCh9ERABERABERCBsU9Ags7Yv8a1nSEzWOGKzVASAiITcJYAX1ReCRjGzGMEDZOJgAiIgAiIgAiIgAiIgAiIgAiIgAjkJSBBJy/PcZ0aAZDpjZSJgAiIgAiIgAiIgAiIgAiIgAiIgAjUS0CCTr18lboIiIAIiIAIiIAIiIAIiIAIiIAIiIAIZCcgQSc7UiUoAiIgAiIgAiIgAiIgAiIgAiIgAiIgAvUSkKBTL1+lLgIiIAIiIAIiIAIiIAIiIAIiIAIiIALZCUjQyY5UCYqACIiACIiACIiACIiACIiACIiACIhAvQQk6NTLV6mLgAiIgAiIgAiIgAiIgAiIgAiIgAiIQHYCEnSyI1WCIiACIiACIiACIiACIiACIiACIiACIlAvAQk69fJV6iIgAiIgAiIgAiIgAiIgAiIgAiIgAiKQnYAEnexIlaAIiIAIiIAIiIAIiIAIiIAIiIAIiIAI1EtAgk69fJW6CIiACIiACIiACIiACIiACIiACIiACGQnIEEnO1IlKAIiIAIiIAIiIAIiIAIiIAIiIAIiIAL1EpCgUy9fpS4CIiACIiACIiAC
IiACIiACIiACIiAC2QlI0MmOVAmKgAiIgAiIgAiIgAiIgAiIgAiIgAiIQL0EJOjUy1epi4AIiIAIiIAIiIAIiIAIiIAIiIAIiEB2ArUJOrPMMkvYc889w4wzzthxpp9++umw7777hg022CCsscYaHe/PDqeddlq45pprBiKNW2+9tWcWOXgOShq6riHYPT4o1yRHPnRddV3Twlrl8BCRQXvm9bzqeR26O1/9pud1iIie1yEWfMt1b6g+/CpXu79UDqscbn7S8j1rOdrAel7zP6/p9c71uzZBZ4455ggHHXRQV/mcOnVqmDhxYthyyy27FnROOeWUcMEFFwxEGldddVXPLHLwHJQ0dF1DsHt8UK5Jjnzouuq6pgW+yuEhIoP2zOt51fM6dHe++k3P6xARPa9DLPiW695QffhVrnZ/qRxWOdz8pOV71nK0gfW85n9e0+ud63dtgs5ss80WDj744DD99NN3nFeU61122SVsuummYZ111ul4f3Y44YQTwiWXXDIQaVx77bU9s8jBc1DS0HV9tUeCe3xQrkmOfOi66rqmhbXK4SEi9l7L8azlSEPPq57Xobvz1W96XoeI6HkdYsG3XPeG6sOvcrX7S+WwyuHmJy3fs5ajDaznNf/zml7vXL9rE3TIIMOtuhF0Xn755fDSSy/FfbsZssWxX3zxxfDKK68MTBq9ssjBc1DS4J7QdX31Hh+Ua5IjH7quIVjZlYPnoKSh66rryr3oTe/XIRqD9szredXzOnR3vvpNz+sQET2vQyz4lvPeUDtnqOxROTzEgvtsrNwbOa4rPOqwWgWdOjKsNEVABERABERABERABERABERABERABERgvBOQoDPe7wCdvwiIgAiIgAiIgAiIgAiIgAiIgAiIwKgjUJugg1vSmmuuGWaYYYaOoTz77LPhyiuvDAsttFBYeOGFO96fHYjM/cgjjwxEGo899ljPLHLwHJQ0dF1DsHv8/9s7DzBLiqoNF2EFJAcXQXLOILDEJa9kAclLRkFyVKIgoGQk5yAIIjkjaUlL9CfJEpWcJaNkWBH+fQtO77k1fXPfOz0733memc7V1W9190x9fc6psrRJEfVQu6pd05e13sOjiZTtmdfzqud19N357Zye19FE9LyOZsFcUfeG/h/+lqvdX3oP6z1c+aQV96wV0QfW81r885q2d1HLHRN0NGrOt03EyADKEv4tC2X1H/3YGosinpOylKHRGjRaw+g7fPT7r4iRFoooQ+9hvYfT+1Pv4UoiRY1kpOe18lkr4m+0/r7q72vl06rRkDyPsr3L9bzqefX3J/P29zVdX9SyBJ0aJA1+uw+mOhLfQrYXbrs8i/hnsYgy1K5q1/T1Yfd4Ef/Al6UMPa/6xyS9z4v626j3cOU7tIhnXs+rnlc9rymB0ctl+xut51XP6+i789s5/X0dTWRMfF5HX12xcx0TdAi1YshxQjmatc8++ywMHz48zDHHHGG22WZr9vC4/2OPPRbeeOONUpSB21u7LIrgWZYy1K4h2D1eljYpoh5qV7Vr+rLWe3g0kbI983pe9byOvju/ndPzOpqIntfRLJgr6t7Q/8PfcrX7S+9hvYcrn7TinrUi+sB6Xot/XtP2Lmq5Y4JOURVUOSIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgApUEJOhU8tCSCIiACIiACIiACIiACIiACIiACIiACJSegASd0jeRKigCIiACIiACIiACIiACIiACIiACIiAClQQk6FTy0JIIiIAIiIAIiIAIiIAIiIAIiIAIiIAIlJ6ABJ3SN5EqKAIiIAIiIAIiIAIiIAIiIAIiIAIiIAKVBCToVPLQkgiIgAiIgAiIgAiIgAiIgAiIgAiIgAiUnoAEndI3kSooAiIgAiIgAiIgAiIgAiIgAiIgAiIgApUEJOhU8tCSCIiACIiACIiACIiACIiACIiACIiACJSegASd0jeRKigCIiACIiACIiACIiACIiACIiACIiAClQQk6FTy0JIIiIAIiIAIiIAIiIAIiIAIiIAIiIAIlJ6ABJ3SN5EqKAIiIAIiIAIiIAIiIAIiIAIiIAIiIAKVBCToVPLQkgiIgAiIgAiIgAiIgAiIgAiIgAiIgAiUnoAEndI3kSooAiIgAiIgAiIgAiIgAiIgAiIgAiIgApUEJOhU8tCSCIiACIiACIiACIiACIiACIiACIiACJSegASd0jeRKigCIiACIiACIiACIiACIiACIiACIiAClQQk6FTy0JIIiIAIiIAIiIAIiIAIiIAIiIAIiIAIlJ5AxwSdscYaK/DTqn399detHlra48Yee+xw3HHHhe9973s96vi///0vHHDAAeHDDz/ssU0rRKAMBLh/vXXzGf3+978fBg4cGKjD66+/HkaOHOmrUnd+4oknDtNOO22YcsopwwcffBBeeeWV8Pnnn9c9Lt2BZ3f66acPU089dTz+3//+d6zPV199le5a6HLKvlrhjbTJuOOOG6aaaqp4DRNOOGF857zzzjvh/fffD40cb+dut02sHE1FQAREQAREQAREQAREQARaI9AxQWfrrbcOgwcPbq1Wo4668sorw4033tjy8bUOnHHGGQOdkW+++SY8++yzTXViapVbbxudsrPPPrvqbvvuu2949913q27XBhHoLQKTTjppOOyww8IEE0wQq0DHf/fddw+ffvppR6vEO+QnP/lJFGNM1ED8RJC55pprwlNPPVXz/DPNNFPgXTTddNNV7Mez//bbb4dLLrkkPPHEExXb8hYGDRoU1lxzzYp62H5ffvlluPPOO8Pll19uqwqdrrTSSmGTTTZpqMy//OUv4Y477sjdd5xxxgkrrrhivI6JJpqoxz7vvfdeuOqqq8KDDz4Y3409dvhuRbttUq1crRcBERABERABERABERABEWiOQMcEne222y4stthizdXG7X399dfHDptbVdjsCSecEPhijx1yyCHh1VdfLazsWgXhsTR06NAKD51lllkmO0SCToZCMyUjsP322wdEDW977rlnRz3KEHI23nhjf8qKebxiTj/99DBixIiK9baw9NJLh6222ip69di6dIowdd5554X7778/3RSXeWZ/+tOfhrXWWqumx+Fbb70VfvOb3+SW0e7K1VdfPay33noNFXPFFVeEm266qce+iGEIcPPOO2+PbemK2267LVx88cXp6rjcbpvk+82a/wAAQABJREFUFqqVIiACIiACIiACIiACIiACLRHomKBDR2yJJZboUSk8Y/hSbMbX7bzwieuuu67ql2Y7ttVpbwk6efXFY8c8DyTo5BHSut4msMACC4TddtutRzU6KegsuuiiYYcddsjOyTsCrxxEnDnmmCN7hyDIHHrooXFbtvOoGQTbI444IvMoYtujjz4aw6Mmn3zyKDb70Ee8j1588UVfRJxHSEFQ8YYXHSIwzy2ePz/4wQ9CNwWdjz/+2FenYh5B5957761Yx8Iqq6wSNtxww2w94Wb33HNPIGSM8DG8bgjFMuMdmXoutdsmVramIiACIiACIiACIiACIiACxRDomKBTrXp0xvbZZ59sc2+IGBJ
0MvyaEYGaBMYbb7womEwxxRQ99uukoLP33nuHOeecM54TMQfBhdw5WCow3XXXXeGCCy6I2+zXRhttFFZeeWVbDH/84x8rvHBmmGGG8Ktf/SpY6NHdd98dzj///Gx/ZggvO/bYYwMMMMLLTj755PDcc8/FZX7hwTP33HPHnDSU0QnzHjoPPPBAOOuss5o+Dfm5Zp555njcZ599Fg466KCYS8gKIjQNDyMTlxGF8Fzy1m6b+LI0LwIiIAIiIAIiIAIiIAIi0D6BPino0AmbbLLJ4hdlvox/8cUXTZFoV9ChE0cd6OQyJZEx9WglMWpZPHS4ph/96Efxel577bWmcqPg6UB74PmAvfnmm+Gjjz5qqk3ydiaBLclbMfJ7kMyW3Cd5VkSb0Jml00uOFjwwLEEsHXs6vHgzkHelWh2sXggA1BsmiBH/+c9/4s9///tf26XPTPG0I8wGe/7558Nss82W1b1Tgg4eL3jX0KbYRRddFG6//fbsvLA96qijsmW8TaiL9/Tbf//9w6yzzhr3QYA58sgjs/1tZtVVVw0bbLBBXMwrY7XVVgvrr79+3E7ZhGfynHfb2hV0uK/PPPPMTKwhNxk5ylLDIwovHIx3wMEHHxzn+VVEm2SFaUYEREAEREAEREAEREAERKAQAn1G0KFTss4664SllloqEw4gQOeazv6wYcNyQ7ToFNI5HH/88TNglj+HFYgw6Wg3lEcYR2oLLrhgWGSRRaKHgC+D/ej809njK/2tt96aHlp1uTcEnS222CIsvPDCsU4khaWTTr4jRrwxQ7ggfOPvf/+7raqYkiSXHCULLbRQFEHsy77tRFjISy+9FEjSCs9ahqcEIwdhiG0IKHhY2Do79o033giXXnppRSLcdtpk9tlnDzvttFMsHgEHMYeQQOyTTz4JJ554YrzflltuuawzjFh19NFH5wpWeJSsvfbamWdJLOi7X9xn//jHP2KibxJx17LFF1+8IjyGfbluktV20+CBMELbIqb9+c9/rgi96pSgQ84annWM53uPPfYIPsyI8C+8dLwhWHg+XrTlHs97JieZZJJw/PHHZ8WccsopMSzLVvzhD3/I3jV5Hjy2X6enRQg6ePWYQFYtP5kX7wgrw3vSrIg2sbI0FQEREAEREAEREAEREAERKIZAnxB0EE/oeNMBr2UPPfRQOPfccyu+1NMZrTWyVF55eJfQiUyNr/x8qa5nDz/8cAxXaMRzqDcEnbwOcd41IVIxcg9iWWpLLrlk2GabbdLVPZYJ7+AaH3/88R7bbAXhND/84Q/jIt4YP/vZzypyn9h+TIcPHx6FBVvXTpv8+Mc/DjvvvLMV1fAU4QABwRsizLbbbpt1mv02P9+IMJCXeJYktSSr7Zbx3Pz2t7/NRLVTTz01em0RdmPWKUEHjpZ/C6ENzxgzEjOToDm1a6+9NpB3C8ND6rTTTst2ScWebMOoGbZZ7hgv/OB5h6Bn5pOnI/oh0HmPINuvE9NU0CF8jDogRDfqFYgIiccbxvOMQJjarrvuGhBIMcRHBC2zdtvEytFUBERABERABERABERABESgOAJ9QtDZcsstw7LLLptdNd4CdPQIjSEnj/eWQYC4+eabs32ZwSPF8mCwbJ1F5p955pkYSsO8GSEyeUMQe/GABK14sbz//vuxczXffPNlHSbKwSOADmI9K4ugw3XAAk8mrsUSxuIhQb4NvGO8eUEHbxbjQSd3mmmmCfPPP3/m1UKn89e//nWFl4Uvyws6fj1iEB4xJNGedtppY51qCTpWh0bbJBV0uH68Nqi/N8Qo1pmYx/Ui+FlIFvXDI8S8ewitIgEvHlvwJOkseVZg2lcEHR9uxNDgxx13XPQ86oagg8fWPPPME5uAxL1/+tOf4jx88ZzDOyw1hg2/8MIL42oEGka/Mq+xc845J/ztb39LD4nbEXRsvxtuuCEO282OeIdZyBHCLPllhgwZEkeJYhvPBfcZ7yGGT//Xv/7Vo/yiVnhBhzK576gzUzxpeDZ5573wwgtVT8mw5wx/jvGM+pxErMPjCUHHvHjSMLd224RzyERABERABERABERABERABIolUHpBh6/KhEzZyFh0uklMamFS5G2hs2GdcEIz6HTW+nruwzH8l/d6aMkxgffOHXfcEYUGvz+dSDxWbGhn6kenv17elDIIOnQEYYKAgpF7hLoT+oQxpDNeAd4IL8KTBoECj5XUU4BwHUI2zPuBnB3k7sizVNCh7fAgQLwxw2OCoaPpOPv17bSJF3S4b7iP6KiTQHbGGWeMp7ZcIlwHgp7lCfLJvGFhQgfHc7+mnWvECEK3EAGrhbHZtfa2hw7C1e9+97soQNGuCHqIU/46qWunPHR4Jhk9CkOoMHF18803D8svv3xcD0NEDUL+MJbxIjLzHikILoQZpYZXGPeemRfb8FRB4MAQeLk/0hBAO45nHA8qkjN3wlJBJ+8ciDs8Fzw36bPI/ojeDFtOLigMMRxmXBuCIwKsiTmESnIPs49ZEW1iZWkqAiIgAiIgAiIgAiIgAiJQDIHSCzrk0iB/A0anhY4zyWm9pSNnpSPa+H2Zb1XQSctJl0mCy6g4ZoSspJ4tts2mZRB0EDDwhPHmudOxo3PbSAiZLwPPKEQMjDA0vCbyLBV02I/9i7BabeIFnSeffDLLp7LZZpuFFVZYIZ7ee374kJTf//734eWXX477+OGcuUcJ4/ryyy9brn5vCzreG8Mn0O2WoOOfz6uuuirgOYPIuN9++0XRAeHxwAMPDAwpTk4tDAHt8MMPz5jjEYZXFIYXC7mAaBtv/v5k/YgRI6JYzDz3LdtT4xlABMHjz8Q99qFshK9OeOqkgg7Xj8DtvQ6tntXy47AdAYzhyy3Rsx3jp4jViK/ps15Em/jzaF4EREAEREAEREAEREAERKB9AqUXdLbbbruYsJdLJfTF57Xwl+9FgVqdGo7xnZNmPHT8+egc0aHjx8KT2E5n2IyReOolwO1tQefFF1+s8FKwuuOlgUeKGZ1VGzba1vkp3jx4U+EJYF/6CcuyDvc///nPcMwxx/hDsnnfdoSw4B2Cp0uz1mybeEGHkBxCc7B11103rLHGGnHe30t4YHFNGPlFyDOCpYIiAgT5XPI8JeIBdX6RpNqEMNsV74vU68e2FTmlvX7xi1/EIhFOCTUycapbgg4JfM0jj7BFRAbuP0ZhwxiiHG+YTTfdNKy44opxHUIKIo8Z4VFDhw61xUDoFiFZ1iZsJ/E294wZ3n949mC0P/eBN3J0MZS38Vh+lLcQdbAyHnvssXDSSSf5QwqZ5z4dPHhw9JRDeLTzM8oe9SQc1eqA+EqoWJ6wxH4kPs4TgqyieCJxjVyLtyLaxJeneREQAREQAREQAREQAREQgfYJlF7Q8cMP//Wvfw1XX3117lV74ScvRMgf1Kqgg1BBaAKdqFlmmSXrRPmy/TweA/U64b0t6NAxpoOcZySWtc4fHdW0k0cuE7xJ6Cj6EbLyyrI8LHnbvKBTq43zjm2nTbyg48Nt/Ig+jPR10003xVP7XE5+RCTCqRB4jBU70zFGzGPIbEJYEM5SD5
G86+nNdYhx5KghvA0744wzAiKGWbcEHURbqwPhUogUeONgMEV0QfDbeuuto9DB+tRDB0GIaxk4cCCbo5H7CO8a2gsxJDUftoVQhFhjRqgcnmxpG/o6sI0QQBON7NhOT1OPLu5X7ltv6T6IdYhciEO0O88wXDCug2efPFBmRbSJlaWpCIiACIiACIiACIiACIhAMQRKL+j4oYMZApuv9XnG13i+umP+S3vevq0IOggHjPTCaEaNGolkETJqWW8LOt4DJa0n4WOELGEMWY2XiBm5OAh/8yKGbcubph1uv48XdEiAS0ezEWu3Tbygw33F/YWtuuqqYYMNNojzfuQj7xFCvhYEADM6xOR4MU8JW29TBB4SZdPZTkUB26e3p3jmmEdVOsoRdeuWoIOHlnnjEAZFgmS84BBKLJ8P9WHku4UXXpjZKD4gsnkjTIsRsfLEG/ZDzMDrjP2w22+/PZAMGPNhdCxXe/dwLKKzGR5N5BvqpnHP4WlI4nAs9RTCew7vOMuJxbPIu8mHVfGcU3djxTXg8WT3alFt0k0uOpcIiIAIiIAIiIAIiIAIjOkESi/oeFGhWqeKRiKUgK/QGF/xCXeqZq0IOmlODfJYEKZDaAOJki2BKJ1MCxfpC4IOoUEM+Zxn/qu8H/WGDiR8rfPHsTAnJA4PCEuuTKfYBIJGBR3a5oknnsirTo917baJF3R8Z74VQYfKEXrFPciIQZYMOq20F47Sbb29zP1qI0i99957PcJ2CKkzoYW64n1E8m/yRKUeIe1ci89/48tJkxsjKCIyYdU8zRAxCJ1iP5Ig82xyjyK04pVFThkThbw3Vipe+RA7Xye8W7hnzXhmeA66bd5T6J133on5hqwO/j5nnc//ZPswZfQ/RGsznwOsyDax8jUVAREQAREQAREQAREQARFoj0DpBR3CHBgxCavlTeJDrh544IFAzodq1oqgQ0JWcptgfNVH0DDhws5DJ94Pl9wXBB2f9Neuw6Z4oTDsNuY9UvCY8LmCyLlx77332mHZlFGw1lxzzbjcqKDTCDM7Qbtt4ju6RQg6Vq8BAwbEEZEQBRC18GYyw+OBEbIY8rps5gWdZurG84DnTFHmn2UrE9EILxQTThEVqS+CClbr3WBlIOawP4IOxjLhW+aFxjuDdwfGqHmEbJmRTwoBKzXClBh1z8w/J7auG1PvPYbAzCh1ZiuvvHLMF8Qy/PBaMs8b24dpmjfLhxV2qk38+TUvAiIgAiIgAiIgAiIgAiLQHIHSCzrkpKBTjD3yyCMxt0PeJfphdf3IPHn7ekGHcB/ym9SzauKGP46hrvmqbdaIOEGeEgQAjLCGV155xQ7v2HS33XaLXiScoFp4Wjo6lP+q7zuItTrzflSoTgg67bZJpwSdtOEIxyL/jpnvKNs6P0UAWnrppf2qcN9992WjalVsKHDBhzc2Uyz3LPdunpFE2nv1ICiQJ4mhvquZH2GNfRAfEFS4h8xg5JMg1xvZzo7z0zSsynukINTgbWMhdF7s8WUQ5sSzYUZCYoa6zzPLOeW3kSycRMftGkPIzzvvvLEYhCef0NznhIIl4kwjgg5JwvFCxLrVJvFk+iUCIiACIiACIiACIiACItAQgdILOgyxu9pqq8WLIYcGrv/kI/GWdu7OP//8GE7h9/HzJCueeuqp46pGh8hmPxvNqppQw5fvQYMGZaeqtl+2w6gZnz+mlU6pL6vReS/okFx2n3326eExAnMb3ph9dt9990BSWcxvqybU0Imnc2sd4mr7UZ5n0AgzjsHabZNuCTrU1YuI5557bhRoWJ9naQJb9rn44ovDbbfdlrd7YevwXjGPrLxC8TgitMeM5whvEMKu7N6wbTbl3iIUzRvPMEl5qxmhUdwTZnkJtUmSzHDe2MiRI6NHis8JY8dWm5L7iZAt3h1YnrDJPU8SdMwPax9XfPcLDzQ80cx23HHHbBQqW2fTueaaK+y11162GKc+GXfFhiYWSNCOt5o9a9wn3C9mvJN4N5nhlcT1poaI+POf/zxb7ffrRptkJ9aMCIiACIiACIiACIiACIhAQwRKL+ggDOB9QwJc7OGHHw4kEraRZMiRQXiBJTalU0dH7dNPP60KgHAhwoYwkr/S2bbyqh2EODH99NPHzSTDZRQYP7T2CiusEEfFsXqyYyPihBdXyMdDJyoVrKrVqdX1/pyUQc4PvF2MAZ03GFo+lTTJqhdC+NJP5/vll1/OqsNQ7jvvvHPWWWZDJwSddtvEX0c7IVd0qJcfNYQ1YWfkEkpt7rnnjiFqdm/Uuy96S9BJ650up3ll8Ar58MMP090qllsRdCjAj26HYIMXjA3FjUCEMGICRrVR7WafffYYRvTggw9m9zZlI1wh1piYw7q83DepEJIKroRlcX0W9kVYmPfQo1xvrQg6hHmSA4hr5Dpg4c1ENsKlzNIR6VIxBq86vLH8e4bjCQW08DM8qHbZZZcKT6oi2sTqqKkIiIAIiIAIiIAIiIAIiED7BEov6HCJPuyKZcQDEueSs4aEpuZtw7YbbrghXHXVVcxWtTR8AG8BhpY2EYgcGyRg9eZzVLCeYxCDmNIxtHAHf0y9jjv7elGBZbwdCNmwkBRGfPJDR7NPu5YKOpRnTPHQIDmqdVLZlg6/jmBDSIcl/kXUIXSEDi2WHs+6Tgg67baJZ9+OoLPMMsuErbbaisuMogMsuIcQxLg3yAFl4gP3GAKk5YKJByW/JOiEGHLmvUV4LgiN5LlAILPR1RBVyWeVl99mk002CSuttFIULni+8SJChEGYtXsX9K+++moMGfMCLevZB+HDPwsM5U2IGW1Lwm+rB/sjDNdK6N2KoMOoer/85S8pPr4bELVIWE1dEWq4v7wheOO5lpoPf2Qb9yEiEYLcVFNNFa/FPBDZnpdkOvXgaaVNKFsmAiIgAiIgAiIgAiIgAiJQDIE+Iejw9Zgv6nRgahmiAZ2qNFlxegyiBV4l9jU63Z4mFWX7RBNNFDt95rWSHsPyu+++Gzt61jFqRNDhOL6EL7TQQsz2sHr5gHoc0MCKPEGn2mFp+Ibtt8Yaa0TPAVvOmyLwWP6UTgg67bZJJwSdPA62DuGLzrYf7ty2+emYJOgg/HnvEa6TZ9l7h/hr9/M+qbZfb/PwZJh78gvlmQk6edtsHeFc5LGq9s6YYYYZogA3ySST2CG502peQn5nBCCGhvdWb9QzL+j44/LmEaZ45+SxRZTCo63aO8+XRzlHHHFED28g9mm3Tfx5NC8CIiACIiACIiACIiACItAega4LOnxRtmSmdMgInbBRZ2pdCl/CN99880AHx7wdbH++2iM84JlDmY0Y5dHBYuhrOpw+dwjCDOEHqSEobbbZZtFDwG/D2wLvgTPPPDPwJZxOIEan6Pnnn/e75s4TijNkyJCYf2e66aar+OqfDtWcW0CTK72gc+WVV8YhnQlB81y//PLLwDY8V/KMffF+WHvttQNhb9746n/LLbeEt99+O4pVbHv66acDQ9DnGSMkNcvMymmnTfD0IJ8L5oUzwudoZ8znY/L5nLxYh8fHWmutFfDAIJlunpGzhLbMC
8lK9/ceP7at2khitr0bU//scs/jaWRebXnnZ1j7Y445pmITHi4khW7UyNeEwJUKqTyjV199dTYqVV55hEwNHTq0x7Hs+8EHH8QQJu7xeu8M3g/bbLNN9LRiZCxvCEHkq0HQqWd+aHH2xcuGZNIIKNWMROsbbbRRDCn1XkV+f9qAe2v48OE1r4V7EyGWd01eWXjcMFoYz7yFX/rz2Hw7bWJlaCoCIiACIiACIiACIiACItA+ga4LOu1WmY4IYRMIH3QqCU9COKjXKWv3vP54OniMbsNX+zfffDOGYFiIlN+vrPNe0LnoootiB47OHjlHJpxwwsgU75pGmCKEwQJhhfwehLb0xpDcZWgTRC7uTbwgTIAgPIYfBIT+ZqlHCgIGXiLkcGnWaF9C13jm8fZqRATmHIileIlNOeWU8XnF+457lGmzxrsH8Y7yEFG4Du71Rp4TzkVoGOFNZrVG7bN9bIoAzXm5r3hGMc791ltv1Uwwbcf7KR6E8Bw4cGD0PITlO++8E+9T+DZqrbZJo+VrPxEQAREQAREQAREQAREQgdoE+pygU/tytLURAnmCTiPHaR8RaIYA4UWIOmYk9cWLrT8aQg6CjhkiEOKW5Z2y9ZqKgAiIgAiIgAiIgAiIgAiIQKMEJOg0SmoM2k+CzhjUmCW+FMKtCLvCEDAItcSjpD/a4MGDK4Z8f+CBB8JZZ53VH1HomkVABERABERABERABERABAoiIEGnIJB9qRgJOn2ptfpmXQnHISGyWSNJg23fMXHqvZUQtw444IAYKjomXquuSQREQAREQAREQAREQAREoDsEJOh0h3OpziJBp1TNMUZWZsCAATFhNjmWyJ0zbNiwOArcGHmxDVzUvPPOGxZeeOHoqUTeLUa3komACIiACIiACIiACIiACIhAOwQk6LRDr48ey2hN888/f6w9IwWNGDGij16Jqi0CIiACIiACIiACIiACIiACIiAC/ZOABJ3+2e66ahEQAREQAREQAREQAREQAREQAREQgT5MQIJOH248VV0EREAEREAEREAEREAEREAEREAERKB/EpCg0z/bXVctAiIgAiIgAiIgAiIgAiIgAiIgAiLQhwlI0OnDjaeqi4AIiIAIiIAIiIAIiIAIiIAIiIAI9E8CEnT6Z7vrqkVABERABERABERABERABERABERABPowAQk6fbjxVHUREAEREAEREAEREAEREAEREAEREIH+SaBjgs5YY40V+GnVvv7661YP7epxyy23XFh++eXjOYcPHx7uuuuurp6/lZMxbPmgQYPioVdddVV44IEHmi5m7LHHDscdd1z43ve+1+PY//3vf+GAAw4IH374YY9tWiECYwIB7n9v3Xxf8V6ddtpp47P3ySefhHfffddXpeF5nt3pp58+TD311OHzzz8P//73v8Prr78evvrqq5plTDjhhOEHP/hBmGqqqcKAAQPi+d9+++3w8ccf1zyuGxt9u3zzzTeBn0asVRaUPe6440YWcIQN77133nknvP/++6Gb90Uj16l9REAEREAEREAEREAExiwCHRN0tt566zB48OCWaV155ZXhxhtvbPn4bh248cYbh5/85CfxdLfeemu45JJLunXqls+z/fbbZ4LO5ZdfHm6++eamy6LjdPbZZ1c9bt999225o1m1UG0QgRIQmHTSScNhhx0WJphgglgbOu277757+PTTTztau4knnjhsscUWYa655grf//73s3P95z//CY8//ni46KKLwn//+99sfbUZxNw111wzikJeAGH/L7/8Mtx5552B94I3hIrFF188LLHEEmHWWWf1m7L5V199Nb7/nnnmmWxdN2fWW2+9sPrqq2enbOR93AoLO8E444wTVlxxxchyookmstXZ9L333gsI5g8++GDDwlJ2sGZEQAREQAREQAREQAREoAECHRN0tttuu7DYYos1UIX8Xa6//vpwzTXX5G8s0dr+KujgJTB06NAKD51lllkmaxkJOhkKzYxhBLwgape25557dtQjDRHp17/+dRRh7Jzp9B//+Ec46aSTwsiRI9NNcZln9qc//WlYa621anpPvvXWW+E3v/lNRRn+PVexIWehN97d0003XTjooIOCF6jwlrzgggtyahji9bfKggI5DyLevPPOm1u+X3nbbbeFiy++2K/SvAiIgAiIgAiIgAiIgAgUQqBjgg4dAL7mpsaXZb5smvFFOK8Dct1114U77rjDdivtlE7BGmusEeuHRxH1Lrv5DmmrHjp514jHjnWoJOjkEdK6vk5ggQUWCLvttluPy+ikoIMQc/DBBwdECzNEF8J6fvSjH4Upp5zSVoe///3v4dRTT82W/UzqwcI2wrXwrOG5pXxCqRoRdPAKeuONNwIhXxxHPcwIczr++OPDU089Zas6OqXu+++/f5h55pkrzlNL0GmHBSdZZZVVwoYbbpidj5C1e+65J4atEXqFdyqhWGYnnHBCeOKJJ2xRUxEQAREQAREQAREQAREohEDHBJ1qtZtjjjnCPvvsk21Wxz9D0bUZCTpdQ60TjUEExhtvvHDooYeGKaaYosdVdVLQmXPOOcPee++dnZOcV2eddVZcRjTgfTrLLLPEZcK/2Jd8ON4IDzv22GMD14ARHnbyySeH5557LtsN4WjuueeO+WDuvvvubD0zCPQrrbRSePTRR2Mo7Msvv5xt5zhCjzbZZJNs3QsvvBAOP/zwbLmTM0OGDIneguk5qgk67bLgPOQIMwHps88+i95BH3zwQVaFmWaaKXo5mcB97733hvPOOy/brhkREAEREAEREAEREAERKILAGCHo8IWaBJ0YeQv4x7paMkw6H+Q7oFPGlASWfJGulwi0CNjVyqCDQQeATta//vWvhutCIs/JJpssTD755LHoN998M3z00UfVTpOtzxN04DLNNNMEQjtIjNpKgtN2PXTo/FAHrgnvg1aTvWYX2saMtQkcaJNGk5u22ib1qkpHfODAgWGSSSaJbQwfvNuqWRH1oB3wvMCDw+4H7hPWkVOF9Xgm1DLalHuKZ5Rr4P7kmWvkPq1Vbm9s82FHzz//fJhtttmyanRS0PnFL34RllpqqXgu3m2ICb7tf/7zn4ell146qwt5W2644YZsmZnVVlstkAwdwyPykEMOie+9uKKBX7xjaD/eldVsp512CgsvvHDczDlYbvS5qVZmvfW8xxHZqBvCCp5DJIzGqgk67bLgnj7zzDMzb0Q8M8n5ltoOO+wQFl100bj6tddei15W6T5aFgEREAEREAEREAEREIF2CPQpQedXv/pVHJWFC8aFnU73RhttlK0zEIQCXHrppRUu/wsuuGBYZJFFAiETJBf1RqeDjgpfpUmkWcvoWBFmlWeEiNU6ng6VJYqm00UCUxIqM9KMfcllhChyLlxxxRW5nSE6x3TeFlpoofiF2I6z+tDxfumll8Jf/vKXKG7Zej/1gg7nobNGviPPBTHl2muvDX/729/8oTXnWxV0yEOx7rrrxtANH6ZABw2PAJK9fvHFFzXP3epGwiPwEkOooLMMt8033zyKFsaWzjNJZ88///xcAaOINrH6
w2LbbbeNiySXhSn3OPcNIo0ZAuT9999fkQi33XrssssuMeEtzwOinuUHYZnErjfddFPYY489ouBGPVh/2WWX5d7zhFauvPLKAe8JntPUuL8eeeSR2BGmnFp24IEHZudkPzrtv//972sdUvg2vDEI6+Ge4D7585//XBF61SlBh1GkTjzxxMyzhncM96gZCYq5f+1eZX1eyNQf/vCHTPjlPce9XLThpbPppptmxeI5hMDeSdt1110D73YMLggoeDRh1QSddlnAGg8p3hlYtZxBXgDkfqedZCIgAiIgAiIgAiIgAiJQJIE+JegwsswPf/jDeP108n/2s5/ldhbZYfioIcTpdJkdeeSRMT+ELVebPvzww9E1vpqAQL4cxIc8qzeqiv/S/sorr1QIOWl51TojSy65ZNhmm23S3XssI4YgBiBEpOYFnXRbulyts5Lux3Kzgg7iDSIXnX7rHOWVy5DIJHut5R2Qd1wj62Ya5RmFYGCGx0meAMF2hEJyg6ThLEW0iZ0fYY2E4hhhKwgryy23nG3uMUVgMW+Xdutx8Kg8LYiLzRoJdH3bIDzRmYdtPUOkRaCpZaecckpFm3BvIz51y+jA//a3v83YkKMGbzofBtUpQQevrCOOOCK7VMKkRowYEZepF4mAfW4dNiCQcQ+ZUIYnIqKQGd45eFdhCG+Ig3l5zGz/RqckW1577bXj7pwbD50iyq12fsQbvGAwPGB+97vfxcTRtQSdolgcffTRWe6iYcOGxQ8IaT292ETCaoQkmQiIgAiIgAiIgAiIgAgUSaDPCjoeAh08wo1Itoy7PR3K4TUEHcQURIL3338/dmjmm2++7J9zyq0lzODhs/zyy2enJ3eFebbUOo4DvKBjBSAOPPvss7HOfGm2L+10sugwEp7izXfaSUhq10LHiXCl+eefv6IMRsaxcBkrJ0/Qwevgn//8Z6wHZRDCYEYuDMSFetasoIOn0zrrrJMVSzs++eSTsb50Uq1jxg4vvvhizMlRLZQuK6TJmVTQ4XA6o08//XT0xKB9fYf5oYceCmeccUbFWYpoEyvQCzq2jil1IsyK+4HEtZbHpZqg08q94QUdhC2S2uIJ5r2mEI8IN2K93auM4INXmVmaMJZn87HHHoude0KvYG5JdPuCoONDdGBy3HHHxXuzG4IOYV377befoY2ChQmKvl7ZDt/NMAKTPfeIdLQthlCNAIeIigcW23imeBci8jCyICGGrRjvGnLwYC+//HJHvagQXRH48Uqj/ohevKNoE3tv5IniRbEgXxA5hTDevdQF8dWMvxMIOiZU8wHi9ttvt82aioAIiIAIiIAIiIAIiEAhBPq0oMM/0oRWId6Y8QWWL8V0Svx6vuTSGSVkgQ6mNzqseL0MGjQorqYzS0eZkKh65t3qmxV0EC9OP/30LJyIDhZf+s3wMPLXwHo6K3gmETZBGEya+4fQEFz7rRNObgdyPHhLBR06XyRMRVDB6HDT0ebrPUZnHA+ZetaMoEOYF0KRhRGRnJW8FNZZ5VyEt5EfxDpF5557brjvvvvqVaOp7Vyr99BBODnnnHMCiWcxRAtCoBBaMLYTeuPz+xTRJrHwUb/yBB0EP9giupnRYSTEBXHJvMnarYcXdC655JIobP74xz8OO++8s5024C1DGJwPryEsD2ZmvlPNvcMxcPNGXSmb0MN6Xhy96aGDeIbnB/cpzxoeMXgjUf9uCDrkpMHTxWzHHXeM+XN8vWDLu2LLLbe03WI9TWBAKEZcwPCGQuip5onFOw+BDjGkGUuT3VfLK9NMmbX23WKLLTLPNUIP//jHP8bd/b2XJ+gUxQIRH9GM9wdGqCwjjMGXME5EcXtvEQKL4MQ+MhEQAREQAREQAREQAREokkCfFnQQQwiRKsJIAIuoYUaIBSE29axVQYfOIcKLFzA4l++Q/PWvfw1XX311vSr02O47O/CBk7dU0KEjT8iCN//1ny/gfH2vFxrTjKCz2WabhRVWWCGekg4moop5FPh60Elddtll4yrzjvDb251PBR1EMoQlb3grEWplXkvkkiH3UDNWr02srFTQoVNOiEwqiNj+zU5r1cMLOngc4BVFx5V8VWbm+eG5EdbnQ3oQQMwDpxVWdi6b9qagg7A5zzzzxKp4kaJbgo4XzhAEfvnLX8a6IDjjWYjdfPPN4c477wxHHXVUXOYXXkQ2bDghe7R7agiBPNPc15ZYnX241xCuGvXUQfjlfYnIhOHtw7IJjXFlgb8Qj3hPIpggQuNxZGGH/v2ZJ+gUyQKxF280Szadd4l8QEBU7xSLvHNqnQiIgAiIgAiIgAiIQP8h0GcFHcID6Di2EoLDP+J0YPgxDxGanM6bGZ0jPCPqWauCTtoJtvP48vBGwSullhF6QBgLHW/7IkwIkI2KQxjVMcccU1GEF3Sqjb6CwEXOByuzER7NCDp0wmyoZUIlUoHEzss+1mFC8EFQKNK8MEG5CDd4TqXmw+X4Ek8elWrWSptYWamgYx4xtr2ZabP18IIOYiNeSIQx2hDZnBtvJTr8dN7JS4XxnHgxwYfeIFhyH/kcO/GgJn5tuOGGWVgjh3EfkIy508YzRLtjXAf3rI0u1S1Bx+fsQrwgd9Diiy+eCTu0EeIJoowX3rzY7cswZoQOMoy2XQ9hpCQ0tjC6Rr3y2B8PLktMzPuY9w0JvTtheB4icFouNfMks3PVE3SKZIHQzPvahF6rg59yr8IZnjIREAEREAEREAEREAERKJpAnxV0mvVeQSDADZ5/6BEJrONSDWijeWO8ANNMyFU6Wo3VY/XVVw/rrbdeXPShBLadKXkjGB2LDgXDR9eyPK8WL+hUOwdl0kG0/ECE1NQb8aoZQYfkrhbSVav+fhtCwm677ZaFhvltrc6ngk61nC6rrrpq2GCDDeJpCFFLR1lqt02s/l7Q4XoJFUzD6mzfvGk79fCCjudg7Uo4DvcO5j3aUlGQvEh+JDg6+eR6IqwO8Y5pPW+vvGvr5jrue4bDJoQTI7QNEcSsW4KO99Cx+x/vKYavx/AqJN8TOZW8cOs9dHwZHEP+MIY+pzxvW2+9dTYKX6P3HiIQ5ZvhLXT55ZfbYuFTki4TUovlea/VE3SKYsH7l3e/GYLfPffcEwUy7h3ezfZ+g+Vpp50WQxVtf01FQAREQAREQAREQAREoAgCfVbQ+dOf/hT/gW4EAmIOngV82W7UfIeo1jGtCjokH2UEqdS8cJAXLoUAQael1ldhXyYdaMQpb17QqRUS40cGy8vF48tk3jr+zJuHB/OppSPNpNtrLeP9kYap1dq/3jYv6CA8ENKSdnQpw4dqEN5ByItZEW1iZXlBh+vkehu1duvhBR2EMxIrY3h74MlmHiKso9NqHiF0rAnRMePeJISOJN15Bme8esj7kuazytu/N9Z5j6y8EYq6Jej4kZzgQG4ne495MZbE3XiumNGWFkaZlsHw3gjKqTEEOvmhzPBIquVZ5cUVjiFcEW8u2rcTxv3EdVl+sDyvwXqCThEs8HxDPGOK8Y7l74UPq0LwhJ8lL4cjz0Teu6UTrFSmCIiACIiACIiACIhA/yD
QZwUdOpNPPPFEQ63kO+McQMcUbxNyRNA5t2SVJB8lxATrtKBDbhy8jFKrJejgVUQnxjoJHEvHmPAtPB64LoxOi4Vc1RN0fF6QeLD7RafFzlWkoOPFAE7HCEmEhtUz2olwqCI7RV7QoVw/3LOvj/+yz5DVlmS2qDaxc3lBh2SqeIk0YkXUwws6XB/XiTUr6HAMAhD5RbgPGXo7zwj3QQjyCabz9uuNdTz/eDth7733Xo98MoQ5Wp4g9sHriGTq5N1KwwfZ3qqlyYatHEJ58LIx0a2WwJRuIwQOkSq19LkkLxLvljzzzwPbKY93cjPeZHnl1lqXvsfz6ka+IxN8uH95/2G8vxAei2CRJgrHWw+vvdSWWGKJ+CHB1jeal83211QEREAEREAEREAEREAE6hHos4JOo4ILABj2l+F/Mf6pRxQx8SOuHPWLTgAdVwvFarT8Vj10WhF06Kz4PD/kZrj33nvtErIpo2CtueaacbmeoMPxlJNn5JOx0I6iQ67IQTP++OPH0zJCDd4GvWFe0OH8lvQ3rcvKK68cNtpoo7jae6QU1SZ2Pi/o5LWd7ZdOi6hHkYKOrx+iIGGOjOKGd4n3Lqt1//kyuj3vBZ1mzu3vjWaOq7YvuWIIsUqNxN14xJgxhDZDaWOpMIlnixcG8bxDgEqNECFCIc14RskXlRoinR99DjED8dd7qKTHFLGcCjrNlGnXUgQL/y5AZMbjMU9k9nmmqGs7+bCauVbtKwIiIAIiIAIiIAIi0H8I9AtBx4sH9o992sQzzjhjTC5q68so6PiORK2OI94VlqQ0TxTwIVd522FAWBSCjglceeENxsqm5BkZMGBAXCRhNXlTqhnhB4gpWLP5kOJBBf1KBZ2jjz46N6GrH23LJ7Quqk3scloVdIqoR6cEHbs2pniBEK7G84al+XfiyuQX+XhMWGQTXnV54YrJYW0t4sXiR35qtDDuee79PLP8Rn4bnml5SbhtHzydEFnM64T1/v6z/QiRYyh7DI8nQh7NEGrwtrFnmbAoQrdSm3baaStyQ3E/WNiW7csw6uR1srLwSOKZMU8h26+RKcnbvZcT4gjvAnI15Rl5afzQ7Hn7VFt30kknxcTERbDgfiRPFJaKZ/78qaDTiCjuj9e8CIiACIiACIiACIiACNQj0C8EHQsZAUY1ocaLHLX2S4F200PHDyVeTYihg0RHzDpceful10reDBKlektDKhrJXYMngY0+U8/rxucowVtqn3326eE15evTqflU0MnzGKEzjbhlSU4JEbv44otjlYpqE7u+VgWdIurBfTP99NPHqrQbcmXXkzdluHqGrccY4pq8J7WsN4YtR3gyD7K8uhG6QxJhM/JUITQRdlVN3JhrrrnCXnvtZYfE6d133x3OP//8inXpAmGA3BdmiKF+SHHEV0QV83xC7CJHlzc8z0gKjyEgcT+nhlcf3n1mO+64YzYKFuvwsOK+MHEJ4Qhvn1YTXPPME1LmrdZ7hnBYCwH1x/h5wmbtHsa7yEZD4z4zL5p2WQwaNChLDs65q4nASy+9dPRksvpV28+2ayoCIiACIiACIiACIiACzRLoF4KO76jyTz4jjvjEnXQwGa3FhsoGYjXhJwXcTUHH526gc4KA4nM34FHAEMLm+UJdGxF0GAkLLwD7Ms6XZTpb5qGQN1JWyoFl7yVAh5MODLk+8gxvAJK4mvD06KOPxo5tuj+dOLwCCLdgBJ1a3gx556m3LhV0YIA3g88x4tuY8rz3UVFtYvVsVdApoh7+OWlH0Bk6dGjMO0MYneXhseujvffcc88w99xzx1V596fta9PeEHTs3NWmaS4WrunDDz+stntc36qgg5BC+WY+GTI8ee7mm2++uJn3GiGmaV6iVIRIBVdCkXjmEbIwPG/I+WJGwmTCPU00ImE3Yg75hVq1ZgWdRs5TLykyZbTLIg2Dw1sSjy7/7uIdipcUyZEx3isMOW/v2LhSv0RABERABERABERABESgTQL9QtBJh9alM0KHnSkdejpMqeUJOnRCLZTJ9uert3mlMJJJmqj5sccey8QB75XSSg4dBBY6UfaFHFGHkA06XxhJOK1DZvXL6zCnHjrsS2jFiBEjolcCooIlhGUb+TEaSVrsRQWOw1uBcq0Tw7C+fujnVCihQ4TghrcQoSZwpRNsnaJqo/NwrlYtFXQoh/qSNPuDDz4Is88+e8X9kYpbRbWJ1b9VQaeIehQl6Fi4Ehx5Hrg/EeaoI8+QtSfXjAfFLbfcYpefO+3vgg6iDR5AiARmCLm8v1jHCFdm6f1p63ln0C7+/YCISogYzzp5cUys4Zg06TyCUvqerOaJZOekDBJ7V7PeEnTaZcH1+LBWlhEuEdoQ9aaaaqrIk3eY2V133RUuuOACW9RUBERABERABERABERABAoh0C8EHUIS8KrwIkVKjy/abLd/wvMEnQ022CAwClUzRmfV3P7bFXQ47xprrBHWXXfdmlWgA225KRoVdKoVeOedd4YLL7yw2uYe6/kKvdBCC/VYz4p0RC1CWghbYVSuRqwbgg55PGyks7ROhJZwX5iAZtuLaBMrq1VBh+PbrUfRgo5dU7Up9yYiw8iRI6vtEtePKYIOognvAG8MH859Xc8QbvA+qRVyhMjD/Zl6RVnZM8wwQ8xf5PMR2TY/9R5Atj5P0LFt1aZ571C/L+K0F6nYVi0puT+u1nwjHjoc3w4LjkcY43nx4iTr8+zVV18NRxxxRN37PO9YrRMBERABERABERABERCBWgS6Luh4jwg8TMgp0WgOBoY45h9xjH+Qn3/++VrXVrENbw/ydlioh22kA//iiy8GRo3hq2ut8tdee+2w1lpr2aENTX0+C85PeBd2ySWXhFtvvbVHGT53zX333RfOPffcin34Ws+INtRlggkmqNjG12EEJDxcEFawp59+Ohx77LEV+yGiDB48OK6jHoSP4HlE2WYMKX3ttdfW9Z6w/W1K2NqQIUNiWAOeA/6rP3k98pLZIuhsuOGGgaGg8wwBBc8d8o3gNVOk+fuRjjD5TLbYYouYFNqfB/GBXEx4RaRWRJtYmYhhtdrO9subtlsPQkTwSGLoacJ4bNQiPC3owHLt5DjByCdEHha8HVLRkKSxiyyySPQc8WGMVmc8t7j3hw0bFr24bH21qXn82HbugTQXjW3r1tTfN7xDSPRcTUixOvnnjnWERyE00+FvxHg+tt122zhimBcdEcTwzCGMCra1DAFlm222CTPPPHMP4ZJcVuSGQtBJjdw0hD42Y7Xe0QhTeP55w2MI8a4d854z3F+XXnpp1eJaZWEF8gwgovK+M69J28aUtuB9d/vtt3d0OHd/Ts2LgAiIgAiIgAiIgAj0LwJdF3R6Gy//xJO/ha/Ub775Zgw5sJCg3q5bM+fHu4XrQKiiQ0doA4k/WzXEIYZ2p+NOmBQiiiURbbXMZo+jDngWTT311LEDxPXwkyeiNFt2tf19x5wOOR1CRAjYso0wMNj6/BjVyiq6Taqdp976stSDe2ngwIHRi2HCCSeMSYPJufLOO+/0S28FRoojHMfskUceifm8bLnRKV
6ECDK8y8hVRdgUolIzhgBB8mCeN+578sD4xMHNlNXKvqm3EuIWHi/Uo9vWLgvag7bgXscblA8U3OPc6822S7evXecTAREQAREQAREQARHo2wT6naDTt5tLtS+aQJ6gU/Q5VJ4IIOQg6JghliJgpOF7tn1Mn/rwU671wQcfjF6SY/p16/pEQAREQAREQAREQAREoEgCEnSKpKmy+hwBCTp9rsn6ZIUJcfTDnD/wwAPhrLPO6pPXUkSlCbeyfECIWwzFTlJ5mQiIgAiIgAiIgAiIgAiIQOMEJOg0zkp7joEEJOiMgY1awkvyHikIGAcccEDMdVXCqna8SoQnkRDZLC8Js23TVAREQAREQAREQAREQAREoDoBCTrV2WhLPyAgQacfNHIJLpEhv0kqjJhD7i5Gt+qvNmDAgJjUnVxP5M4heTGjDMpEQAREQAREQAREQAREQASaIyBBpzle2nsMI0BuExtVilHCGGpZJgIiIAIiIAIiIAIiIAIiIAIiIAJlJyBBp+wtpPqJgAiIgAiIgAiIgAiIgAiIgAiIgAiIQEJAgk4CRIsiIAIiIAIiIAIiIAIiIAIiIAIiIAIiUHYCEnTK3kKqnwiIgAiIgAiIgAiIgAiIgAiIgAiIgAgkBCToJEC0KAIiIAIiIAIiIAIiIAIiIAIiIAIiIAJlJyBBp+wtpPqJgAiIgAiIgAiIgAiIgAiIgAiIgAiIQEJAgk4CRIsiIAIiIAIiIAIiIAIiIAIiIAIiIAIiUHYCEnTK3kJ9qH7jjjtu2HvvvcOAAQPCf//733DMMcfEaZkvYfLJJw/77LNPGGusscLHH38cDj300KarO/bYY8fhzr/3ve/1OPZ///tfOOCAAwJDostEYEwgMOWUUwZ+JptsssAz/95774V33303/Oc//wnffPNNy5fIc2RGOe2UZeX0hSnXDc8f/OAHkelnn30W3n777cj0q6++6guXoDqKgAiIgAiIgAiIgAj0EoGOCDr8k09nno692Z133hmuvPJKW6yYzjrrrGGPPfbI1nHsK6+8ki2PiTMTTDBBmGmmmeKlWYeor18n13TKKadkl7HLLrsEOidltmmmmSYTcb788suw4447Nl1dOmRnn3121eP23Xff2DmruoM2iEDJCfzoRz8KP/vZzwLv6kkmmSS3tm+++WZ8xz/66KO522utXG+99cLqq6+e7XLrrbeGSy65JFvu1Ax/q5ZaaqmwyiqrhEknnTSeZuTIkeHXv/51+Prrrxs6batlzDbbbPHcgwYNCt///vd7nAsxBw433HBD+Pzzz3ts1woREAEREAEREAEREAER6Iigg6fC6aefXkGXzvJee+0VPv3004r1LMw333w9BJ1//vOfPfYbk1asuOKKYdNNN42XNGLEiHDyySf3+cvrr4IO3j1Dhw4N3kNnmWWWydpTgk6GQjN9lMCPf/zjsPPOOzdU+zvuuCP85S9/aWhfdppuuunCQQcdFLyHzl133RUuuOCChstodsfxxhsvLLvsslHIwUsvtW233bauoNNOGem7Mj2/X8bzCc/Bf//733615kVABERABERABERABEQgdE3QgfW1114brrvuuh7YJeiMGYIOgsZxxx0XwzAIufrVr34V+NpdZivCQyfv+vDYsQ6qBJ08QlrXlwh4QeeLL74Ir732Wvjggw8CIYU8QzPPPHPF5Zx55pnhwQcfrFiXt8Azsv/++/c4vpOCDn9vEGwmmmiivCrFdfUEnXbLSAUdvHHwcHrrrbeiMDz33HNXCMTPPvts9Hpt1Guo6oVpgwiIgAiIgAiIgAiIwBhFoKuCDt45eOngreNNgs6YIej4Nu0r8xJ0+kpLqZ69SYD8Lssvv3x4+umnwzPPPBPS3C7zzjtv2HXXXaOYSz0RII466qi6VR4yZEj0bkt37KSgs+qqq4YNNtggOyUiCX+TEFnM6gk67ZZhgg5eN7fccku4++67K/4uklOHkNXpp5/eqhTF8qeeeipb1owIiIAIiIAIiIAIiIAIdFXQAfdll10W/4H16JsVdPiyagk5+aLJF+NWjFAZckNMPPHE8esoru3dsk6FXBEGMHDgwJjn4qOPPgrvvPNORUchvT68amBpYQd8Jea43rJW24TjuC+mmGKKOCUJMfdG2vFMr6uaoEOHa8YZZ4whgm+88Ubd8Iu03CI8dLgvp5122ngNr7/+es12TM9f5LK1CXzxzMgLm8w7X6ttkldWuo4O71RTTRVXk4MKb5FqSXSLqAfPCO+KV199NSbP5sTGZcIJJ4zrG8lzwvNJvSkP7zXeOfzg0dbXbf311w+rrbZavAzeyYRoVWsTduJZJZQIJuTaggP3O9YNQYd3w/333x9uvPHGGPblQ8oaFXRaLYN7Z/bZZw8vvPBC9HKKF538IhTtkEMOydZeccUV4aabbsqWNSMCIiACIiACIiACIiACXRd0+KedUYV8R7sRQQfX/HXWWScmkTTxgeajw0CHbtiwYYHcDXlGEk9yNIwzzjixM3rYYYeFLbbYIuDW7t3uGVmE3D90Wos2BJy11lorK5bz8k+9GSMspXbhhReGhx9+OF0dl/kiTqcD44s5AsJGG20UBg8eXOGqbx2Oiy66KOs0kvxz6aWXDgsttFAMdbDQoFjYqF/U5aWXXop5MGBbyw488MDchJ50UOmsVQu5KqpNFlxwwbDIIouEBRZYIApzvq58eUfU4es3yUXzLBV0SMi92WabhRlmmCELmeIa+DL+pz/9KXzyySd5xfRY16qgM/7444cNN9wwXo+/z7kW7k8SxT755JM9zlfUCp6LhRdeOBbHuUjcuthiiwVECzPqQefy73//u62qmLbbJr4wwvbMS+GEE06IXhTc57bO9kV0u/TSS2M72bp26oF3BAmA4Y6YxvOGsUwoER1rErkjzNh6xOpq99mcc84Z1l577cA0NZ7Rf/zjH1FYwLOlli2++OLx/vD7cN2NhDf5Yzoxz7tn6623zopGIKklcuHRQxth5NxZdNFFMz6dFHTmmmuu+O7DM8by0viQMupTT9ApogzOU89OPPHE7G8U4tMf//jHeodouwiIgAiIgAiIgAiIQD8i0BVBB6EFAYEvsRjJLvmH3ayeoIOnwk477RS/aNoxedOHHnoonHvuuT1EBMIFjjzyyOwQvorSWcszOu+//e1vCx+VCDHqpz/9ad4pq67785//HIYPH567nU72dtttF7dxPXQ6l1tuudx9WUnn0zxvllxyybDNNttU3dc28NUcYeLxxx+3VT2mZ511VhTKemwYtaLWKFdFtQntSln1DGHsvPPO6+HN5QUdyuCa80acYRveTogKCBr1rBVBZ5ZZZgm//OUva14PAiaj3lx99dX1qtDS9t122y2KSfUORti4/PLLo5Ca7ttum/jyEF9/+MMfxlWIkoy05ENj/L48KzwzZu3U4+CDD+4hGlm5taa/+c1voojo90GEQSDwAq7fbvMIj+eff74t5k5/8pOfhI033rhi28UXXxxuu+22inW9sbD55pvHsCzOjYcO7+xqhnizww47xM0I6L/73e/iyFImeHVS0
MmrU7OCTqfK8OUyetapp56ahbHhSVRtpEh/nOZFQAREQAREQAREQAT6D4GuCDokQ6YTtvLKK0ey7777bkyESacQqyfobLnllnFEkrjzqF+EVxD6QELOOeaYo8Izg07mzTffbLvGaSoesJKOMSNp8YWWjrR1Gtl2+4LA8N8AABv4SURBVO23BzqPRRrXiJBiRjgPYgLGV+zHHnvMNmVT6vHiiy9my37GCzp+PUwRHgg54roJa8CqCTp4nDBEPCIFYhZ1mn/++TPvFLwHGMI3z4OIchGGTAChA2KeDGxrRtBh/1baxHfa7Tref//9WCeYE5pjljcUciroWD3IFUI5JHv13iB46pD4uZ41K+jA8Igjjsi+xlM+bf/yyy8HvHZoE4RNM4SlJ554whYLm+YJOnDAC4x6wNRG86K98HzDO8Zbu23iy/KCjl+P8EZ4IF53hOlQp1qCTrP3hhd0eD5pdzzauMfNEEiff/75uN683FJxhfrRVvaM4LnGsN54jsFz6qmnjp6C1L+vCjpcIx6IeE6ZaIX3FmJEnvG3gHbFU5B7iPseUXrvvffuiodOXp3KKOgQkkVCdTO8dWqJ67afpiIgAiIgAiIgAiIgAv2HQNcEHTorJMm0DhEd3v/7v/+LpGsJOnTI+YefTgNGx5Ihvs2Vn7AUwjJMHEF4oGPgQ31SQYdOFaFVJqIMGDAg0JElBAujo4YA0klrN4dOnqBDuAZcEbzMCEXiXGeccUbmncJXcDwdaBNCNXz4G8chYtCRsLbiqzBfh+sZHbVTTjkl260ZQafVNuErP+2FFxgdfG/UH8Fp0KBBcTX3DO3KucxSQQdBDE8ewhswOupbbbVV9DCzY/AmQCCoZc0KOptssklYaaWVYpHUAU+Ne++9NzsFIWqEsJhnGQIc4W6ImkVaKujQ0UaQQEDBOD8MzUsmLwyk3Tbx15MKOjzXhBgh3pgRvkg447/+9a+K9e3Uwws6hJ4hBqadfu51xBn/LP/tb38L55xzjlUtChS8jzAvXmQ7jJpB7MG7jjatFsZm+5fBQ4f3A/VAvOG+RFBjaoaYjKiHsJxnhPWZN6G/fyToVNLi79o888wTV3Lf77nnntnfvco9tSQCIiACIiACIiACItBfCXRN0GG4cv+PPF/1+bpPJ6eWoONDlejo8k+/5T2wRsNLh7w8ZuQZsA4561JBh84/ORu8IXL4jtf222/fQ+jw+7c77zuBI0Y0P8pVKugQckUCTRgVYb6tCFdCAKtn7Qg6nWoTcpwce+yxWdUJp/MeJamgQwfdi1IciOB3/PHHZyJGI+EgzQg61JHcPeblgUcbz0tq3Me///3vY33Ydvjhh0fPhnS/dpZTQeeAAw7oIZT5ZxJBiVwozSQmr9cmvv6poMN9WC2vlD+ukfla9fCCDnXAWwoPKcQts9133z16rs0000xRXGM9HhR4Upj50CKeTUS5dJQ/27eRaRkEHcQY3g95hmcbgqgXlf1+vKt5zyIGIRISomahoBJ0RpPyfx9Yi8coHpsyERABERABERABERABEfAEuiro0CGlE2odV/vCXUvQIU8M4gWWdpb8hfiO3/XXXx+uueaabHMq6NApJpTFWypG0Lkg1AQjHMvq4I9pZJ7z5LnJ+3/YixB0jGUjdUr34drxhKLDaiEThIcttdRScVdC0xAc6lnKsBkPnWbbJK8u3Fd4bPFjYUHsx5duM7zEfOLZVNCpxtELXCSw/cMf/mBF5k6bEXTS+/+kk06q+BJvbcKJyFNi3miIkghhRZoXdBAxeK5SS58nhFkExTxrpU18Of65JswS7yhE4Gat2Xp4QQdvNcJE8RIkZ5SZJc71PNLhulOxmfxHiHWpV5yVWW9KkmrzbrF98VbCk6pbtuyyywbCYPMMgY/3Lz+p4TGH6Gzhreb5ZPtJ0PmWBG281157ZR6SvG8QpVu5742tpiIgAiIgAiIgAiIgAmMmga4KOiAk6StJQjFGUmIkpLRDi3iAiIDtv//+WZjJX//616rJYL3w4934KcN3uFi2L+vMezvzzDPjP9F8See8dOKwWl+k/fF583keH+xXpKBDfQkvaaaTSP4KvvbTOfMjGOVdQ6N5Y9oRdJptE6snYgf5ZdZYY42YC8nEQtueTlOvllTQwdMrb2Qv317cFz63RXoOlpsRdIYMGRKGDh2aV0zNdeSKImdUkeYFnVqeSKeddlqW5BwBysIXqUu7beKvxws6tZ5/f4zNt1MPL+ggCDI6H2btStgeXnyY9/QhwS/HmhFOhfhnCeFZT1gows9zzz0X34EIZ0V51tl5OznlWhhSG76IpwznzvPh3yOpqE59GOXLRvrL8yiUoPPtxwP+9hhL7hVEzGoeT51sZ5UtAiIgAiIgAiIgAiJQfgJdF3T455+vtHQGMDo7fPn2OWu8oMN2Og1YLY8EOsR0jDHy7Bx99NFxnl9e0KHjZMN9Zzt8N0M4h3l2+NCcsgs6hKCRuLhRm2lUiAidJ9/JrHUsX/8RQupZq4JOK21CXbiHaEsTCOvVj+0kNEagMksFHTrpPseO7edHBkM4sxHGbHs6tY4/683DI93HljfddNPYIbblRqf33HNPHEq90f0b2c8LOnmdcisDjwGEDMyPxlZEm9g5mHpBh2HjueZGrN16eEEHJjZcvb0jCBfCAw3zoVgIFXgseUM0xbOqmthIp50cPQyF3peEHX+NCFe8g0j2jvEM7bfffll4LM8ZTC0vV+opxzH9XdAhgT3MLJE9oXn8HUu9SWElEwEREAEREAEREAEREAEIdF3Q4aR0hBgxBsOdHE+DaoKO7zjWEnQYyhePEywNe2hX0CEcieTCrRgdPL7Ep+Y9PtoNuTJPp/Qcect0KulMWaeBfeBFWBheCJb8ltwfFnJVVkEnFdqoO0lpSY5LXg5LGMzwyZZUu56gg1CT5+k0ePDgsPXWW0ekRQs6m222WVhhhRVi2XTu6w1dHXcc9Ysk0IyWVKR5QYfQIPL55Bl5YkhGjPn8HkW0iT+fF3SaGdmr3Xp4QYccQZ9++mmsViuCDgcSesX7ifeIiRr+OpnPyyOV7lPmZUKpaC8zn8ssbY+8MFQSABsbeFsYGUnZq4X02bnanaYJry2crply2ykDURDh18LREPZ4xp588slmqqB9RUAEREAEREAEREAE+hmBXhF0GCacZJhmdBzNFZ913kOHpKyMqoLV8hjwIVcPPPBARa6LdgWdePKCfxUp6DQquHAJdJp8ThkSmPrRlOwyGQVrzTXXjIuNlt9tDx2+ZpNvAqPDh1BlglRcOeoXHUQ64eYdUU/QIXdFXnjD8ssvH70sKLfokKuVV145DvncaNns1ynzgs6dd94ZLrzwwtxTMSQ1w25jzNvoTEW0iT+hF3TStvP7pfPt1qNoQcfqR4Lt6aefPo5+hWiKt5wZnXg69Za7y9b3pSmiG+IE5t/XqaDTzDX5+6uZ45rZtx0xxs7TahmpdxP3AR5+jEAoEwEREAEREAEREAEREIFaBHpF0KFCuOfbMOF4UpgHBdu8oENuGDo+2COPPBLI3ZFnhHGR1wFjiG2+
6pqVXdDhKyyjKDVjfpSrRgUXyvfiQV54iNUBr4QFF1wwLjZafrcFnWqigl0DU0JACJ8zS0WBNOQq3W7HeQ+wRpJEM0w8nXes3jDneG0gpGCEqjDfzkhIsaAWf3lBJw1dtCJ9zhjW+YTWRbSJnYdpq4JOu/XolKDjr435NMFwtaTcdhwC0NJLL22LcXrfffeVIiyHMDe4Wyinz/GUXmfFBdRZSHM0+d0JhSSM14y/JeRaygubtH3ypq2KMb6sVsogxJfhyGefffasKB/CmK3UjAiIgAiIgAiIgAiIgAjkEOg1QQcxp1reFy/orL/++mG11VaLVSfUhWMIS/FGJ+fAAw/MVhGycvfdd2fLZRR0yPtCgmiMYbS96JBVvMZMq4IOLGGKVRNq6CDRoTWvlmr7pdXrtqBj4S/Uo5oQQ06cQYMGZVVN90sFndS7iwPhQOjfJJNMEsvh3qoXFuWFCB96klXEzfj7k9WM0IZ3Q2+YF3QYVYck0anHiL+H2IeE1mmOGeqesrbrqdcmth9Tz7FaeX5/m2/33uiWoEN9vVfLueeeGxBoqlkZhi2vVreFF144EN5o5kNkEex9mKft46cci/cShsfXZZddFue5/6rlFuL+JJzNG38jyCvWjLUixqTlN1sG3oOEHzMogNkVV1wRcynZsqYiIAIiIAIiIAIiIAIiUItArwk6VMqHU/lKekEnTaL88MMPR3d0y3OCiED+nVlnnTUW8cUXX8TkmpbzgpW+w0zHoNmkyL5uRc0TKkRYiFkznVWOaVXQ8Z0OWNBh9kk3SUC98847B0Qys7IKOr7TTQcQ7y0/tC95aUg4jOeAWco5FXS4r04++eSK3BXrrbdeWH311a2ImCAaJrXMCyPk9CG5aSpE+uN9yCD3MCKQhTH5/biXCf+i3nguFG2+3pRNrhO8Lux5I8cHyWsZJQ1jdCtfjyLaJBb83a9WBZ126+GPbyeHDuGltBdhjeSqSg1hmxBIu0fT+zPdv7cEHUJk33777ZiUmuvwzxl1xJvvF7/4RTY6E+vqJQNnH2+tJEXuq4IOIjHPvHmfwsGHqHkumhcBERABERABERABERCBagR6VdDx4oKvoBd0WO/DrlhGgHjiiSdifhS+Ck899dSsjnbDDTeEq666yhbjtIyCDkIU18kUQ1wheTJfoy2Z7/DhwyvElrjjd79aFXQQbI488sgs+SjnJYQILyFsiSWWyHJgfHeqXE8eOiTrrLNONioY+xI+QK4MM4a9HjlypC3GMIirr746XmsRbZKODsVXeZJsM0WQmnfeebNz20zaYU4FHfZDvCC5Mm1Bh9wnxE4Tblu56TS9tz///PPAkNYWCsJoTQ899FB2GO2CeGHhKmx4+umnI3vESRJzzzDDDNEbgc4/4hCeMUVbKuhQvj1v5MxJ7490GPgi2sRfU6uCTrv1KErQWWaZZcJWW20VLwlhj2eN5OMIYtyj5AczTzjaGXHann/PweZ7S9BBtLOhtHm+SMZNrim8TAh19WFP1BUxEiGwGeumoMO7C6HNjGuzdmCdF18Rr3hv8Px6a6cMPkAwPLk383Lz6/w8IZDVQo79fpoXAREQAREQAREQARHoPwR6VdChY0rum7QzkAo6dP7pvNoIINWaB68JwhfSxLhFiAfVztnO+lVWWSVsuOGGVYuolUuhVUGHk62xxhph3XXXrXpeNiDwWLvkeeggPLTSudhxxx1jfpgi2oRRlshPY94ieRdEAmO223D09QQdBBfLfZOW9+GHH8ZcR2nHLt3Plv1obrbOpmmeJ9YjHP385z/vIajZMX7aTUHHn9fP33bbbeHiiy/2q+LIV+22iS+wVUGn3XujE4KOv650HmGVMLE8ryy/bxkEHV+fvHmeD0RjPM2asVYEHc7Du8Qbfyu8IOO32fyWW24Z8xfZcr1p+jeJ/dspI0/QqVeHvPdwvWO0XQREQAREQAREQAREYMwm0BFBh6+2JPe0jvHll18ehybPQ8kX/zQE6tBDDw0Mxe0NAWHzzTcP5J7xX1LZh044nUs8c+gYpUYSV/4h5zg8JQgpyjPLZUEZfD1FDOi04XVBJ23++eePX8D9tdXKvcKw7wgGGJ4c5Hhp1DjHSiutFNZee+3MQ8iORbS45ZZbYnhFrfJpW0KTrI3t+FpT2onwFbx2imoTRD6G/bYE23Z+vBxefPHFcOaZZ8Zzwhk74ogjwvPPP2+7hammmiqugwlDnZPDhNARG6nHdsRLBY+DvBGwbJ90imA5ZMiQmMMHLwbvfVMtTw7n3WSTTQIePnlsSZaMdxrePYQfFm3eQ4fE4nPOOWccGc3fl9SBbbfffnvu6dttE1/oQQcdFD2TWJe2nd8vb76dehAuRKJavLVgYuKEvSPwUrEcYIxSRFJz3ntpp5ucMIzgN9dccwX2yzM8L7gf8kKy0v29x49tqzZSnW0vYkrIIe8oPNa4zjzD84iwIXJM5b2H847x63wi9mHDhoVLL73Ub+4xT04e3uveHn300fi3x6/Lm/dJzvO2p+vy/ia1UwbvAz5mNGPNvuebKVv7ioAIiIAIiIAIiIAI9E0CHRF0OomCzgRhMvxDTKedr8HkdmilA9HJevaFsgmhmXbaaaPnEyILIlqaALcvXAd15Cs910Li4jfffDO88sorWXhTK9dAWYTEEAYBl3pf/Fs5R61jEFAGDhwYrwkhALEJMYl73cK2ah3f6jYv6Fx00UVRtOH8iBuEpfC84b3VyPNWdJu0ek1lqAftyXsLIdM8yt57773ATzMiYasMijoOUZL7kufMroN3BuI3gk4j90VRdVlqqaWi+GrlERp18Khk7ozeJxMBERABERABERABERCB/kCgzwk6/aFRdI0i0FsE8gSd3qqLzisCtQjgSYeoY/bggw9Gjzxb1lQEREAEREAEREAEREAExnQCEnTG9BbW9YlAEwQk6DQBS7v2KgHCrWwodDyDDjzwwJisuVcrpZOLgAiIgAiIgAiIgAiIQBcJSNDpImydSgTKTkCCTtlbSPWDAGF0JEQ2u//++wM5x2QiIAIiIAIiIAIiIAIi0J8ISNDpT62taxWBOgQk6NQBpM2lIEDCcJK6kweM3DkkUe5GEvtSXLwqIQIiIAIiIAIiIAIiIALfEZCgo1tBBEQgI7D++uvH0YxYcfXVV4cRI0Zk2zQjAiIgAiIgAiIgAiIgAiIgAiJQHgISdMrTFqqJCIiACIiACIiACIiACIiACIiACIiACDREQIJOQ5i0kwiIgAiIgAiIgAiIgAiIgAiIgAiIgAiUh4AEnfK0hWoiAiIgAiIgAiIgAiIgAiIgAiIgAiIgAg0RkKDTECbtJAIiIAIiIAIiIAIiIAIiIAIiIAIiIALlISBBpzxtoZqIgAiIgAiIgAiIgAiIgAiIgAiIgAiIQEMEJOg0hEk7iYAIiIAIiIAIiIAIiIAIiIAIiIAIiEB5CEjQKU9bqCYiIAIiIAIiIAIiIAIiIAIiIAIiIAIi0BABCToNYdJOIiACIiACIiACIiACIiACIiACIiACIlAeAhJ0ytMWqokIiIAIiIAIiIAIiIA
IiIAIiIAIiIAINERAgk5DmLSTCIiACIiACIiACIiACIiACIiACIiACJSHgASd8rSFaiICIiACIiACIiACIiACIiACIiACIiACDRGQoNMQJu0kAiIgAiIgAiIgAiIgAiIgAiIgAiIgAuUhIEGnPG2hmoiACIiACIiACIiACIiACIiACIiACIhAQwQk6DSESTuJgAiIgAiIgAiIgAiIgAiIgAiIgAiIQHkISNApT1uoJiIgAiIgAiIgAiIgAiIgAiIgAiIgAiLQEAEJOg1h0k4iIAIiIAIiIAIiIAIiIAIiIAIiIAIiUB4CEnTK0xaqiQiIgAiIgAiIgAiIgAiIgAiIgAiIgAg0RECCTkOYtJMIiIAIiIAIiIAIiIAIiIAIiIAIiIAIlIeABJ3ytIVqIgIiIAIiIAIiIAIiIAIiIAIiIAIiIAINEZCg0xAm7SQCIiACIiACIiACIiACIiACIiACIiAC5SEgQac8baGaiIAIiIAIiIAIiIAIiIAIiIAIiIAIiEBDBCToNIRJO4mACIiACIiACIiACIiACIiACIiACIhAeQhI0ClPW6gmIiACIiACIiACIiACIiACIiACIiACItAQAQk6DWHSTiIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQHgISdMrTFqqJCIiACIiACIiACIiACIiACIiACIiACDREQIJOQ5i0kwiIgAiIgAiIgAiIgAiIgAiIgAiIgAiUh4AEnfK0hWoiAiIgAiIgAiIgAiIgAiIgAiIgAiIgAg0RkKDTECbtJAIiIAIiIAIiIAIiIAIiIAIiIAIiIALlISBBpzxtoZqIgAiIgAiIgAiIgAiIgAiIgAiIgAiIQEMEJOg0hEk7iYAIiIAIiIAIiIAIiIAIiIAIiIAIiEB5CHRE0JlrrrnCXnvt1dZVfv3112Hsscduq4z3338/TDnllL1eRlsVGHXwV199FcYdd9y2iilLGWrX0c1YljYpoh5qV7XraAKVc3oPj+ZRxLNWRBl6XsvXJmrX0W1SxFwRPMtShp7X0XdEWdqkiHqoXdWuowlUzun/ptE8injWiiijiOf1mmuuCddff/3oiytwriOCzgILLBB22223AqvZWlEffvhhmHTSSVs7+LujiiijrQro4B4EimiTIsroUTGtaItAEW1SRBltXYQO7kGgiDYpooweFdOKtggU0SZFlNHWRejgHgSKaJMiyuhRMa1oi0ARbVJEGW1dhA7uQaCINimijB4V04q2CBTRJkWU0dZF6OAeBG666aZwxRVX9FhfxIqOCDqzzTZb2Hnnnduq3/jjjx++/PLL8M0337RczkcffRQmmWSSlo/nwHbLGGeccQI/I0eObKseY401VlssOHkZylC7Vt4GZWiTIu4NtavatZLA6KV236GU1G4Zeg+Pbg/m9LxW8tB7eDSPdp81Smq3DD2vo9uDOT2vlTz0vI7m0e6zRkntlqHndXR7MKfntZKHntfRPG688cYwbNiw0SsKnOuIoFNg/VSUCIiACIiACIiACIiACIiACIiACIiACIhAQkCCTgJEiyIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQdgISdMreQqqfCIiACIiACIiACIiACIiACIiACIiACCQEJOgkQLQoAiIgAiIgAiIgAiIgAiIgAiIgAiIgAmUnIEGn7C2k+omACIiACIiACIiACIiACIiACIiACIhAQkCCTgJEiyIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQdgISdMreQqqfCIiACIiACIiACIiACIiACIiACIiACCQEJOgkQLQoAiIgAiIgAiIgAiIgAiIgAiIgAiIgAmUnIEGn7C2k+omACIiACIiACIiACIiACIiACIiACIhAQkCCTgJEiyIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQdgISdMreQqqfCIiACIiACIiACIiACIiACIiACIiACCQEJOgkQLQoAiIgAiIgAiIgAiIgAiIgAiIgAiIgAmUnIEGn7C2k+omACIiACIiACIiACIiACIiACIiACIhAQkCCTgJEiyIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQdgISdMreQqqfCIiACIiACIiACIiACIiACIiACIiACCQEJOgkQLQoAiIgAiIgAiIgAiIgAiIgAiIgAiIgAmUnIEGn7C2k+omACIiACIiACIiACIiACIiACIiACIhAQkCCTgJEiyIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQdgISdMreQqqfCIiACIiACIiACIiACIiACIiACIiACCQEJOgkQLQoAiIgAiIgAiIgAiIgAiIgAiIgAiIgAmUnIEGn7C2k+omACIiACIiACIiACIiACIiACIiACIhAQkCCTgJEiyIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQdgISdMreQqqfCIiACIiACIiACIiACIiACIiACIiACCQEJOgkQLQoAiIgAiIgAiIgAiIgAiIgAiIgAiIgAmUnIEGn7C2k+omACIiACIiACIiACIiACIiACIiACIhAQkCCTgJEiyIgAiIgAiIgAiIgAiIgAiIgAiIgAiJQdgISdMreQqqfCIiACIiACIiACIiACIiACIiACIiACCQE/h8OiefvTcxuxQAAAABJRU5ErkJggg==) Train the Model 4.1 Prepare to Train the ModelYou'll fit the model here, but first you'll set some of the parameters that go into fitting the model.- EPOCHS: You'll train the model for 50 epochs- BATCH_SIZE: Set the `BATCH_SIZE` to an appropriate value. You can look at the ungraded labs from this week for some examples.- length_of_training_dataset: this is the number of training examples. You can find this value by getting the length of `visualization_training_dataset`. - Note: You won't be able to get the length of the object `training_dataset`. (You'll get an error message).- length_of_validation_dataset: this is the number of validation examples. You can find this value by getting the length of `visualization_validation_dataset`. 
- Note: You won't be able to get the length of the object `validation_dataset`.- steps_per_epoch: This is the number of steps it will take to process all of the training data. - If the number of training examples is not evenly divisible by the batch size, there will be one last batch that is not the full batch size. - Try to calculate the number steps it would take to train all the full batches plus one more batch containing the remaining training examples. There are a couples ways you can calculate this. - You can use regular division `/` and import `math` to use `math.ceil()` [Python math module docs](https://docs.python.org/3/library/math.html) - Alternatively, you can use `//` for integer division, `%` to check for a remainder after integer division, and an `if` statement. - validation_steps: This is the number of steps it will take to process all of the validation data. You can use similar calculations that you did for the step_per_epoch, but for the validation dataset. Exercise 6
###Code
import math
# You'll train 50 epochs
EPOCHS = 50
### START CODE HERE ###
# Choose a batch size
BATCH_SIZE = 16
# Get the length of the training set
length_of_training_dataset = len(visualization_training_dataset)
# Get the length of the validation set
length_of_validation_dataset = len(visualization_validation_dataset)
# Get the steps per epoch (may be a few lines of code)
steps_per_epoch = math.ceil(length_of_training_dataset/BATCH_SIZE)
# get the validation steps (per epoch) (may be a few lines of code)
validation_steps = length_of_validation_dataset//BATCH_SIZE
if length_of_validation_dataset % BATCH_SIZE > 0:
validation_steps += 1
### END CODE HERE
###Output
_____no_output_____
###Markdown
4.2 Fit the model to the dataCheck out the parameters that you can set to fit the [Model](https://www.tensorflow.org/api_docs/python/tf/keras/Modelfit). Please set the following parameters.- x: this can be a tuple of both the features and labels, as is the case here when using a tf.Data dataset. - Please use the variable returned from `get_training_dataset()`. - Note, don't set the `y` parameter when the `x` is already set to both the features and labels.- steps_per_epoch: the number of steps to train in order to train on all examples in the training dataset.- validation_data: this is a tuple of both the features and labels of the validation set. - Please use the variable returned from `get_validation_dataset()`- validation_steps: teh number of steps to go through the validation set, batch by batch.- epochs: the number of epochs.If all goes well your model's training will start. Exercise 7
###Code
### YOUR CODE HERE ####
# Fit the model, setting the parameters noted in the instructions above.
history = model.fit(get_training_dataset(visualization_training_dataset),
steps_per_epoch=steps_per_epoch, validation_data=get_validation_dataset(visualization_validation_dataset), validation_steps=validation_steps, epochs=EPOCHS)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
5. Validate the Model 5.1 LossYou can now evaluate your trained model's performance by checking its loss value on the validation set.
###Code
loss = model.evaluate(validation_dataset, steps=validation_steps)
print("Loss: ", loss)
###Output
_____no_output_____
###Markdown
5.2 Save your Model for GradingWhen you have trained your model and are satisfied with your validation loss, please you save your model so that you can upload it to the Coursera classroom for grading.
###Code
# Please save your model
model.save("birds.h5")
# And download it using this shortcut or from the "Files" panel to the left
from google.colab import files
files.download("birds.h5")
###Output
_____no_output_____
###Markdown
5.3 Plot Loss FunctionYou can also plot the loss metrics.
###Code
plot_metrics("loss", "Bounding Box Loss", ylim=0.2)
###Output
_____no_output_____
###Markdown
5.4 Evaluate performance using IoUYou can see how well your model predicts bounding boxes on the validation set by calculating the Intersection-over-union (IoU) score for each image.- You'll find the IoU calculation implemented for you.- Predict on the validation set of images.- Apply the `intersection_over_union` on these predicted bounding boxes.
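Before reading the implementation in the next cell, it can help to work through one toy case by hand; the boxes below are made up purely for illustration:

```python
# Toy example with made-up boxes in [xmin, ymin, xmax, ymax] format.
pred_box = (0.0, 0.0, 2.0, 2.0)
true_box = (1.0, 1.0, 3.0, 3.0)
# Overlap is a 1x1 square; each box has area 4, so the union is 4 + 4 - 1 = 7.
overlap = max(0.0, min(pred_box[2], true_box[2]) - max(pred_box[0], true_box[0])) * \
          max(0.0, min(pred_box[3], true_box[3]) - max(pred_box[1], true_box[1]))
union = 4.0 + 4.0 - overlap
print(overlap / union)  # 1/7, about 0.143, well below the 0.5 threshold used later
```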
###Code
def intersection_over_union(pred_box, true_box):
xmin_pred, ymin_pred, xmax_pred, ymax_pred = np.split(pred_box, 4, axis = 1)
xmin_true, ymin_true, xmax_true, ymax_true = np.split(true_box, 4, axis = 1)
#Calculate coordinates of overlap area between boxes
xmin_overlap = np.maximum(xmin_pred, xmin_true)
xmax_overlap = np.minimum(xmax_pred, xmax_true)
ymin_overlap = np.maximum(ymin_pred, ymin_true)
ymax_overlap = np.minimum(ymax_pred, ymax_true)
#Calculates area of true and predicted boxes
pred_box_area = (xmax_pred - xmin_pred) * (ymax_pred - ymin_pred)
true_box_area = (xmax_true - xmin_true) * (ymax_true - ymin_true)
#Calculates overlap area and union area.
overlap_area = np.maximum((xmax_overlap - xmin_overlap),0) * np.maximum((ymax_overlap - ymin_overlap), 0)
union_area = (pred_box_area + true_box_area) - overlap_area
# Defines a smoothing factor to prevent division by 0
smoothing_factor = 1e-10
#Updates iou score
iou = (overlap_area + smoothing_factor) / (union_area + smoothing_factor)
return iou
#Makes predictions
original_images, normalized_images, normalized_bboxes = dataset_to_numpy_with_original_bboxes_util(visualization_validation_dataset, N=500)
predicted_bboxes = model.predict(normalized_images, batch_size=32)
#Calculates IOU and reports true positives and false positives based on IOU threshold
iou = intersection_over_union(predicted_bboxes, normalized_bboxes)
iou_threshold = 0.5
print("Number of predictions where iou > threshold(%s): %s" % (iou_threshold, (iou >= iou_threshold).sum()))
print("Number of predictions where iou < threshold(%s): %s" % (iou_threshold, (iou < iou_threshold).sum()))
###Output
_____no_output_____
###Markdown
6. Visualize PredictionsLastly, you'll plot the predicted and ground truth bounding boxes for a random set of images and visually see how well you did!
###Code
n = 10
indexes = np.random.choice(len(predicted_bboxes), size=n)
iou_to_draw = iou[indexes]
norm_to_draw = original_images[indexes]
display_digits_with_boxes(original_images[indexes], predicted_bboxes[indexes], normalized_bboxes[indexes], iou[indexes], "True and Predicted values", bboxes_normalized=True)
###Output
_____no_output_____ |
Data/collision_avoidance/khalid_aliweh/train_model.ipynb | ###Markdown
Collision Avoidance - Train ModelWelcome to this host side Jupyter Notebook! This should look familiar if you ran through the notebooks that run on the robot. In this notebook we'll train our image classifier to detect two classes``free`` and ``blocked``, which we'll use for avoiding collisions. For this, we'll use a popular deep learning library *PyTorch*
###Code
import torch
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Upload and extract datasetBefore you start, you should upload the ``dataset.zip`` file that you created in the ``data_collection.ipynb`` notebook on the robot. You should then extract this dataset by calling the command below. You should see a folder named ``dataset`` appear in the file browser. Create dataset instance Now we use the ``ImageFolder`` dataset class available with the ``torchvision.datasets`` package. We attach transforms from the ``torchvision.transforms`` package to prepare the data for training.
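The extraction command itself is not included in this copy of the notebook; a minimal sketch, assuming the uploaded archive is named `dataset.zip` and contains a top-level `dataset/` folder:

```python
# Assumption: dataset.zip sits in the current working directory and holds a
# top-level dataset/ folder, so extracting here produces ./dataset
import zipfile

with zipfile.ZipFile('dataset.zip', 'r') as archive:
    archive.extractall('.')
```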
###Code
dataset = datasets.ImageFolder(
'dataset',
transforms.Compose([
transforms.ColorJitter(0.1, 0.1, 0.1, 0.1),
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
)
###Output
_____no_output_____
###Markdown
Split dataset into train and test sets Next, we split the dataset into *training* and *test* sets. The test set will be used to verify the accuracy of the model we train.
###Code
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [len(dataset) - 50, 50])
###Output
_____no_output_____
###Markdown
Create data loaders to load data in batches We'll create two ``DataLoader`` instances, which provide utilities for shuffling data, producing *batches* of images, and loading the samples in parallel with multiple workers.
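Once the two loaders in the next cell exist, one quick sanity check is to pull a single batch and look at its shapes (these assume the 224x224 transform above and `batch_size=4`):

```python
# Grab a single batch from the training loader and inspect its shapes.
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([4, 3, 224, 224])
print(labels.shape)  # torch.Size([4])
```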
###Code
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=4,
shuffle=True,
num_workers=4
)
test_loader = torch.utils.data.DataLoader(
test_dataset,
batch_size=4,
shuffle=True,
num_workers=4
)
###Output
_____no_output_____
###Markdown
Define the neural networkNow, we define the neural network we'll be training. The *torchvision* package provides a collection of pre-trained models that we can use.In a process called *transfer learning*, we can repurpose a pre-trained model (trained on millions of images) for a new task that has possibly much less data available.Important features that were learned in the original training of the pre-trained model are re-usable for the new task. We'll use the ``alexnet`` model.
###Code
model = models.alexnet(pretrained=True)
###Output
_____no_output_____
###Markdown
The ``alexnet`` model was originally trained for a dataset that had 1000 class labels, but our dataset only has two class labels! We'll replacethe final layer with a new, untrained layer that has only two outputs.
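A related, optional idea that is not used in the rest of this notebook: since the ``alexnet`` feature extractor created above is already trained, some transfer-learning setups freeze it so that only the new output layer receives gradient updates. A minimal sketch:

```python
# Optional: freeze AlexNet's convolutional feature extractor so that only the
# newly added classifier layer is trained (not done in this notebook).
for param in model.features.parameters():
    param.requires_grad = False
```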
###Code
model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 2)
###Output
_____no_output_____
###Markdown
Finally, we transfer our model for execution on the GPU
###Code
device = torch.device('cuda')
model = model.to(device)
###Output
_____no_output_____
###Markdown
Train the neural networkUsing the code below we will train the neural network for 30 epochs, saving the best performing model after each epoch.> An epoch is a full run through our data.
###Code
import time
NUM_EPOCHS = 30
BEST_MODEL_PATH = 'best_model.pth'
best_accuracy = 0.0
since = time.time()
accuracy = {'train':[], 'test':[]}
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
for epoch in range(NUM_EPOCHS):
train_error_count = 0.0
for images, labels in iter(train_loader):
images = images.to(device)
labels = labels.to(device)
optimizer.zero_grad()
outputs = model(images)
loss = F.cross_entropy(outputs, labels)
train_error_count += float(torch.sum(torch.abs(labels - outputs.argmax(1))))
loss.backward()
optimizer.step()
test_error_count = 0.0
for images, labels in iter(test_loader):
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
test_error_count += float(torch.sum(torch.abs(labels - outputs.argmax(1))))
train_accuracy = 1.0 - float(train_error_count) / float(len(train_dataset))
test_accuracy = 1.0 - float(test_error_count) / float(len(test_dataset))
accuracy['train'].append(train_accuracy)
accuracy['test'].append(test_accuracy)
print('epoch %d: test accuracy %f train accuracy %f ' % (epoch, test_accuracy, train_accuracy))
if test_accuracy > best_accuracy:
torch.save(model.state_dict(), BEST_MODEL_PATH)
best_accuracy = test_accuracy
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
epochs = range(NUM_EPOCHS)
plt.figure()
plt.plot(epochs, accuracy['train'], 'r.', label='Training accuracy')
plt.plot(epochs, accuracy['test'], 'g', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____ |
redrock/RedrockBOSSDemo.ipynb | ###Markdown
Redrock on BOSS dataThis tutorial demonstrates running redrock on BOSS data. Some pieces are a bit cryptic because of code organized for parallelism rather than interactive use. In general one would use the `rrboss` script, but this tutorial pokes under the hood a bit to see the pieces.
###Code
%pylab inline
import os
import numpy as np
from redrock.external import boss
from redrock.zfind import zfind
from redrock.targets import Spectrum, Target, DistTargetsCopy
from redrock.templates import load_dist_templates
###Output
_____no_output_____
###Markdown
Read a subset of the data from spPlate-3678-55208.fits.* If fiberid is not specified, all fibers from the plate are read.* If use_frames is True, the individual exposures will be discovered from the spPlate header. The corresponding spCFrame files should be in the same directory as the spPlate file.* BEWARE: The targets list might not have the same order as the input fiberid. **NOTE**: BOSS/eBOSS is publicly accessible and can be downloaded from https://data.sdss.org/sas/dr14/eboss/spectro/redux/ They are also available at NERSC: `/global/cfs/cdirs/cosmo/data/sdss/dr14/eboss/spectro/redux/v5_10_0`.
###Code
if 'NERSC_HOST' in os.environ:
spplate = "/global/cfs/cdirs/cosmo/data/sdss/dr14/eboss/spectro/redux/v5_10_0/3678/spPlate-3678-55208.fits"
else:
spplate="data/spPlate-3678-55208.fits"
targets,meta = boss.read_spectra([spplate,], use_frames=False, fiberid=[36,51,52,54,60])
dtargets = DistTargetsCopy(targets)
dwave = dtargets.wavegrids()
dtemplates = load_dist_templates(dwave)
templates = dict()
for dt in dtemplates:
templates[dt.template.full_type] = dt.template
###Output
_____no_output_____
###Markdown
Doing the redshift scansDefine the templates and do the redshift scans over those templates. This might take a few minutes, please be patient! Your computer is working hard.
###Code
zscan, zfit = zfind(dtargets, dtemplates)
###Output
_____no_output_____
###Markdown
Exploring the outputThe zscan dictionary contains all the redshift scan information for each target and each template. For example, here's a plot of the $\chi^2$ vs $z$ for the first target.
###Code
targetid = targets[0].id
for template in dtemplates:
full_type = template.template.full_type
plot(zscan[targetid][full_type]['redshifts'],\
zscan[targetid][full_type]['zchi2'],label=full_type)
legend(loc=0,frameon=False)
ylim(4000,6000)
grid()
xlabel("redshift",fontsize=24)
ylabel("$\chi^2$",fontsize=24)
###Output
_____no_output_____
###Markdown
Excellent! We find that the best fit corresponds to a QSO template at redshift around 1.8. Let's confirm that by looking at the zfit table:
###Code
print("Redrock thinks {} is a {} at redshift {}".format(targets[0].id,zfit[0]['spectype'],zfit[0]['z']))
###Output
_____no_output_____
###Markdown
Let's now make a plot of the spectrum and its best fit. The best fit is obtained by evaluating the best fit template (in this case a QSO) at the best fit redshift. Let's list the template types:
###Code
for i,t in enumerate(templates.values()):
print(i,t.full_type)
###Output
_____no_output_____
###Markdown
We now evaluate the template 'QSO' at the best fit redshift:
###Code
tid = targets[0].id
t_qso = templates['QSO']
## several minima are stored in the zfit table
minimum_number = 0
## select the target id and minimum number
w = (zfit[:]['targetid']==tid) & (zfit[:]['znum']==minimum_number)
## now get the coefficients
coeff = zfit[w]['coeff'].reshape(-1)
zbest = zfit[w]['z'][0]
## compute the best fit:
fit = t_qso.eval(coeff[:4],targets[0].spectra[0].wave,zbest)
## remultiply by (1+z)
fit *= (1+zbest)
wave=targets[0].spectra[0].wave
flux = targets[0].spectra[0].flux
plot(wave,flux)
plot(wave,fit)
ylim(-1,6)
###Output
_____no_output_____
###Markdown
Let's add a bit of rebinning to smooth out the noise:
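The next cell does this with a row-normalized band matrix; an equivalent lightweight alternative (ignoring behaviour right at the spectrum edges) is a simple running-mean convolution:

```python
# Running mean over a window of 9 pixels (|i - j| < 5), comparable away from
# the edges to the matrix-based smoothing in the next cell.
kernel = np.ones(9) / 9.0
smooth_flux = np.convolve(flux, kernel, mode='same')
smooth_fit = np.convolve(fit, kernel, mode='same')
```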
###Code
# Build a boxcar smoothing matrix: A[i, j] = 1 where |i - j| < rebin, else 0
i = np.arange(len(fit), dtype=float)
A = i - i[:, None]
rebin = 5
w = abs(A) < rebin
A[w] = 1.
A[~w] = 0
# Normalize each row so that A.dot(x) is a running mean of x over the window
A /= A.sum(axis=1).reshape(-1, 1)
plot(wave, A.dot(flux))
plot(wave, A.dot(fit))
###Output
_____no_output_____ |
01神经网络和深度学习/Code编程作业/deeplearning第1专题编程作业/deeplearning编程作业/week4/Building your Deep Neural Network - Step by Step/第四周-一步步搭建多层神经网络以及应用1.ipynb | ###Markdown
Link: https://blog.csdn.net/u013733326/article/details/79767169 In this lesson we will learn how to use Python to solve an image-classification problem with multiple hidden layers. This is the third Python coding exercise of the course; through it you will build a multi-layer neural network model step by step, and the multi-layer model built here can raise the accuracy of the earlier cat-classification problem to 80%. After finishing this notebook you should be able to: 1. Use non-linear units (such as ReLU) to improve your model. 2. Build a neural network with more than one hidden layer. 3. Create an easy-to-call model function. Before we start, let's look at what we are going to do. In this tutorial we build two neural networks: a two-layer network and a multi-layer network whose number of layers you can define yourself. The difficulty is a bit higher this time, but the aim is to keep the explanation simple. Briefly, the tricky part is the **[LINEAR->ACTIVATION] forward function**: for a multi-layer network with the structure input layer -> hidden layer -> hidden layer -> ... -> hidden layer -> output layer, in each layer we first compute Z = np.dot(W, A) + b, which is called [linear_forward], and then compute A = relu(Z) or A = sigmoid(Z), which is called [linear_activation_forward]; together these form the computation of one layer, so every layer has two steps: first compute Z, then compute A (you can also refer to the accompanying figure). The steps are: 1. Initialize the network parameters 2. Forward propagation: 2.1 compute the linear part of a layer, 2.2 compute the activation part (ReLU is used L-1 times, sigmoid once), 2.3 combine the linear part and the activation 3. Compute the cost 4. Backward propagation: 4.1 the backward formula for the linear part, 4.2 the backward formula for the activation part, 4.3 combine the linear and activation backward formulas 5. Update the parameters. Note that for every forward function there is a corresponding backward function. That is why every step of the forward module stores some values in a cache; the cached values are useful for computing gradients, and the backward module will use the cache to compute them. Now we formally start building the two-layer and the multi-layer neural networks. Preparing the packages: before starting we need to import a few packages:
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
import testCases_v2 as testCases#参见资料包
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward #参见资料包
import lr_utils #参见资料包
np.random.seed(1) # 指定随机种子
###Output
_____no_output_____
###Markdown
Initialize the parameters. For a two-layer neural network, the model structure is LINEAR -> ReLU -> LINEAR -> sigmoid. The initialization function is as follows:
###Code
def initialize_parameters(n_x,n_h,n_y):
"""
此函数是为了初始化两层网络参数而使用的函数。
参数:
n_x - 输入层节点数量
n_h - 隐藏层节点数量
n_y - 输出层节点数量
返回:
parameters - 包含你的参数的python字典:
W1 - 权重矩阵,维度为(n_h,n_x)
b1 - 偏向量,维度为(n_h,1)
W2 - 权重矩阵,维度为(n_y,n_h)
b2 - 偏向量,维度为(n_y,1)
"""
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
#使用断言确保我的数据格式是正确的
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
# 初始化完成我们来测试一下:
print("==============测试initialize_parameters==============")
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
==============测试initialize_parameters==============
W1 = [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]]
b1 = [[0.]
[0.]]
W2 = [[ 0.01744812 -0.00761207]]
b2 = [[0.]]
###Markdown
L-layer network. The two-layer network has been tested; what does initialization look like for an L-layer neural network? Assume the dimension of X (the input data) is (12288, 209):

| | **Shape of W** | **Shape of b** | **Activation** | **Shape of Activation** |
|---|---|---|---|---|
| **Layer 1** | $(n^{[1]},12288)$ | $(n^{[1]},1)$ | $Z^{[1]} = W^{[1]} X + b^{[1]}$ | $(n^{[1]},209)$ |
| **Layer 2** | $(n^{[2]}, n^{[1]})$ | $(n^{[2]},1)$ | $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ | $(n^{[2]}, 209)$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| **Layer L-1** | $(n^{[L-1]}, n^{[L-2]})$ | $(n^{[L-1]}, 1)$ | $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ | $(n^{[L-1]}, 209)$ |
| **Layer L** | $(n^{[L]}, n^{[L-1]})$ | $(n^{[L]}, 1)$ | $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ | $(n^{[L]}, 209)$ |

Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \\ t \\ u\end{bmatrix}\tag{2}$$Then $WX + b$ will be:$$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\end{bmatrix}\tag{3} $$
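A quick numpy check of the broadcasting rule described above (toy values only, not part of the assignment):

```python
# b has shape (3, 1) and is broadcast across the columns of W @ X, which has shape (3, 3).
W = np.arange(9).reshape(3, 3)
X = np.ones((3, 3))
b = np.array([[1.], [2.], [3.]])
print((np.dot(W, X) + b).shape)  # (3, 3)
```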
###Code
def initialize_parameters_deep(layers_dims):
"""
此函数是为了初始化多层网络参数而使用的函数。
参数:
layers_dims - 包含我们网络中每个图层的节点数量的列表
返回:
parameters - 包含参数“W1”,“b1”,...,“WL”,“bL”的字典:
W1 - 权重矩阵,维度为(layers_dims [1],layers_dims [1-1])
bl - 偏向量,维度为(layers_dims [1],1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims)
for l in range(1,L):
parameters["W" + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) / np.sqrt(layers_dims[l - 1])
parameters["b" + str(l)] = np.zeros((layers_dims[l], 1))
#确保我要的数据的格式是正确的
assert(parameters["W" + str(l)].shape == (layers_dims[l], layers_dims[l-1]))
assert(parameters["b" + str(l)].shape == (layers_dims[l], 1))
return parameters
#测试initialize_parameters_deep
print("==============测试initialize_parameters_deep==============")
layers_dims = [5,4,3]
parameters = initialize_parameters_deep(layers_dims)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
==============测试initialize_parameters_deep==============
W1 = [[ 0.79989897 0.19521314 0.04315498 -0.83337927 -0.12405178]
[-0.15865304 -0.03700312 -0.28040323 -0.01959608 -0.21341839]
[-0.58757818 0.39561516 0.39413741 0.76454432 0.02237573]
[-0.18097724 -0.24389238 -0.69160568 0.43932807 -0.49241241]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.59252326 -0.10282495 0.74307418 0.11835813]
[-0.51189257 -0.3564966 0.31262248 -0.08025668]
[-0.38441818 -0.11501536 0.37252813 0.98805539]]
b2 = [[0.]
[0.]
[0.]]
###Markdown
We have built the parameter-initialization functions for both the two-layer and the multi-layer network; now we build the forward-propagation functions. Forward propagation has the following three steps: * LINEAR * LINEAR -> ACTIVATION, where the activation function will be ReLU or Sigmoid. * [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (the whole model). The linear forward module (vectorized over all the examples) uses formula (3): $$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$$ Linear part [LINEAR]: in the forward pass, the linear part is computed as follows:
###Code
def linear_forward(A,W,b):
"""
实现前向传播的线性部分。
参数:
A - 来自上一层(或输入数据)的激活,维度为(上一层的节点数量,示例的数量)
W - 权重矩阵,numpy数组,维度为(当前图层的节点数量,前一图层的节点数量)
b - 偏向量,numpy向量,维度为(当前图层节点数量,1)
返回:
Z - 激活功能的输入,也称为预激活参数
cache - 一个包含“A”,“W”和“b”的字典,存储这些变量以有效地计算后向传递
"""
Z = np.dot(W,A) + b
assert(Z.shape == (W.shape[0],A.shape[1]))
cache = (A,W,b)
return Z,cache
#测试linear_forward
print("==============测试linear_forward==============")
A,W,b = testCases.linear_forward_test_case()
Z,linear_cache = linear_forward(A,W,b)
print("Z = " + str(Z))
###Output
==============测试linear_forward==============
Z = [[ 3.26295337 -1.23429987]]
###Markdown
Linear-activation part [LINEAR -> ACTIVATION]. For convenience, we group the two operations (linear and activation) into a single function (LINEAR -> ACTIVATION), i.e. a function that performs the LINEAR forward step followed by the ACTIVATION forward step. The math of the two activation functions is: - **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. - **ReLU**: $A = RELU(Z) = max(0, Z)$. To implement the LINEAR -> ACTIVATION step we use the formula $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$, where the function g is either sigmoid() or relu(); sigmoid() is only used in the output layer. Now let's build the forward linear-activation part.
###Code
def linear_activation_forward(A_prev,W,b,activation):
"""
实现LINEAR-> ACTIVATION 这一层的前向传播
参数:
A_prev - 来自上一层(或输入层)的激活,维度为(上一层的节点数量,示例数)
W - 权重矩阵,numpy数组,维度为(当前层的节点数量,前一层的大小)
b - 偏向量,numpy阵列,维度为(当前层的节点数量,1)
activation - 选择在此层中使用的激活函数名,字符串类型,【"sigmoid" | "relu"】
返回:
A - 激活函数的输出,也称为激活后的值
cache - 一个包含“linear_cache”和“activation_cache”的字典,我们需要存储它以有效地计算后向传递
"""
if activation == "sigmoid":
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
elif activation == "relu":
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
assert(A.shape == (W.shape[0],A_prev.shape[1]))
cache = (linear_cache,activation_cache)
return A,cache
#测试linear_activation_forward
print("==============测试linear_activation_forward==============")
A_prev, W,b = testCases.linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("sigmoid,A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("ReLU,A = " + str(A))
###Output
==============测试linear_activation_forward==============
sigmoid,A = [[0.96890023 0.11013289]]
ReLU,A = [[3.43896131 0. ]]
###Markdown
The forward-propagation functions needed for the two-layer model are done; what does forward propagation look like for the multi-layer model? We implement it by calling the two functions above. To make the L-layer network easier to implement, we need a function that repeats the previous function (linear_activation_forward with RELU) L-1 times and then follows it with one linear_activation_forward with SIGMOID. Its structure is: **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model. In the code below, AL denotes $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$ (also called Yhat, written mathematically as $\hat{Y}$). The forward-propagation code for the multi-layer model is as follows:
###Code
def L_model_forward(X,parameters):
"""
实现[LINEAR-> RELU] *(L-1) - > LINEAR-> SIGMOID计算前向传播,也就是多层网络的前向传播,为后面每一层都执行LINEAR和ACTIVATION
参数:
X - 数据,numpy数组,维度为(输入节点数量,示例数)
parameters - initialize_parameters_deep()的输出
返回:
AL - 最后的激活值
caches - 包含以下内容的缓存列表:
linear_relu_forward()的每个cache(有L-1个,索引为从0到L-2)
linear_sigmoid_forward()的cache(只有一个,索引为L-1)
"""
caches = []
A = X
L = len(parameters) // 2
for l in range(1,L):
A_prev = A
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
caches.append(cache)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
caches.append(cache)
assert(AL.shape == (1,X.shape[1]))
return AL,caches
#测试L_model_forward
print("==============测试L_model_forward==============")
X,parameters = testCases.L_model_forward_test_case()
AL,caches = L_model_forward(X,parameters)
print("AL = " + str(AL))
print("caches 的长度为 = " + str(len(caches)))
###Output
==============测试L_model_forward==============
AL = [[0.17007265 0.2524272 ]]
caches 的长度为 = 2
###Markdown
Computing the cost function. With the forward-propagation part of both models finished, we need to compute the cost (the error) to check whether the model is actually learning. The cost is computed as follows:$$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
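A worked scalar example of this formula (made-up numbers), which the `compute_cost` function in the next cell should reproduce:

```python
# Made-up predictions and labels: AL = [[0.8, 0.9, 0.4]], Y = [[1, 1, 0]]
# cost = -(log(0.8) + log(0.9) + log(1 - 0.4)) / 3, about 0.280
AL = np.array([[0.8, 0.9, 0.4]])
Y = np.array([[1, 1, 0]])
print(-np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / Y.shape[1])  # ~0.2798
```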
###Code
def compute_cost(AL,Y):
"""
实施等式(4)定义的成本函数。
参数:
AL - 与标签预测相对应的概率向量,维度为(1,示例数量)
Y - 标签向量(例如:如果不是猫,则为0,如果是猫则为1),维度为(1,数量)
返回:
cost - 交叉熵成本
"""
m = Y.shape[1]
cost = -np.sum(np.multiply(np.log(AL),Y) + np.multiply(np.log(1 - AL), 1 - Y)) / m
cost = np.squeeze(cost)
assert(cost.shape == ())
return cost
#测试compute_cost
print("==============测试compute_cost==============")
Y,AL = testCases.compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
###Output
==============测试compute_cost==============
cost = 0.414931599615397
###Markdown
Backward propagation. Backpropagation is used to compute the gradient of the loss function with respect to the parameters. The forward and backward flow is: **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* With the flow chart in hand, let's look at the formulas for the linear part: **Figure 4** The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:$$ dW^{[l]} = \frac{\partial \mathcal{J} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$$$ db^{[l]} = \frac{\partial \mathcal{J} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$Similar to forward propagation, we build backpropagation in three steps:* LINEAR backward* LINEAR -> ACTIVATION backward, where ACTIVATION uses the derivative of the ReLU or Sigmoid activation* [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (the whole model). Linear part [LINEAR backward]: let's implement the linear part of the backward pass:
###Code
def linear_backward(dZ,cache):
"""
为单层实现反向传播的线性部分(第L层)
参数:
dZ - 相对于(当前第l层的)线性输出的成本梯度
cache - 来自当前层前向传播的值的元组(A_prev,W,b)
返回:
dA_prev - 相对于激活(前一层l-1)的成本梯度,与A_prev维度相同
dW - 相对于W(当前层l)的成本梯度,与W的维度相同
db - 相对于b(当前层l)的成本梯度,与b维度相同
"""
A_prev, W, b = cache
m = A_prev.shape[1]
dW = np.dot(dZ, A_prev.T) / m
db = np.sum(dZ, axis=1, keepdims=True) / m
dA_prev = np.dot(W.T, dZ)
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
#测试linear_backward
print("==============测试linear_backward==============")
dZ, linear_cache = testCases.linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
###Output
==============测试linear_backward==============
dA_prev = [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]]
dW = [[-0.10076895 1.40685096 1.64992505]]
db = [[0.50629448]]
###Markdown
Linear-activation part [LINEAR -> ACTIVATION backward]. To help you implement linear_activation_backward, two backward functions are provided: sigmoid_backward implements the backward pass of the sigmoid() function and is called as `dZ = sigmoid_backward(dA, activation_cache)`; relu_backward implements the backward pass of the relu() function and is called as `dZ = relu_backward(dA, activation_cache)`. If g(.) is the activation function, then sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$ Now let's implement the backward linear-activation step:
###Code
def linear_activation_backward(dA,cache,activation="relu"):
"""
实现LINEAR-> ACTIVATION层的后向传播。
参数:
dA - 当前层l的激活后的梯度值
cache - 我们存储的用于有效计算反向传播的值的元组(值为linear_cache,activation_cache)
activation - 要在此层中使用的激活函数名,字符串类型,【"sigmoid" | "relu"】
返回:
dA_prev - 相对于激活(前一层l-1)的成本梯度值,与A_prev维度相同
dW - 相对于W(当前层l)的成本梯度值,与W的维度相同
db - 相对于b(当前层l)的成本梯度值,与b的维度相同
"""
linear_cache, activation_cache = cache
if activation == "relu":
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
elif activation == "sigmoid":
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
return dA_prev,dW,db
#测试linear_activation_backward
print("==============测试linear_activation_backward==============")
AL, linear_activation_cache = testCases.linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
###Output
==============测试linear_activation_backward==============
sigmoid:
dA_prev = [[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]]
dW = [[ 0.10266786 0.09778551 -0.01968084]]
db = [[-0.05729622]]
relu:
dA_prev = [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]]
dW = [[ 0.44513824 0.37371418 -0.10478989]]
db = [[-0.20837892]]
###Markdown
The backward computation for the two-layer model is done; for the multi-layer model we need these same two functions. Here is the flow: **Figure 5** : Backward pass. In the earlier forward computation we stored caches containing (X, W, b and z); during backpropagation we use them to compute the gradient values. So in the L-layer model we iterate backwards from layer L through all the hidden layers, and at each step we use that layer's cache for the backward pass. We mentioned $A^{[L]}$ above; it belongs to the output layer and $A^{[L]} = \sigma(Z^{[L]})$, so we first need dAL, which can be computed with the following line: `dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))` Once it is computed, this post-activation gradient dAL lets us keep propagating backwards. Let's now build the backward-propagation function for the multi-layer model:
###Code
def L_model_backward(AL,Y,caches):
"""
对[LINEAR-> RELU] *(L-1) - > LINEAR - > SIGMOID组执行反向传播,就是多层网络的向后传播
参数:
AL - 概率向量,正向传播的输出(L_model_forward())
Y - 标签向量(例如:如果不是猫,则为0,如果是猫则为1),维度为(1,数量)
caches - 包含以下内容的cache列表:
linear_activation_forward("relu")的cache,不包含输出层
linear_activation_forward("sigmoid")的cache
返回:
grads - 具有梯度值的字典
grads [“dA”+ str(l)] = ...
grads [“dW”+ str(l)] = ...
grads [“db”+ str(l)] = ...
"""
grads = {}
L = len(caches)
m = AL.shape[1]
Y = Y.reshape(AL.shape)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
current_cache = caches[L-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, "sigmoid")
for l in reversed(range(L-1)):
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
return grads
#测试L_model_backward
print("==============测试L_model_backward==============")
AL, Y_assess, caches = testCases.L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
###Output
==============测试L_model_backward==============
dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167]
[0. 0. 0. 0. ]
[0.05283652 0.01005865 0.01777766 0.0135308 ]]
db1 = [[-0.22007063]
[ 0. ]
[-0.02835349]]
dA1 = [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]]
###Markdown
Updating the parameters. With forward and backward propagation both done, we can now update the parameters using:$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$where $\alpha$ is the learning rate.
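Once `update_parameters` is defined in the next cell, all five building blocks exist. A hedged sketch of how they could be assembled into an L-layer training loop (here `X`, `Y` and `layers_dims` are assumed to be provided elsewhere, e.g. loaded via `lr_utils`):

```python
def L_layer_model_sketch(X, Y, layers_dims, learning_rate=0.0075, num_iterations=2500):
    # Sketch only: combines the helper functions defined in this notebook.
    parameters = initialize_parameters_deep(layers_dims)
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)              # forward pass
        cost = compute_cost(AL, Y)                               # cross-entropy cost
        grads = L_model_backward(AL, Y, caches)                  # backward pass
        parameters = update_parameters(parameters, grads, learning_rate)
        if i % 100 == 0:
            print("iteration %i, cost %f" % (i, cost))
    return parameters
```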
###Code
def update_parameters(parameters, grads, learning_rate):
"""
使用梯度下降更新参数
参数:
parameters - 包含你的参数的字典
grads - 包含梯度值的字典,是L_model_backward的输出
返回:
parameters - 包含更新参数的字典
参数[“W”+ str(l)] = ...
参数[“b”+ str(l)] = ...
"""
L = len(parameters) // 2 #整除
for l in range(L):
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
return parameters
#测试update_parameters
print("==============测试update_parameters==============")
parameters, grads = testCases.update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
###Output
==============测试update_parameters==============
W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]]
b1 = [[-0.04659241]
[-1.28888275]
[ 0.53405496]]
W2 = [[-0.55569196 0.0354055 1.32964895]]
b2 = [[-0.84610769]]
|
analytics/jupyterlab/simple-matplot.ipynb | ###Markdown
Install plotting library
###Code
!pip3 install matplotlib
###Output
Requirement already satisfied: matplotlib in ./env/lib/python3.8/site-packages (3.2.1)
Requirement already satisfied: numpy>=1.11 in ./env/lib/python3.8/site-packages (from matplotlib) (1.18.2)
Requirement already satisfied: kiwisolver>=1.0.1 in ./env/lib/python3.8/site-packages (from matplotlib) (1.1.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in ./env/lib/python3.8/site-packages (from matplotlib) (2.4.6)
Requirement already satisfied: cycler>=0.10 in ./env/lib/python3.8/site-packages (from matplotlib) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in ./env/lib/python3.8/site-packages (from matplotlib) (2.8.1)
Requirement already satisfied: setuptools in ./env/lib/python3.8/site-packages (from kiwisolver>=1.0.1->matplotlib) (46.1.3)
Requirement already satisfied: six in ./env/lib/python3.8/site-packages (from cycler>=0.10->matplotlib) (1.14.0)
###Markdown
Import plotting library
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Draw a simple plot
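A slightly fuller variant of the cell below, with axis labels and a title (the values are arbitrary):

```python
plt.plot([1, 2, 10, 8, 20, 16, 17])
plt.xlabel("index")
plt.ylabel("value")
plt.title("Simple line plot")
plt.show()
```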
###Code
plt.plot([1, 2, 10, 8, 20, 16, 17, 19, 2, 3.5, 5.5, 7, 100])
pip install jupyter-client
!pip install jupyter-client
###Output
Requirement already satisfied: jupyter-client in ./env/lib/python3.8/site-packages (6.1.2)
Requirement already satisfied: traitlets in ./env/lib/python3.8/site-packages (from jupyter-client) (4.3.3)
Requirement already satisfied: tornado>=4.1 in ./env/lib/python3.8/site-packages (from jupyter-client) (6.0.4)
Requirement already satisfied: jupyter-core>=4.6.0 in ./env/lib/python3.8/site-packages (from jupyter-client) (4.6.3)
Requirement already satisfied: pyzmq>=13 in ./env/lib/python3.8/site-packages (from jupyter-client) (19.0.0)
Requirement already satisfied: python-dateutil>=2.1 in ./env/lib/python3.8/site-packages (from jupyter-client) (2.8.1)
Requirement already satisfied: ipython-genutils in ./env/lib/python3.8/site-packages (from traitlets->jupyter-client) (0.2.0)
Requirement already satisfied: six in ./env/lib/python3.8/site-packages (from traitlets->jupyter-client) (1.14.0)
Requirement already satisfied: decorator in ./env/lib/python3.8/site-packages (from traitlets->jupyter-client) (4.4.2)
###Markdown
Another sample
###Code
pip install pandoc
###Output
Collecting pandoc
Downloading pandoc-1.0.2.tar.gz (488 kB)
[K |████████████████████████████████| 488 kB 248 kB/s eta 0:00:01
[?25hCollecting ply
Downloading ply-3.11-py2.py3-none-any.whl (49 kB)
[K |████████████████████████████████| 49 kB 237 kB/s eta 0:00:01
[?25hBuilding wheels for collected packages: pandoc
Building wheel for pandoc (setup.py) ... [?25ldone
[?25h Created wheel for pandoc: filename=pandoc-1.0.2-py3-none-any.whl size=19991 sha256=27acd32f3eabb7a026716d0bb8b3ccdb20bb0d43a4878495f6628e6fe23b843c
Stored in directory: /home/javad/.cache/pip/wheels/a4/b9/34/3e82b9444401c2199d721240a388499a262d2e2ad37f6f3fa7
Successfully built pandoc
Installing collected packages: ply, pandoc
Successfully installed pandoc-1.0.2 ply-3.11
Note: you may need to restart the kernel to use updated packages.
|
data-science-1/main.ipynb | ###Markdown
Challenge 3. In this challenge we will practice our knowledge of probability distributions. To do so, the challenge is divided into two parts: 1. The first part has 3 questions about an artificial *data set* with samples from a normal and a binomial distribution. 2. The second part is about analyzing the distribution of one variable of the [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2) _data set_, with 2 questions.> Note: please do not change the names of the answer functions. _General setup_
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Part 1 _Setup_ for part 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Start your analysis of part 1 here
###Code
# Sua análise da parte 1 começa aqui.
#dataframe.binomial.mean()
#sns.boxplot(data=dataframe.normal)
#normal_media = dataframe.normal.mean()
#print(normal_media)
#normal_std = dataframe.normal.std()
#print(normal_std)
#teste = dataframe.normal.to_numpy()
#print(sct.norm.cdf(0.5, loc=normal_media, scale=normal_std))
#norm = sct.norm.rvs(loc=10, scale=3, size=1000) # loc é a média, scale é o desvio padrão. X ~ N(10, 9).
#print(type(teste))
###Output
_____no_output_____
###Markdown
Question 1. What is the difference between the quartiles (Q1, Q2 and Q3) of the `normal` and `binomial` variables of `dataframe`? Answer as a tuple of three elements rounded to three decimal places. In other words, let `q1_norm`, `q2_norm` and `q3_norm` be the quantiles of the `normal` variable and `q1_binom`, `q2_binom` and `q3_binom` the quantiles of the `binomial` variable; what is the difference `(q1_norm - q1_binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
q1_norm = dataframe.normal.quantile(q=0.25)
q2_norm = dataframe.normal.median()
q3_norm = dataframe.normal.quantile(q=0.75)
q1_binom = dataframe.binomial.quantile(q=0.25)
q2_binom = dataframe.binomial.median()
q3_binom = dataframe.binomial.quantile(q=0.75)
# Retorne aqui o resultado da questão 1.
return(np.round(q1_norm - q1_binom, decimals=3), np.round(q2_norm - q2_binom, decimals=3), np.round(q3_norm - q3_binom, decimals=3))
###Output
_____no_output_____
###Markdown
Food for thought:* Did you expect values of this magnitude?* Can you explain how two apparently very different distributions (one discrete, one continuous) end up giving these values? Question 2. Consider the interval $[\bar{x} - s, \bar{x} + s]$, where $\bar{x}$ is the sample mean and $s$ is the standard deviation. What is the probability of this interval, computed from the empirical cumulative distribution function (empirical CDF) of the `normal` variable? Answer as a single scalar rounded to three decimal places.
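For reference when checking the empirical value computed in the next cell: for a normal distribution the theoretical probability of this interval is about 0.683, which you can verify directly with the already-imported `scipy.stats`:

```python
# Theoretical P(mu - s < X < mu + s) for any normal distribution
print(sct.norm.cdf(1) - sct.norm.cdf(-1))  # ~0.6827
```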
###Code
def q2():
normal_media = dataframe.normal.mean()
normal_std = dataframe.normal.std()
#prob = sct.norm.cdf(normal_media+normal_std, loc=normal_media, scale=normal_std)
#return (np.float(np.round(prob, decimals=3)))
dist = ECDF(dataframe.normal)
inferior = dist(normal_media-normal_std)
superior = dist(normal_media+normal_std)
return float(np.round(superior-inferior, decimals=3))
#print(type(q2()))
###Output
_____no_output_____
###Markdown
Food for thought:* Is this value close to the theoretical expectation?* Also try the intervals $[\bar{x} - 2s, \bar{x} + 2s]$ and $[\bar{x} - 3s, \bar{x} + 3s]$. Question 3. What is the difference between the means and the variances of the `binomial` and `normal` variables? Answer as a tuple of two elements rounded to three decimal places. In other words, let `m_binom` and `v_binom` be the mean and variance of the `binomial` variable, and `m_norm` and `v_norm` the mean and variance of the `normal` variable. What are the differences `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Retorne aqui o resultado da questão 3.
m_norm = dataframe.normal.mean()
m_binom = dataframe.binomial.mean()
v_norm = dataframe.normal.var()
v_binom = dataframe.binomial.var()
return(np.round(m_binom - m_norm, decimals=3), np.round(v_binom - v_norm, decimals=3))
###Output
_____no_output_____
###Markdown
Food for thought:* Did you expect values of this magnitude?* What is the effect of increasing or decreasing $n$ (currently 100) on the distribution of the `binomial` variable? Part 2 _Setup_ for part 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Start your analysis of part 2 here
###Code
# Sua análise da parte 2 começa aqui.
stars.head()
###Output
_____no_output_____
###Markdown
Question 4. Considering the `mean_profile` variable of `stars`: 1. Filter only the values of `mean_profile` where `target == 0` (that is, where the star is not a pulsar). 2. Standardize the filtered `mean_profile` variable to have mean 0 and variance 1. We will call the resulting variable `false_pulsar_mean_profile_standardized`. Find the theoretical quantiles of a normal distribution with mean 0 and variance 1 at 0.80, 0.90 and 0.95 using the `norm.ppf()` function available in `scipy.stats`. What are the probabilities associated with these quantiles according to the empirical CDF of the `false_pulsar_mean_profile_standardized` variable? Answer as a tuple of three elements rounded to three decimal places.
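For reference, the theoretical standard-normal quantiles requested above are roughly 0.842, 1.282 and 1.645; a quick check (using the already-imported `scipy.stats`):

```python
print(sct.norm.ppf([0.80, 0.90, 0.95]))  # ~[0.8416, 1.2816, 1.6449]
```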
###Code
def q4():
# Retorne aqui o resultado da questão 4.
false_pulsar = stars['mean_profile'].loc[stars.target == False]
false_pulsar_mean_profile_standardized = sct.zscore(false_pulsar)
quantile_teorico_08 = sct.norm.ppf(0.80, loc=0, scale=1)
quantile_teorico_09 = sct.norm.ppf(0.90, loc=0, scale=1)
quantile_teorico_095 = sct.norm.ppf(0.95, loc=0, scale=1)
dist = ECDF(false_pulsar_mean_profile_standardized)
prob_08 = dist(quantile_teorico_08)
prob_09 = dist(quantile_teorico_09)
prob_095 = dist(quantile_teorico_095)
return (np.round(prob_08, decimals=3), np.round(prob_09, decimals=3), np.round(prob_095, decimals=3))
###Output
_____no_output_____
###Markdown
Food for thought:* Do the values you found make sense?* What might this say about the distribution of the `false_pulsar_mean_profile_standardized` variable? Question 5. What is the difference between the quantiles Q1, Q2 and Q3 of `false_pulsar_mean_profile_standardized` and the corresponding theoretical quantiles of a normal distribution with mean 0 and variance 1? Answer as a tuple of three elements rounded to three decimal places.
###Code
def q5():
# Retorne aqui o resultado da questão 4.
false_pulsar = stars['mean_profile'].loc[stars.target == False]
false_pulsar_mean_profile_standardized = sct.zscore(false_pulsar)
quantile_teorico_08 = sct.norm.ppf(0.25, loc=0, scale=1)
quantile_teorico_09 = sct.norm.ppf(0.50, loc=0, scale=1)
quantile_teorico_095 = sct.norm.ppf(0.75, loc=0, scale=1)
prob_08 = np.quantile(false_pulsar_mean_profile_standardized, 0.25)
prob_09 = np.quantile(false_pulsar_mean_profile_standardized, 0.50)
prob_095 = np.quantile(false_pulsar_mean_profile_standardized, 0.75)
return (np.round(prob_08 - quantile_teorico_08, decimals=3), np.round(prob_09 - quantile_teorico_09, decimals=3), np.round(prob_095 - quantile_teorico_095, decimals=3))
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({'normal': sct.norm.rvs(20, 4, size=10000),
'binomial': sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.head()
dataframe.mean()
dataframe.std()
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
quantiles = dataframe.quantile([0.25, 0.5, 0.75])
return tuple((quantiles.normal - quantiles.binomial).round(3))
q1()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
ecdf = ECDF(dataframe.normal)
m = dataframe.normal.mean()
s = dataframe.normal.std()
return float(round(ecdf(m + s) - ecdf(m - s), 3))
q2()
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
df_mean = dataframe.mean()
df_var = dataframe.var()
return (round(df_mean.binomial - df_mean.normal, 3), round(df_var.binomial - df_var.normal, 3))
q3()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.head()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
not_pulsar_mean_profile = stars.loc[stars.target==False, 'mean_profile']
not_pulsar_mean_profile = (not_pulsar_mean_profile - not_pulsar_mean_profile.mean()) / not_pulsar_mean_profile.std()
def q4():
ecdf = ECDF(not_pulsar_mean_profile)
result = ecdf(sct.norm.ppf([0.8, 0.9, 0.95]))
return tuple(np.round(result, 3))
q4()
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
quantiles = [0.25, 0.5, 0.75]
result = not_pulsar_mean_profile.quantile(quantiles) - sct.norm.ppf(quantiles)
return tuple(result.round(3))
q5()
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.head()
fig,(ax1,ax2)=plt.subplots(1,2)
sns.distplot(dataframe['normal'],ax=ax1)
sns.distplot(dataframe['binomial'],ax=ax2)
plt.show()
dataframe.describe()
normal_mean = dataframe['normal'].mean()
normal_std = dataframe['normal'].std()
binomial_mean = dataframe['binomial'].mean()
binomial_std = dataframe['binomial'].std()
normal_quantiles = np.quantile(dataframe['normal'], [0.25,0.5,0.75])
binomial_quantiles = np.quantile(dataframe['binomial'], [0.25,0.5,0.75])
resultado = np.around((normal_quantiles - binomial_quantiles),3)
resultado
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
# Retorne aqui o resultado da questão 1.
resultado = np.around((normal_quantiles - binomial_quantiles),3)
return tuple(resultado)
pass
q1()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores?
###Code
ecdf = ECDF(dataframe['normal'])
prob = round(float(ecdf(normal_mean + normal_std) - ecdf(normal_mean - normal_std)),3)
prob
###Output
_____no_output_____
###Markdown
Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Retorne aqui o resultado da questão 2.
return prob
pass
q2()
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$.
###Code
ecdf(normal_mean + 2*normal_std) - ecdf(normal_mean - 2*normal_std)
ecdf(normal_mean + 3*normal_std) - ecdf(normal_mean - 3*normal_std)
###Output
_____no_output_____
###Markdown
Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
v_binom = dataframe['binomial'].var()
v_norm = dataframe['normal'].var()
resultado = np.around((binomial_mean-normal_mean, v_binom-v_norm),3)
resultado
def q3():
# Retorne aqui o resultado da questão 3.
return tuple(resultado)
pass
q3()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.head()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
sns.distplot(stars['mean_profile'])
mean_profile_0 = stars.loc[stars['target'] == 0, 'mean_profile']
false_pulsar_mean_profile_standardized = ((mean_profile_0 - mean_profile_0.mean())/mean_profile_0.std())
def q4():
# Retorne aqui o resultado da questão 4.
quant08 = sct.norm.ppf(0.8, loc = 0, scale=1)
quant09 = sct.norm.ppf(0.9, loc = 0, scale=1)
quant095 = sct.norm.ppf(0.95, loc = 0, scale=1)
ecdf = ECDF(false_pulsar_mean_profile_standardized)
resultado = np.around((ecdf(quant08), ecdf(quant09), ecdf(quant095)),3)
return tuple(resultado)
pass
q4()
sns.distplot(false_pulsar_mean_profile_standardized)
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
# Retorne aqui o resultado da questão 5.
serie1 = false_pulsar_mean_profile_standardized.quantile(.25)
serie2 = false_pulsar_mean_profile_standardized.quantile(.5)
serie3 = false_pulsar_mean_profile_standardized.quantile(.75)
teorico1 = sct.norm.ppf(.25, loc=0, scale=1)
teorico2 = sct.norm.ppf(.5, loc=0, scale=1)
teorico3 = sct.norm.ppf(.75, loc=0, scale=1)
resultado = np.around((serie1 - teorico1, serie2 - teorico2, serie3 - teorico3),3)
return tuple(resultado)
pass
q5()
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.head()
# Resumo estatístico do dataset
dataframe.describe().T
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
df_normal = np.quantile(dataframe['normal'], [0.25, 0.5, 0.75])
df_binomial = np.quantile(dataframe['binomial'], [0.25, 0.5, 0.75])
result = (df_normal - df_binomial).round(3)
return tuple(result)
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Cálculo do CDF empírica, média e desvio padrão.
ecdf = ECDF(dataframe.normal)
mean = dataframe.normal.mean()
std = dataframe.normal.std()
# Área acumulada superior - Área acumulada inferior = ecdf(sup) - ecdf(inf)
result = ecdf(mean + std) - ecdf(mean - std)
return float(round(result,3))
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Cálculo da média da variável binomial e normal
m_binom = dataframe.binomial.mean()
m_norm = dataframe.normal.mean()
# Cálculo da variância da variável binomial e normal
v_binom = dataframe.binomial.var()
v_norm = dataframe.normal.var()
result = (round(m_binom - m_norm, 3), round(v_binom - v_norm, 3))
return result
q3()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Start your analysis of part 2 here
###Code
# Your analysis of part 2 starts here.
stars.head()
stars.describe().T
###Output
_____no_output_____
###Markdown
Question 4: Considering the `mean_profile` variable of `stars`: 1. Keep only the values of `mean_profile` where `target == 0` (that is, where the star is not a pulsar). 2. Standardize the filtered `mean_profile` variable to have mean 0 and variance 1. We will call the resulting variable `false_pulsar_mean_profile_standardized`. Find the theoretical quantiles of a normal distribution with mean 0 and variance 1 for 0.80, 0.90 and 0.95 using the `norm.ppf()` function available in `scipy.stats`. What are the probabilities associated with these quantiles according to the empirical CDF of `false_pulsar_mean_profile_standardized`? Answer as a tuple of three elements rounded to three decimal places.
###Code
def q4():
    # Filter mean_profile, keeping only rows where target == 0 (non-pulsars)
filter = stars[stars['target'] == 0]['mean_profile']
    # Standardize the filtered mean_profile values (mean 0, variance 1)
false_pulsar_mean_profile_standardized = sct.zscore(filter)
    # Theoretical quantiles of the standard normal for 0.80, 0.90 and 0.95
qt_norm = sct.norm.ppf([0.8, 0.9, 0.95])
    # Empirical CDF
ecdf = ECDF(false_pulsar_mean_profile_standardized)
return tuple(ecdf(qt_norm).round(3))
pass
###Output
_____no_output_____
###Markdown
Food for thought: * Do the values found make sense? * What might this say about the distribution of the variable `false_pulsar_mean_profile_standardized`? Question 5: What is the difference between the quantiles Q1, Q2 and Q3 of `false_pulsar_mean_profile_standardized` and the same theoretical quantiles of a normal distribution with mean 0 and variance 1? Answer as a tuple of three elements rounded to three decimal places.
###Code
def q5():
norm = sct.norm.ppf([0.25,0.5,0.75])
false_pulsar = stars.mean_profile[stars.target==0]
false_pulsar_mean_profile_standardized = (false_pulsar - false_pulsar.mean()) / false_pulsar.std()
result = np.quantile(false_pulsar_mean_profile_standardized, [0.25,0.5,0.75]) - norm
return tuple(result.round(3))
###Output
_____no_output_____
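###Markdown
As a visual complement to the reflections above (an added sketch, not required by the challenge): a Q-Q plot against the standard normal makes the near-normality of the standardized profile easy to judge. The series is recomputed here because the answer functions keep it local.
###Code
# Q-Q plot of the standardized non-pulsar mean_profile against a standard normal (sketch).
false_pulsar = stars.mean_profile[stars.target == 0]
standardized = (false_pulsar - false_pulsar.mean()) / false_pulsar.std()
sct.probplot(standardized, dist="norm", plot=plt)
plt.show()
###Output
_____no_output_____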
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
df = dataframe
normal_describe = df.normal.describe()
normal_describe
binomial_describe = df.binomial.describe()
binomial_describe
tuple((normal_describe - binomial_describe)[['25%', '50%', '75%']].round(3))
df.normal.hist(bins=int(df.normal.var()*2))
norm_min = round(df.normal.min())
norm_max = round(df.normal.max())
binom_min = round(df.binomial.min())
binom_max = round(df.binomial.max())
#sns.distplot(df.normal, bins=range(norm_min, norm_max), kde=False, hist_kws={"alpha": 0.8});
#sns.distplot(df.binomial, bins=range(binom_min, binom_max), kde=False, hist_kws={"alpha": 0.5});
media = df.normal.mean()
desvio_p = df.normal.std()
prob_inf = sct.norm.cdf((media-desvio_p),media,desvio_p)
prob_sup = sct.norm.cdf((media+desvio_p),media,desvio_p)
prob_sup - prob_inf
ecdf = ECDF(dataframe.normal)
media = dataframe.normal.mean()
desvio_p = dataframe.normal.std()
prob_inf = ecdf(media-desvio_p)
prob_sup = ecdf(media+desvio_p)
prob_sup - prob_inf
df.normal.mean()
normal_m_v = pd.Series((df.normal.mean(), df.normal.var()))
binomial_m_v = pd.Series((df.binomial.mean(), df.binomial.var()))
tuple((binomial_m_v - normal_m_v).round(3))
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
# Retorne aqui o resultado da questão 1.
binomial_qts = dataframe.binomial.quantile([0.25, 0.50, 0.75])
normal_qts = dataframe.normal.quantile([0.25, 0.50, 0.75])
return tuple((normal_qts - binomial_qts).round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Retorne aqui o resultado da questão 2.
media = dataframe.normal.mean()
desvio_p = dataframe.normal.std()
ecdf = ECDF(dataframe.normal)
prob_inf = ecdf(media-desvio_p)
prob_sup = ecdf(media+desvio_p)
return float(prob_sup - prob_inf)
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Retorne aqui o resultado da questão 3.
normal_m_v = pd.Series((dataframe.normal.mean(), dataframe.normal.var()))
binomial_m_v = pd.Series((dataframe.binomial.mean(), dataframe.binomial.var()))
return tuple((binomial_m_v - normal_m_v).round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
stars.info()
stars.describe()
#Testes para a questão 4
aux = stars[stars['target']==False]['mean_profile']
false_pulsar_mean_profile_standardized = (aux - aux.mean())/aux.std()
false_pulsar_mean_profile_standardized
#quantis teóricos
quant_80 = sct.norm.ppf(0.80, loc=0, scale=1)
quant_90 = sct.norm.ppf(0.90, loc=0, scale=1)
quant_95 = sct.norm.ppf(0.95, loc=0, scale=1)
quant_80, quant_90, quant_95
# outra forma de obter os quantis teóricos
teoric_qnt = pd.Series(map(lambda qnt: sct.norm.ppf(qnt, loc=0, scale=1),[.8, .9, .95]))
teoric_qnt
# probabilidade associadas a esses quantis utilizando a CDF empírica
ecdf = ECDF(false_pulsar_mean_profile_standardized)
pdf_80 = round(ecdf(quant_80), 3)
pdf_90 = round(ecdf(quant_90), 3)
pdf_95 = round(ecdf(quant_95), 3)
pdf_80, pdf_90, pdf_95
# outra forma de obter probabilidade associadas a esses quantis utilizando a CDF empírica
# utilizando map e lambda
emp_cdf = pd.Series(map(lambda qnt: ecdf(qnt), teoric_qnt))
emp_cdf
# Testes para a questão 5
# Encontrando os Quartis
aux = stars[stars['target']==False]['mean_profile']
false_pulsar_mean_profile_standardized = (aux - aux.mean())/aux.std()
prof_q1 = false_pulsar_mean_profile_standardized.quantile(.25)
prof_q2 = false_pulsar_mean_profile_standardized.quantile(.50)
prof_q3 = false_pulsar_mean_profile_standardized.quantile(.75)
prof_qs = pd.Series((prof_q1, prof_q2, prof_q3))
prof_qs
# Encontando os quartis com o map e lambda
prof_qs = pd.Series(map(lambda qnt: false_pulsar_mean_profile_standardized.quantile(qnt), [.25, .5, .75]))
prof_qs
# Encontrando os quartis com for in
prof_qs = pd.Series([false_pulsar_mean_profile_standardized.quantile(qnt) for qnt in [.25, .5, .75]])
prof_qs
dist_norm_q1 = sct.norm.ppf(0.25, loc=0, scale=1)
dist_norm_q2 = sct.norm.ppf(0.50, loc=0, scale=1)
dist_norm_q3 = sct.norm.ppf(0.75, loc=0, scale=1)
dist_nomr_qs = pd.Series([dist_norm_q1, dist_norm_q2, dist_norm_q3])
dist_nomr_qs
# Testando o formato da resposta
tuple((prof_qs - dist_nomr_qs).round(3))
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
# Retorne aqui o resultado da questão 4.
aux = stars[stars['target']==False]['mean_profile']
false_pulsar_mean_profile_standardized = (aux - aux.mean())/aux.std()
ecdf = ECDF(false_pulsar_mean_profile_standardized)
teoric_qnt = pd.Series([sct.norm.ppf(qnt) for qnt in [.8, .9, .95]])
emp_cdf = pd.Series([ecdf(qnt) for qnt in teoric_qnt])
return tuple(emp_cdf.round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
# Retorne aqui o resultado da questão 5.
aux = stars[stars['target']==False]['mean_profile']
false_pulsar_mean_profile_standardized = (aux - aux.mean())/aux.std()
quantiles = [0.25, 0.50, 0.75]
prof_qs = pd.Series([false_pulsar_mean_profile_standardized.quantile(qnt) for qnt in quantiles])
dist_nomr_qs = pd.Series([sct.norm.ppf(qnt) for qnt in quantiles])
return tuple((prof_qs - dist_nomr_qs).round(3))
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from IPython import get_ipython
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
# Retorne aqui o resultado da questão 1.
q1_norm = np.quantile(dataframe.normal, 0.25)
q2_norm = np.quantile(dataframe.normal, 0.5)
q3_norm = np.quantile(dataframe.normal, 0.75)
q1_binom = np.quantile(dataframe.binomial, 0.25)
q2_binom = np.quantile(dataframe.binomial, 0.5)
q3_binom = np.quantile(dataframe.binomial, 0.75)
return round((q1_norm - q1_binom),3), round((q2_norm - q2_binom), 3), round((q3_norm - q3_binom), 3)
pass
q1()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Retorne aqui o resultado da questão 2.
m = dataframe.normal.mean()
s = dataframe.normal.std()
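    # Note: this evaluates the theoretical normal CDF (loc=20, scale=4) at the sample mean +/- std,
    # rather than the empirical CDF of the sample; here both give essentially the same value (~0.684).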
    return float(round(sct.norm.cdf(m + s, loc=20, scale=4) - sct.norm.cdf(m - s, loc=20, scale=4), 3))
pass
q2()
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Retorne aqui o resultado da questão 3.
m_binom = dataframe.binomial.mean()
v_binom = dataframe.binomial.var()
m_norm = dataframe.normal.mean()
v_norm = dataframe.normal.var()
return round((m_binom - m_norm),3), round((v_binom - v_norm),3)
pass
q3()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("/home/gabriel/codenation/data-science-1/HTRU_2.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
# Retorne aqui o resultado da questão 4.
mp = stars.loc[stars['target'] == 0]
false_pulsar_mean_profile_standardized = (mp.mean_profile - mp.mean_profile.mean()) / mp.mean_profile.std()
quantis = [0.80, 0.90, 0.95]
result = sct.norm.ppf(quantis, loc=0,scale=1)
ecdf = ECDF(false_pulsar_mean_profile_standardized)
return tuple(ecdf(result).round(3))
pass
q4()
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
# Retorne aqui o resultado da questão 5.
mp = stars.loc[stars['target'] == 0]
false_pulsar_mean_profile_standardized = (mp.mean_profile - mp.mean_profile.mean()) / mp.mean_profile.std()
q1_false = np.quantile(false_pulsar_mean_profile_standardized, 0.25)
q2_false = np.quantile(false_pulsar_mean_profile_standardized, 0.5)
q3_false = np.quantile(false_pulsar_mean_profile_standardized, 0.75)
q1_teorico = sct.norm.ppf(0.25, loc=0,scale=1)
q2_teorico = sct.norm.ppf(0.50, loc=0,scale=1)
q3_teorico = sct.norm.ppf(0.75, loc=0,scale=1)
return round((q1_false - q1_teorico),3), round((q2_false - q2_teorico), 3), round((q3_false - q3_teorico), 3)
pass
q5()
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
#import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
#import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
from sklearn import preprocessing
from scipy.stats import norm
#%matplotlib inline
#from IPython.core.pylabtools import figsize
#figsize(12, 8)
#sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
df = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
normal = df.loc[:,"normal"]
binomial = df.loc[:,"binomial"]
q1_norm = normal.describe().loc["25%"]
q1_binom = binomial.describe().loc["25%"]
q2_norm = normal.describe().loc["50%"]
q2_binom = binomial.describe().loc["50%"]
q3_norm = normal.describe().loc["75%"]
q3_binom = binomial.describe().loc["75%"]
resultado = (round(q1_norm - q1_binom, 3), round(q2_norm - q2_binom, 3), round(q3_norm - q3_binom, 3) )
return resultado
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
normal = df.loc[:,"normal"]
desvio = normal.std()
media = normal.mean()
ecdf = ECDF(df['normal'])
ecdf_df = pd.DataFrame({'values': ecdf.x, 'prob': ecdf.y})
return float((ecdf_df[(ecdf_df['values'] >= (media-desvio)) & (ecdf_df['values'] <= (media+desvio))]['prob'].count()/ecdf_df['prob'].count()).round(decimals=3))
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
normal = df["normal"]
binomial = df["binomial"]
m_norm = normal.mean()
v_norm = normal.var()
m_binom = binomial.mean()
v_binom = binomial.var()
resultado = (round(m_binom - m_norm, 3), round(v_binom - v_norm,3))
return resultado
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
df_f = stars['mean_profile'][stars['target'] == False]
false_pulsar_mean_profile_standardized = (df_f - df_f.mean())/df_f.std(ddof=0)
ppf = sct.norm.ppf([0.80, 0.90, 0.95])
ecdf = ECDF(false_pulsar_mean_profile_standardized)
return (ecdf(ppf[0]).round(decimals=3), ecdf(ppf[1]).round(decimals=3), ecdf(ppf[2]).round(decimals=3))
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
df_f = stars['mean_profile'][stars['target'] == False]
false_pulsar_mean_profile_standardized = (df_f - df_f.mean())/df_f.std(ddof=0)
ppf = sct.norm.ppf([0.25, 0.5, 0.75])
q1 = false_pulsar_mean_profile_standardized.describe()['25%']
q2 = false_pulsar_mean_profile_standardized.describe()['50%']
q3 = false_pulsar_mean_profile_standardized.describe()['75%']
return ((q1-ppf[0]).round(decimals=3), (q2-ppf[1]).round(decimals=3), (q3-ppf[2]).round(decimals=3))
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
#tuple(dataframe.describe().reset_index().query('index in ["25%","50%","75%"]')[['binomial','normal']].diff(axis=1)['normal'].round(3))
#(sct.norm.cdf(20+2, loc=20, scale=2)-sct.norm.cdf(20-2, loc=20, scale=2)).round(3)
#(dataframe.mean().diff().binomial.round(3), dataframe.var().diff().binomial.round(3))
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
return tuple(dataframe.describe().reset_index().query('index in ["25%","50%","75%"]')[['binomial','normal']].diff(axis=1)['normal'].round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
    # Empirical CDF of `normal`, evaluated at the sample mean +/- one standard deviation.
    ecdf, m, s = ECDF(dataframe['normal']), dataframe['normal'].mean(), dataframe['normal'].std()
    return float(round(ecdf(m + s) - ecdf(m - s), 3))
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
return (dataframe.mean().diff().binomial.round(3), dataframe.var().diff().binomial.round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
def padronizer(x):
return (x-x.mean())/x.std()
false_pulsar_mean_profile_standardized = padronizer(stars.query('target == 0').mean_profile)
#false_pulsar_mean_profile_standardized
ecdf = ECDF(false_pulsar_mean_profile_standardized)
tuple( ecdf(x).round(3) for x in sct.norm.ppf([0.80, 0.90, 0.95], loc=false_pulsar_mean_profile_standardized.mean(), scale=false_pulsar_mean_profile_standardized.std()).round(3))
#tuple(sct.norm.ppf(x, loc=0, scale=1).round(3) for x in [0.80, 0.90, 0.95])
tuple(np.round(x,3) for x in false_pulsar_mean_profile_standardized.quantile([0.25, 0.5, 0.75])-sct.norm.ppf([0.25, 0.5, 0.75], loc=0, scale=1))
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
return tuple( ecdf(x).round(3) for x in sct.norm.ppf([0.80, 0.90, 0.95], loc=false_pulsar_mean_profile_standardized.mean(), scale=false_pulsar_mean_profile_standardized.std()).round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
return tuple(np.round(x,3) for x in false_pulsar_mean_profile_standardized.quantile([0.25, 0.5, 0.75])-sct.norm.ppf([0.25, 0.5, 0.75], loc=0, scale=1))
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
# %matplotlib inline
#from IPython.core.pylabtools import figsize
#figsize(12, 8)
SMALL_SIZE = 12
MEDIUM_SIZE = 14
BIGGER_SIZE = 16
# Font Sizes
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('figure', figsize = (8, 6)) # Figure Size
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000), # loc = mean, scale = std
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.describe()
###Output
_____no_output_____
###Markdown
Statistically, due to the chosen parameters, the two distributions turn out to be quite similar.
###Code
dataframe.hist(bins = 20) # eixo y com escalas distintas
plt.ylim(0,2000);
# Normal
from math import pi
mu = 20 # loc = mean
sigma = 4 # scale = std
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
p = 1/np.sqrt(2*pi*sigma**2)*np.exp(-(x-mu)**2/(2*sigma**2))
plt.plot(x, sct.norm.pdf(x, mu, sigma), label='scipy')
plt.plot(x, p, 'k:', lw=5, label='formula')
plt.title('Normal')
plt.legend();
# binomial
from math import factorial
n = 100
p = 0.2
k_vec = np.arange(1,n+1) # target, starts at 1 goes to n, all possible outcomes
def compute_binomial_prob(n,k,p):
return factorial(n)/(factorial(k)*factorial(n-k)) * p**k * (1-p)**(n-k)
P_vec = [compute_binomial_prob(n, k, p) for k in k_vec]
plt.plot(k_vec, sct.binom.pmf(k_vec, n, p), 'r', label='scipy')
plt.plot(k_vec, P_vec, 'k:', lw=5, label='formula')
plt.title('Binomial')
plt.legend();
plt.plot(x, sct.norm.pdf(x, mu, sigma), 'k:', lw=5, label='normal')
plt.plot(k_vec, sct.binom.pmf(k_vec, n, p), 'r', label='binomial')
plt.title("Normal vs Binomial")
plt.xlim(5,35) # limitar range x
plt.legend();
###Output
_____no_output_____
###Markdown
Graphically as well, due to the chosen parameters, the two distributions turn out to be quite similar. Question 1: What is the difference between the quartiles (Q1, Q2 and Q3) of the `normal` and `binomial` variables of `dataframe`? Answer as a tuple of three elements rounded to three decimal places. In other words, let `q1_norm`, `q2_norm` and `q3_norm` be the quantiles of the `normal` variable and `q1_binom`, `q2_binom` and `q3_binom` the quantiles of the `binomial` variable; what is the difference `(q1_norm - q1_binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
describe = dataframe.describe()
q1_norm = describe.loc['25%','normal']
q1_binom = describe.loc['25%','binomial']
q2_norm = describe.loc['50%','normal']
q2_binom = describe.loc['50%','binomial']
q3_norm = describe.loc['75%','normal']
q3_binom = describe.loc['75%','binomial']
orig_tuple = (q1_norm - q1_binom, q2_norm - q2_binom, q3_norm - q3_binom)
rounded_tuple = tuple(map(lambda x: round(x, 3), orig_tuple))
return rounded_tuple
# Teste
# q1()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
normal = dataframe[['normal']]
normal_mean = normal.mean()
normal_std = normal.std()
n = 1
bool_normal_lt_mean_plus_n_std = normal < (normal_mean + n*normal_std)
bool_normal_lt_mean_minus_n_std = normal < (normal_mean - n*normal_std)
P_normal_lt_mean_plus_n_std = bool_normal_lt_mean_plus_n_std.mean()
P_normal_lt_mean_minus_n_std = bool_normal_lt_mean_minus_n_std.mean()
P_normal_between_range_n_std = P_normal_lt_mean_plus_n_std - P_normal_lt_mean_minus_n_std
return round(P_normal_between_range_n_std.item(),3)
# Teste
# q2()
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$.
###Code
def P_normal_between_range_n_std(normal=dataframe[['normal']], n=1):
normal_mean = normal.mean()
normal_std = normal.std()
bool_normal_lt_mean_plus_n_std = normal < (normal_mean + n*normal_std)
bool_normal_lt_mean_minus_n_std = normal < (normal_mean - n*normal_std)
P_normal_lt_mean_plus_n_std = bool_normal_lt_mean_plus_n_std.mean()
P_normal_lt_mean_minus_n_std = bool_normal_lt_mean_minus_n_std.mean()
P_normal_between_range_n_std = P_normal_lt_mean_plus_n_std - P_normal_lt_mean_minus_n_std
return round(P_normal_between_range_n_std.item(),3)
P_normal_between_range_n_std(n=1) # teórico: 68.2689492%
P_normal_between_range_n_std(n=2) # teórico: 95.4499736%
P_normal_between_range_n_std(n=3) # teórico: 99.7300204%
###Output
_____no_output_____
###Markdown
Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
m_norm = dataframe['normal'].mean()
m_binom = dataframe['binomial'].mean()
v_norm = dataframe['normal'].var()
v_binom = dataframe['binomial'].var()
orig_tuple = (m_binom - m_norm, v_binom - v_norm)
rounded_tuple = tuple(map(lambda x: round(x, 3), orig_tuple))
return rounded_tuple
# Teste
q3()
###Output
_____no_output_____
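###Markdown
A quick numeric check (added sketch) of the point made below about standard deviations versus variances: the gap between the standard deviations is small, but squaring them multiplies that gap by roughly $2\sigma \approx 8$:
###Code
# Differences in standard deviation and in variance between the two samples (sketch).
std_gap = dataframe.binomial.std() - dataframe.normal.std()
var_gap = dataframe.binomial.var() - dataframe.normal.var()
print("std gap:", round(std_gap, 3), "| var gap:", round(var_gap, 3))
###Output
_____no_output_____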
###Markdown
Food for thought: * Did you expect values of this magnitude? * What is the effect of increasing or decreasing $n$ (currently 100) on the distribution of the `binomial` variable? * The means are close. * So are the standard deviations, but they are squared to obtain the variances, so that difference grows.
###Code
df_norm_binom_n50_100_200 = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial_n_50": sct.binom.rvs(50, 0.2, size=10000),
"binomial_n_100": sct.binom.rvs(100, 0.2, size=10000),
"binomial_n_200": sct.binom.rvs(200, 0.2, size=10000)})
df_norm_binom_n50_100_200.describe()
###Output
_____no_output_____
###Markdown
* Increasing $n$ in the binomial makes it more spread out, because it admits more discrete values. * Decreasing $n$ makes it more concentrated, because it admits fewer discrete values.
###Code
df_norm_binom_n50_100_200.hist(bins = 20); # atentar para eixo x e y com escalas distintas
###Output
_____no_output_____
###Markdown
Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
stars.head()
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.describe()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
mean_profile_target0 = stars['mean_profile'].where(stars['target']==0).dropna()
mu = mean_profile_target0.mean()
sigma = mean_profile_target0.std()
false_pulsar_mean_profile_standardized = (mean_profile_target0 - mu) / sigma
q80 = sct.norm.ppf(0.8, loc=0, scale=1)
q90 = sct.norm.ppf(0.9, loc=0, scale=1)
q95 = sct.norm.ppf(0.95, loc=0, scale=1)
p80 = (false_pulsar_mean_profile_standardized < q80).mean()
p90 = (false_pulsar_mean_profile_standardized < q90).mean()
p95 = (false_pulsar_mean_profile_standardized < q95).mean()
orig_tuple = (p80, p90, p95)
rounded_tuple = tuple(map(lambda x: round(x, 3), orig_tuple))
return rounded_tuple
# Teste
q4()
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`?
###Code
mean_profile_target0 = stars['mean_profile'].where(stars['target']==0).dropna()
mu = mean_profile_target0.mean()
sigma = mean_profile_target0.std()
z = (mean_profile_target0 - mu) / sigma
z.hist(bins=20);
print('Media: ', z.mean())
print('Var: ', z.var())
mu = 0 # loc = mean
sigma = 1 # scale = std
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
z.hist(bins=20, density=True, label='false_pulsar_mean_profile_standardized')
#plt.plot(x, sct.norm.pdf(x, mu, sigma), 'k:', lw=5, label='normal teorica, $\mu=0$, $\sigma^2=1$')
plt.plot(x, sct.norm.pdf(x, mu, sigma), 'k:', lw=5, label='normal teorica, media 0, variancia 1')
plt.legend();
###Output
_____no_output_____
###Markdown
The distribution of the variable `false_pulsar_mean_profile_standardized` is very close to a normal distribution.
###Code
false_pulsar_mean_profile_standardized = z.copy()
false_pulsar_mean_profile_standardized.describe()
###Output
_____no_output_____
###Markdown
Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
mean_profile_target0 = stars['mean_profile'].where(stars['target']==0).dropna()
mu = mean_profile_target0.mean()
sigma = mean_profile_target0.std()
false_pulsar_mean_profile_standardized = (mean_profile_target0 - mu) / sigma
describe = false_pulsar_mean_profile_standardized.describe()
q1_pulsar = describe.loc['25%']
q1_norm = sct.norm.ppf(0.25, loc=0, scale=1)
q2_pulsar = describe.loc['50%']
q2_norm = sct.norm.ppf(0.50, loc=0, scale=1)
q3_pulsar = describe.loc['75%']
q3_norm = sct.norm.ppf(0.75, loc=0, scale=1)
orig_tuple = (q1_pulsar - q1_norm, q2_pulsar - q2_norm, q3_pulsar - q3_norm)
rounded_tuple = tuple(map(lambda x: round(x, 3), orig_tuple))
return rounded_tuple
# Teste
# q5()
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
# %matplotlib inline
# from IPython.core.pylabtools import figsize
# figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.head()
# Sua análise da parte 1 começa aqui.
dataframe.describe()
dataframe.normal.plot(kind='hist')
dataframe.binomial.plot(kind='hist')
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
quartis = [np.quantile(dataframe['normal'], i) - np.quantile(dataframe['binomial'], i) for i in [0.25, 0.5, 0.75]]
return tuple(np.round(quartis, 3))
###Output
_____no_output_____
###Markdown
Food for thought: * Did you expect values of this magnitude? * Can you explain how distributions that look so different (discrete and continuous, for example) end up giving these values? **Yes, values of this magnitude were expected, since the samples were generated with similar means and standard deviations.** Question 2: Consider the interval $[\bar{x} - s, \bar{x} + s]$, where $\bar{x}$ is the sample mean and $s$ is the standard deviation. What is the probability of this interval, computed with the empirical cumulative distribution function (empirical CDF) of the `normal` variable? Answer as a single scalar rounded to three decimal places.
###Code
def q2():
z_inf = dataframe['normal'].mean() - dataframe['normal'].std()
z_sup = dataframe['normal'].mean() + dataframe['normal'].std()
ecdf = ECDF(dataframe['normal'])
answer = np.round(ecdf(z_sup) - ecdf(z_inf),3)
return float(answer)
q2()
###Output
_____no_output_____
###Markdown
Food for thought: * Is this value close to the theoretical expectation? * Also try the intervals $[\bar{x} - 2s, \bar{x} + 2s]$ and $[\bar{x} - 3s, \bar{x} + 3s]$. **Yes: since this is a normal distribution, the interval of one standard deviation around the mean covers approximately 68% of the data, and the wider intervals follow the same property of the normal curve, roughly 95% and 99.7% respectively.** Question 3: What is the difference between the means and the variances of the `binomial` and `normal` variables? Answer as a tuple of two elements rounded to three decimal places. In other words, let `m_binom` and `v_binom` be the mean and variance of the `binomial` variable, and `m_norm` and `v_norm` the mean and variance of the `normal` variable. What are the differences `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
diff_mean = dataframe['binomial'].mean() - dataframe['normal'].mean()
diff_var = dataframe['binomial'].var() - dataframe['normal'].var()
return np.round(diff_mean, 3) , np.round(diff_var, 3)
q3()
###Output
_____no_output_____
###Markdown
Food for thought: * Did you expect values of this magnitude? **Yes: the samples were generated with similar means and standard deviations, and by the law of large numbers a sample this large has a mean close to the population value.** * What is the effect of increasing or decreasing $n$ (currently 100) on the distribution of the `binomial` variable? **Changing $n$ changes the binomial's mean ($np$) and variance ($np(1-p)$): increasing $n$ increases both and moves the distribution away from this normal, while decreasing $n$ does the opposite.** Part 2 _Setup_ for part 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.head()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
false_pulsar = stars[stars.target == False] ['mean_profile']
false_pulsar_mean_profile_standardized = (false_pulsar - false_pulsar.mean())/false_pulsar.std()
ecdf = ECDF(false_pulsar_mean_profile_standardized)
def q4():
quantis = [sct.norm.ppf(i, 0, 1) for i in [0.8, 0.9, 0.95]]
return tuple(np.round(ecdf(quantis), 3))
q4()
###Output
_____no_output_____
###Markdown
Food for thought: * Do the values found make sense? **Yes: the empirical probabilities come out very close to 0.80, 0.90 and 0.95, so the standardized variable has quantiles close to those of the reference normal distribution.** * What might this say about the distribution of the variable `false_pulsar_mean_profile_standardized`? **That it is approximately normally distributed.** Question 5: What is the difference between the quantiles Q1, Q2 and Q3 of `false_pulsar_mean_profile_standardized` and the same theoretical quantiles of a normal distribution with mean 0 and variance 1? Answer as a tuple of three elements rounded to three decimal places.
###Code
def q5():
dist = false_pulsar_mean_profile_standardized
return tuple(np.round([np.quantile(dist, i) - sct.norm.ppf(i, 0, 1) for i in [0.25, 0.5, 0.75]], 3))
q5()
###Output
_____no_output_____
###Markdown
Food for thought: * Do the values found make sense? **Yes: as seen above, `false_pulsar_mean_profile_standardized` is approximately normal, so the difference between the empirical and theoretical quantiles is very small.** * What might this say about the distribution of the variable `false_pulsar_mean_profile_standardized`? **It confirms, statistically, that it is approximately normally distributed.** * Fun fact: some hypothesis tests for normality of data use this very approach.
###Code
false_pulsar_mean_profile_standardized.plot(kind='hist')
###Output
_____no_output_____
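###Markdown
Following up on the note above about normality tests (an added illustration, not part of the challenge): a Kolmogorov-Smirnov test of the standardized series against a standard normal, using `scipy.stats.kstest`.
###Code
# KS test of false_pulsar_mean_profile_standardized against N(0, 1) (sketch).
stat, p_value = sct.kstest(false_pulsar_mean_profile_standardized, "norm")
print("KS statistic:", round(stat, 4), "| p-value:", round(p_value, 4))
###Output
_____no_output_____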
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
# %matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise a partir da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.describe()
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
q_norm = dataframe['normal'].quantile((0.25, 0.5, 0.75))
q_binom = dataframe['binomial'].quantile((0.25, 0.5, 0.75))
return tuple((q_norm - q_binom).round(3))
###Output
_____no_output_____
###Markdown
Food for thought: * Did you expect values of this magnitude? Yes. * Can you explain how distributions that look so different (discrete and continuous, for example) end up giving these values? The binomial distribution approaches the normal distribution when there are enough observations (e.g. n > 30). Question 2: Consider the interval $[\bar{x} - s, \bar{x} + s]$, where $\bar{x}$ is the sample mean and $s$ is the standard deviation. What is the probability of this interval, computed with the empirical cumulative distribution function (empirical CDF) of the `normal` variable? Answer as a single scalar rounded to three decimal places.
###Code
def q2(spread=1):
ecdf = ECDF(dataframe['normal'])
normal_std = dataframe['normal'].std()
normal_mean = dataframe['normal'].mean()
upper_p = ecdf(normal_mean+normal_std*spread)
lower_p = ecdf(normal_mean-normal_std*spread)
return float((upper_p-lower_p).round(3))
print("{} probalidade para 1s".format(q2()))
print("{} probalidade para 2s".format(q2(spread=2)))
print("{} probalidade para 3s".format(q2(spread=3)))
###Output
0.684 probability for 1s
0.954 probability for 2s
0.997 probability for 3s
###Markdown
Food for thought: * Is this value close to the theoretical expectation? Yes: we obtained 0.684 and the theoretical value is approximately 0.683. * Also try the intervals $[\bar{x} - 2s, \bar{x} + 2s]$ and $[\bar{x} - 3s, \bar{x} + 3s]$. The values were 0.954 and 0.997 for 2s and 3s respectively. Question 3: What is the difference between the means and the variances of the `binomial` and `normal` variables? Answer as a tuple of two elements rounded to three decimal places. In other words, let `m_binom` and `v_binom` be the mean and variance of the `binomial` variable, and `m_norm` and `v_norm` the mean and variance of the `normal` variable. What are the differences `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
norm_description = np.array([dataframe['normal'].mean(),dataframe['normal'].var()])
binom_description = np.array([dataframe['binomial'].mean(),dataframe['binomial'].var()])
return tuple((binom_description-norm_description).round(3))
###Output
_____no_output_____
###Markdown
Food for thought: * Did you expect values of this magnitude? Yes. * What is the effect of increasing or decreasing $n$ (currently 100) on the distribution of the `binomial` variable? Changing the value of n changes the mean and the variance of the binomial distribution: increasing n increases both. Part 2 _Setup_ for part 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.head()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
false_pulsar_mean_profile = stars[stars['target']==0]
s1 = false_pulsar_mean_profile['mean_profile']
false_pulsar_mean_profile_standardized = (s1 - s1.mean()) / s1.std()
ecdf = ECDF(false_pulsar_mean_profile_standardized)
z_score = sct.norm.ppf([0.8, 0.90, 0.95])
    print("Theoretical quantiles \n{}".format(z_score))
return tuple(ecdf(z_score).round(3))
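# Hypothetical extra check (not required by the question): a Q-Q plot of the
# standardized variable against the standard normal makes the same point
# visually. Assumes plt and sct as imported in the setup cell.
def q4_qqplot():
    s1 = stars[stars['target']==0]['mean_profile']
    standardized = (s1 - s1.mean()) / s1.std()
    sct.probplot(standardized, dist="norm", plot=plt)
    plt.show()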
###Output
_____no_output_____
###Markdown
To reflect:* Do the values found make sense? Yes. These results indicate that the curve behaves like a normal distribution.* What can this say about the distribution of the `false_pulsar_mean_profile_standardized` variable? The distribution of the `false_pulsar_mean_profile_standardized` variable behaves like a normal distribution. Question 5What is the difference between the Q1, Q2 and Q3 quantiles of `false_pulsar_mean_profile_standardized` and the same theoretical quantiles of a normal distribution with mean 0 and variance 1? Answer as a tuple of three elements rounded to three decimal places.
###Code
def q5():
false_pulsar_mean_profile = stars[stars['target']==0]
s1 = false_pulsar_mean_profile['mean_profile']
false_pulsar_mean_profile_standardized = (s1 - s1.mean()) / s1.std()
q_false_pusar = false_pulsar_mean_profile_standardized.quantile((0.25,0.5,0.75))
z_score = sct.norm.ppf([0.25, 0.5, 0.75])
return tuple((q_false_pusar-z_score).round(3))
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
sns.distplot(dataframe['normal'])
sns.distplot(dataframe['binomial'])
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
# Retorne aqui o resultado da questão 1.
normal = dataframe['normal'].quantile([0.25, 0.5, 0.75]).to_list()
binom = dataframe['binomial'].quantile([0.25, 0.5, 0.75]).to_list()
result = [round((i - j), 3) for i, j in zip(normal, binom)]
return tuple(result)
q1()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Retorne aqui o resultado da questão 2.
mean = dataframe['normal'].mean()
std = dataframe['normal'].std()
ecdf = ECDF(dataframe['normal'])
result = ecdf(mean + std) - ecdf(mean - std)
return float(round(result, 3))
q2()
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Retorne aqui o resultado da questão 3.
m_norm = dataframe['normal'].mean()
v_norm = dataframe['normal'].var()
m_binom = dataframe['binomial'].mean()
v_binom = dataframe['binomial'].var()
result = (round(m_binom - m_norm, 3), round(v_binom - v_norm, 3))
return result
q3()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.head()
stars.shape
stars.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 17897 entries, 0 to 17896
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mean_profile 17897 non-null float64
1 sd_profile 17897 non-null float64
2 kurt_profile 17897 non-null float64
3 skew_profile 17897 non-null float64
4 mean_curve 17897 non-null float64
5 sd_curve 17897 non-null float64
6 kurt_curve 17897 non-null float64
7 skew_curve 17897 non-null float64
8 target 17897 non-null bool
dtypes: bool(1), float64(8)
memory usage: 1.1 MB
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
# Retorne aqui o resultado da questão 4.
pulsar_mean = stars.query('target == False')['mean_profile']
false_pulsar_mean_profile_standardized = (pulsar_mean - pulsar_mean.mean()) / pulsar_mean.std()
ppf80 = sct.norm.ppf(0.80, 0, 1)
ppf90 = sct.norm.ppf(0.90, 0, 1)
ppf95 = sct.norm.ppf(0.95, 0, 1)
ecdf = ECDF(false_pulsar_mean_profile_standardized)
return (ecdf(ppf80).round(3), ecdf(ppf90).round(3), ecdf(ppf95).round(3))
q4()
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
# Retorne aqui o resultado da questão 5.
pulsar_mean = stars.query('target == False')['mean_profile']
false_pulsar_mean_profile_standardized = (pulsar_mean - pulsar_mean.mean()) / pulsar_mean.std()
pulsar_quantiles = false_pulsar_mean_profile_standardized.quantile((0.25,0.5,0.75))
ppf25 = sct.norm.ppf(0.25, 0, 1)
ppf50 = sct.norm.ppf(0.50, 0, 1)
ppf75 = sct.norm.ppf(0.75, 0, 1)
result = pulsar_quantiles - (ppf25, ppf50, ppf75)
return tuple(round(result, 3))
q5()
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.head()
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
# Retorne aqui o resultado da questão 1.
q1_diff = dataframe['normal'].quantile(0.25) - dataframe['binomial'].quantile(0.25)
q2_diff = dataframe['normal'].quantile(0.50) - dataframe['binomial'].quantile(0.50)
q3_diff = dataframe['normal'].quantile(0.75) - dataframe['binomial'].quantile(0.75)
return (q1_diff.round(3), q2_diff.round(3), q3_diff.round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Retorne aqui o resultado da questão 2.
ecdf = ECDF(dataframe['normal'])
prob = ecdf(dataframe['normal'].mean()+dataframe['normal'].std()) - ecdf(dataframe['normal'].mean()-dataframe['normal'].std())
return float(prob.round(3))
ecdf = ECDF(dataframe['normal'])
plt.plot(ecdf.x, ecdf.y)
plt.show()
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Retorne aqui o resultado da questão 3.
return ((dataframe['binomial'].mean()-dataframe['normal'].mean()).round(3), (dataframe['binomial'].var()-dataframe['normal'].var()).round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.head()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
# Retorne aqui o resultado da questão 4.
false_pulsar_mean_profile_standardized = (stars.query("target == 0")['mean_profile'] - stars.query("target == 0")['mean_profile'].mean())/stars.query("target == 0")['mean_profile'].std()
ecdf = ECDF(false_pulsar_mean_profile_standardized)
return tuple(ecdf(sct.norm.ppf([0.80, 0.90, 0.95])).round(3))
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
# Retorne aqui o resultado da questão 5.
false_pulsar_mean_profile_standardized = (stars.query("target == 0")['mean_profile'] - stars.query("target == 0")['mean_profile'].mean())/stars.query("target == 0")['mean_profile'].std()
q_diff = false_pulsar_mean_profile_standardized.quantile([0.25,0.5,0.75]) - sct.norm.ppf([0.25, 0.50, 0.75])
return tuple(q_diff.round(3))
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.head(10)
sns.distplot(dataframe['normal']);
sns.distplot(dataframe['binomial']);
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
q1_norm, q2_norm, q3_norm = dataframe['normal'].quantile([0.25, 0.5, 0.75])
q1_binom, q2_binom, q3_binom = dataframe['binomial'].quantile([0.25, 0.5, 0.75])
return (round(q1_norm - q1_binom, 3), round(q2_norm - q2_binom, 3), round(q3_norm - q3_binom, 3))
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
normal = dataframe['normal']
ecdf_normal = ECDF(normal)
# [(x' + s) - (x' - s)] = Intervalo de Atuação
interval = (ecdf_normal(normal.mean() + normal.std()) - ecdf_normal(normal.mean() - normal.std()))
return float(round(interval, 3))
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
binomial, normal = dataframe['binomial'], dataframe['normal']
m_binom, v_binom = binomial.mean(), binomial.var()
m_norm, v_norm = normal.mean(), normal.var()
return ((m_binom - m_norm).round(3), (v_binom - v_norm).round(3))
q3()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
stars.head(10)
stars.info()
stars.describe()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
false_mean_profile = stars[stars['target'] == 0]['mean_profile']
# Normalizando 'false_mean_profile' com z-score (-1 a +1) e aplicando ECDF
false_pulsar_mean_profile_standardized = (false_mean_profile - false_mean_profile.mean()) / false_mean_profile.std()
stand_ecdf = ECDF(false_pulsar_mean_profile_standardized)
# Quantis teóricos para a distribuição normal (média 0 e variância 1)
quantil_80, quantil_90, quantil_95 = sct.norm.ppf([0.8, 0.9, 0.95], loc=0, scale=1)
return round(stand_ecdf(quantil_80), 3), round(stand_ecdf(quantil_90), 3), round(stand_ecdf(quantil_95), 3)
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
false_mean_profile = stars[stars['target'] == 0]['mean_profile']
# Normalizando 'false_mean_profile' com z-score (-1 a +1) e aplicando ECDF
false_pulsar_mean_profile_standardized = (false_mean_profile - false_mean_profile.mean()) / false_mean_profile.std()
# Quantis para a distribuição 'false_pulsar_mean_profile_standardized'
Q1, Q2, Q3 = false_pulsar_mean_profile_standardized.quantile([0.25, 0.5, 0.75])
# Quantis teóricos para a distribuição normal (média 0 e variância 1)
N1, N2, N3 = sct.norm.ppf([0.25, 0.5, 0.75], loc=0, scale=1)
return round(Q1 - N1, 3), round(Q2 - N2, 3), round(Q3 - N3, 3)
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
###Output
_____no_output_____
###Markdown
Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
# Retorne aqui o resultado da questão 1.
pass
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Retorne aqui o resultado da questão 2.
pass
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Retorne aqui o resultado da questão 3.
pass
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
# Retorne aqui o resultado da questão 4.
pass
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
# Retorne aqui o resultado da questão 5.
pass
###Output
_____no_output_____
###Markdown
Desafio 3Neste desafio, iremos praticar nossos conhecimentos sobre distribuições de probabilidade. Para isso,dividiremos este desafio em duas partes: 1. A primeira parte contará com 3 questões sobre um *data set* artificial com dados de uma amostra normal e uma binomial.2. A segunda parte será sobre a análise da distribuição de uma variável do _data set_ [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2), contendo 2 questões.> Obs.: Por favor, não modifique o nome das funções de resposta. _Setup_ geral
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
###Output
_____no_output_____
###Markdown
Parte 1 _Setup_ da parte 1
###Code
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 1 a partir daqui
###Code
# Sua análise da parte 1 começa aqui.
dataframe.head()
dataframe.shape
dataframe.describe()
sns.distplot(dataframe['normal'], label = 'normal', color= 'm', kde=True, hist_kws = {'alpha' : 0.4})
plt.show()
sns.distplot(dataframe['binomial'], label = 'binomial',bins = range(6, 36),color= 'black', kde=True, hist_kws = {'alpha' : 0.5})
plt.show()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valores dessa magnitude?* Você é capaz de explicar como distribuições aparentemente tão diferentes (discreta e contínua, por exemplo) conseguem dar esses valores? Questão 1Qual a diferença entre os quartis (Q1, Q2 e Q3) das variáveis `normal` e `binomial` de `dataframe`? Responda como uma tupla de três elementos arredondados para três casas decimais.Em outra palavras, sejam `q1_norm`, `q2_norm` e `q3_norm` os quantis da variável `normal` e `q1_binom`, `q2_binom` e `q3_binom` os quantis da variável `binom`, qual a diferença `(q1_norm - q1 binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
###Code
def q1():
# Retorne aqui o resultado da questão 1.
q1_dif = dataframe['normal'].quantile(0.25) - dataframe['binomial'].quantile(0.25)
q2_dif = dataframe['normal'].quantile(0.50) - dataframe['binomial'].quantile(0.50)
q3_dif = dataframe['normal'].quantile(0.75) - dataframe['binomial'].quantile(0.75)
return (q1_dif.round(3), q2_dif.round(3), q3_dif.round(3))
q1()
###Output
_____no_output_____
###Markdown
Questão 2Considere o intervalo $[\bar{x} - s, \bar{x} + s]$, onde $\bar{x}$ é a média amostral e $s$ é o desvio padrão. Qual a probabilidade nesse intervalo, calculada pela função de distribuição acumulada empírica (CDF empírica) da variável `normal`? Responda como uma único escalar arredondado para três casas decimais.
###Code
def q2():
# Retorne aqui o resultado da questão 2.
normal = dataframe['normal']
m_norm = normal.mean()
s_norm = normal.std()
ecdf = ECDF(normal)
sum = m_norm + s_norm
sub = m_norm - s_norm
calc_sum = ecdf([sum])
calc_sub = ecdf([sub])
result = abs(calc_sum - calc_sub).round(3).item()
return result
q2()
###Output
_____no_output_____
###Markdown
Para refletir:* Esse valor se aproxima do esperado teórico?* Experimente também para os intervalos $[\bar{x} - 2s, \bar{x} + 2s]$ e $[\bar{x} - 3s, \bar{x} + 3s]$. Questão 3Qual é a diferença entre as médias e as variâncias das variáveis `binomial` e `normal`? Responda como uma tupla de dois elementos arredondados para três casas decimais.Em outras palavras, sejam `m_binom` e `v_binom` a média e a variância da variável `binomial`, e `m_norm` e `v_norm` a média e a variância da variável `normal`. Quais as diferenças `(m_binom - m_norm, v_binom - v_norm)`?
###Code
def q3():
# Retorne aqui o resultado da questão 3.
normal = dataframe['normal']
binomial = dataframe['binomial']
m_binom = binomial.mean()
m_norm = normal.mean()
v_binom = binomial.var()
v_norm = normal.var()
results = tuple(pd.Series([m_binom - m_norm, v_binom - v_norm]).round(3))
return results
q3()
###Output
_____no_output_____
###Markdown
Para refletir:* Você esperava valore dessa magnitude?* Qual o efeito de aumentar ou diminuir $n$ (atualmente 100) na distribuição da variável `binomial`? Parte 2 _Setup_ da parte 2
###Code
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
###Output
_____no_output_____
###Markdown
Inicie sua análise da parte 2 a partir daqui
###Code
# Sua análise da parte 2 começa aqui.
print(stars.shape)
stars.head()
stars.describe()
###Output
_____no_output_____
###Markdown
Questão 4Considerando a variável `mean_profile` de `stars`:1. Filtre apenas os valores de `mean_profile` onde `target == 0` (ou seja, onde a estrela não é um pulsar).2. Padronize a variável `mean_profile` filtrada anteriormente para ter média 0 e variância 1.Chamaremos a variável resultante de `false_pulsar_mean_profile_standardized`.Encontre os quantis teóricos para uma distribuição normal de média 0 e variância 1 para 0.80, 0.90 e 0.95 através da função `norm.ppf()` disponível em `scipy.stats`.Quais as probabilidade associadas a esses quantis utilizando a CDF empírica da variável `false_pulsar_mean_profile_standardized`? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q4():
# Retorne aqui o resultado da questão 4.
target_false = stars[stars.target == 0]
false_pulsar_mean_profile = target_false.mean_profile
mean = target_false.mean_profile.mean()
std = target_false.mean_profile.std()
false_pulsar_mean_profile_standardized = (target_false.mean_profile - mean)/(std)
norm_80 = sct.norm.ppf(0.80, loc=0, scale=1)
norm_90 = sct.norm.ppf(0.90, loc=0, scale=1)
norm_95 = sct.norm.ppf(0.95, loc=0, scale=1)
ecdf = ECDF(false_pulsar_mean_profile_standardized)
result = tuple(ecdf([norm_80, norm_90, norm_95]).round(3))
return result
q4()
###Output
_____no_output_____
###Markdown
Para refletir:* Os valores encontrados fazem sentido?* O que isso pode dizer sobre a distribuição da variável `false_pulsar_mean_profile_standardized`? Questão 5Qual a diferença entre os quantis Q1, Q2 e Q3 de `false_pulsar_mean_profile_standardized` e os mesmos quantis teóricos de uma distribuição normal de média 0 e variância 1? Responda como uma tupla de três elementos arredondados para três casas decimais.
###Code
def q5():
# Retorne aqui o resultado da questão 5.
target_false = stars[stars.target == 0]
false_pulsar_mean_profile = target_false.mean_profile
mean = target_false.mean_profile.mean()
std = target_false.mean_profile.std()
false_pulsar_mean_profile_standardized = (target_false.mean_profile - mean)/(std)
norm_25 = sct.norm.ppf(0.25, loc=0, scale=1)
norm_50 = sct.norm.ppf(0.50, loc=0, scale=1)
norm_75 = sct.norm.ppf(0.75, loc=0, scale=1)
norm_data = np.percentile(false_pulsar_mean_profile_standardized, [25,50,75])
norm_teor = [norm_25, norm_50, norm_75]
calc = norm_data - norm_teor
result = tuple(pd.Series(calc).round(3))
return result
q5()
###Output
_____no_output_____ |
Chapter4.ipynb | ###Markdown
4 Large and Increasing Batch Sizes 4.1 Experimental Work This section provides more insight into the value of larger and increasing batch sizes: a slightly bigger fixed batch size such as 2048 yields better test errors than the schedule proposed by Smith et al. (1), while requiring fewer parameter updates. This section therefore looks into different ways to increase the batch size during training and examines the behavior for different initial batch sizes.(1) Samuel L. Smith, Pieter-Jan Kindermans, and Quoc V. Le. "Don't Decay the Learning Rate, Increase the Batch Size". In: CoRR abs/1711.00489 (2017). arXiv: 1711.00489. url: http://arxiv.org/abs/1711.00489 The test error can be visualized as a graph in the Jupyter Notebook _VisualizationGraph.ipynb_. A short explanation of the supported options:--batch_size initial batch size, default: 128 --lr initial learning rate, default: 0.1--epochs number of epochs, default: 200--model the network that should be used for training, default: WideResNet 16-4 --dataset the data set on which the model should be trained, default: CIFAR-10--optimizer the optimizer that should be used, default: SGD--filename the folder in which the log file and the files for the visualization should be saved--gpu the gpu that should be used for the training, default: 0--mini_batch_size the size of the mini batch used as part of the Ghost Batch Normalization, default: 128--weight_decay the weight decay for the optimizer, default: 0.0005--momentum the momentum coefficient for SGD, default: 0.9--factor the factor of the batch size increase/learning rate decay, default: 5--LRD whether a learning rate decay should be used instead of a batch size increase, default: False--steady whether the batch size and learning rate should stay fixed (no batch size increase or learning rate decay), default: False--doubleEndFactor whether the factor of the BSI should double for the last epochs, default: False--saveState whether the states of the training should be saved to enable the visualization of the loss landscape later, default: False--max the maximal batch size to be reached, default: 50000 (CIFAR-10 and CIFAR-100) Comparison of an increasing batch size of 128-16000 and a steady batch size of 2048 with learning rate decay Both experiments also appear elsewhere: Original BSI 128: Discussion in Chapter 3
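The commands in the cells below drive all experiments through `files/main.py`. As a rough illustration of what a batch size increase (BSI) schedule does (a generic PyTorch-style sketch, not the actual code in `files/main.py`; all names and milestone epochs in it are hypothetical), the `DataLoader` can simply be rebuilt with a larger batch size at the epochs where a learning rate decay would otherwise happen:
```python
# Generic PyTorch-style sketch of a batch size increase (BSI) schedule.
# All names and milestone epochs here are illustrative, not taken from files/main.py.
from torch.utils.data import DataLoader

def make_loader(dataset, batch_size):
    return DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=2)

def bsi_schedule(dataset, initial_bs=128, factor=5, milestones=(60, 120, 160),
                 max_bs=50000, total_epochs=200):
    batch_size, loader = initial_bs, make_loader(dataset, initial_bs)
    for epoch in range(total_epochs):
        if epoch in milestones:
            batch_size = min(batch_size * factor, max_bs)  # grow the batch size ...
            loader = make_loader(dataset, batch_size)      # ... instead of decaying the LR
        yield epoch, batch_size, loader
```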
###Code
!python files/main.py --filename 'smith/original/128_01_BSI'
###Output
_____no_output_____
###Markdown
LRD 2048: see below in "Fixed batch sizes with an individual learning rate and learning rate decay"
###Code
!python files/main.py --batch_size 2048 --lr 0.25 --filename 'baselines/fixedBSindividualLR/2048_025' --LRD True --factor 2 --saveState True
###Output
_____no_output_____
###Markdown
Fixed batch sizes with a steady learning rate of 0.1
###Code
!python files/main.py --batch_size 128 --lr 0.1 --filename 'baselines/fixedBSfixedLR/128' --steady True
!python files/main.py --batch_size 256 --lr 0.1 --filename 'baselines/fixedBSfixedLR/256' --steady True
!python files/main.py --batch_size 512 --lr 0.1 --filename 'baselines/fixedBSfixedLR/512' --steady True
!python files/main.py --batch_size 1024 --lr 0.1 --filename 'baselines/fixedBSfixedLR/1024' --steady True
!python files/main.py --batch_size 2048 --lr 0.1 --filename 'baselines/fixedBSfixedLR/2048' --steady True
!python files/main.py --batch_size 4096 --lr 0.1 --filename 'baselines/fixedBSfixedLR/4096' --steady True
!python files/main.py --batch_size 8192 --lr 0.1 --filename 'baselines/fixedBSfixedLR/8192' --steady True
!python files/main.py --batch_size 16384 --lr 0.1 --filename 'baselines/fixedBSfixedLR/16384' --steady True
###Output
_____no_output_____
###Markdown
Fixed batch sizes with an individual learning rate and learning rate decay
###Code
!python files/main.py --batch_size 128 --lr 0.05 --filename 'baselines/fixedBSindividualLR/128_005' --LRD True --factor 2
!python files/main.py --batch_size 256 --lr 0.09 --filename 'baselines/fixedBSindividualLR/256_009' --LRD True --factor 2
!python files/main.py --batch_size 512 --lr 0.125 --filename 'baselines/fixedBSindividualLR/512_0125' --LRD True --factor 2
!python files/main.py --batch_size 1024 --lr 0.155 --filename 'baselines/fixedBSindividualLR/1024_0155' --LRD True --factor 2
!python files/main.py --batch_size 2048 --lr 0.25 --filename 'baselines/fixedBSindividualLR/2048_025' --LRD True --factor 2 --saveState True
!python files/main.py --batch_size 4096 --lr 0.5 --filename 'baselines/fixedBSindividualLR/4096_05' --LRD True --factor 2
!python files/main.py --batch_size 8192 --lr 0.65 --filename 'baselines/fixedBSindividualLR/8192_065' --LRD True --factor 2
!python files/main.py --batch_size 16384 --lr 1.25 --filename 'baselines/fixedBSindividualLR/16384_125' --LRD True --factor 2
!python files/main.py --batch_size 32768 --lr 2.75 --filename 'baselines/fixedBSindividualLR/32768_275' --LRD True --factor 2
!python files/main.py --batch_size 50000 --lr 3.0 --filename 'baselines/fixedBSindividualLR/50000_30' --LRD True --factor 2 --saveState True
###Output
_____no_output_____
###Markdown
4.1.1 Different initial batch sizes _Note that the LRD schedules for the comparisons can also be taken from directly above._
###Code
!python files/main.py --batch_size 256 --lr 0.09 --filename 'smith/largerBS/256_009_BSI2' --factor 2
!python files/main.py --batch_size 256 --lr 0.09 --filename 'smith/largerBS/256_009_LRD2' --LRD True --factor 2
!python files/main.py --batch_size 512 --lr 0.125 --filename 'smith/largerBS/512_0125_BSI2' --factor 2
!python files/main.py --batch_size 512 --lr 0.125 --filename 'smith/largerBS/512_0125_LRD2' --LRD True --factor 2
!python files/main.py --batch_size 1024 --lr 0.155 --filename 'smith/largerBS/1024_0155_BSI2' --factor 2
!python files/main.py --batch_size 1024 --lr 0.155 --filename 'smith/largerBS/1024_0155_LRD2' --LRD True --factor 2
!python files/main.py --batch_size 2048 --lr 0.25 --filename 'smith/largerBS/2048_025_BSI2' --factor 2 --saveState True
!python files/main.py --batch_size 2048 --lr 0.25 --filename 'smith/largerBS/2048_025_LRD2' --LRD True --factor 2 --saveState True
!python files/main.py --batch_size 4096 --lr 0.5 --filename 'smith/largerBS/4096_05_BSI2' --factor 2 --saveState True
!python files/main.py --batch_size 4096 --lr 0.5 --filename 'smith/largerBS/4096_05_LRD2' --LRD True --factor 2
!python files/main.py --batch_size 8192 --lr 0.65 --filename 'smith/largerBS/8192_065_BSI2' --factor 2 --saveState True --max 50000
!python files/main.py --batch_size 8192 --lr 0.65 --filename 'smith/largerBS/8192_065_LRD2' --LRD True --factor 2
###Output
_____no_output_____
###Markdown
Modified 2048 BSI schedule
###Code
!python files/main.py --batch_size 2048 --lr 0.25 --filename 'smith/largerBS/2048_025_BSI224' --factor 2 --doubleEndFactor True --saveState True
###Output
_____no_output_____
###Markdown
4.1.2 Loss Dropout
###Code
!python files/lossDropout.py --batch_size 128 --lr 0.05 --filename 'lossDropout/128_005_BSI' --saveState True
!python files/lossDropout.py --batch_size 128 --lr 0.05 --filename 'lossDropout/128_005_Baseline' --LRD True --saveState True
###Output
_____no_output_____
###Markdown
Loss Landscape Visualization
###Code
!python plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
--model_file files/trained_nets/lossDropout/128_005_BSI/model_200.t7 \
--mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
!python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
--model_file files/trained_nets/lossDropout/128_005_Baseline/model_200.t7 \
--mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
###Output
_____no_output_____
###Markdown
4.2 Discussion Loss Landscape Visualization 2048 - BSI 2 - 2D Plot
###Code
!python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
--model_file files/trained_nets/smith/largerBS/2048_025_BSI2/model_200.t7 \
--mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
###Output
_____no_output_____
###Markdown
4096 - BSI 2 - 2D Plot
###Code
!python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
--model_file files/trained_nets/smith/largerBS/4096_05_BSI2/model_200.t7 \
--mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
###Output
_____no_output_____
###Markdown
2048 - BSI 2,2,4 - 2D Plot
###Code
!python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
--model_file files/trained_nets/smith/largerBS/2048_025_BSI224/model_200.t7 \
--mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
###Output
_____no_output_____
###Markdown
1D visualizations 2048:
###Code
!python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
--model_file files/trained_nets/smith/largerBS/2048_025_BSI2/model_200.t7 \
--dir_type weights --xnorm filter --xignore biasbn --plot
!python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
--model_file files/trained_nets/smith/largerBS/2048_025_BSI224/model_200.t7 \
--dir_type weights --xnorm filter --xignore biasbn --plot
!python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
--model_file files/trained_nets/baselines/fixedBSindividualLR/2048_025/model_200.t7 \
--dir_type weights --xnorm filter --xignore biasbn --plot
###Output
_____no_output_____
###Markdown
128: (training in chapter 3)
###Code
!python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
--model_file files/trained_nets/smith/original/128_01_BSI/model_200.t7 \
--dir_type weights --xnorm filter --xignore biasbn --plot
!python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
--model_file files/trained_nets/smith/original/128_01_LRD/model_200.t7 \
--dir_type weights --xnorm filter --xignore biasbn --plot
###Output
_____no_output_____
###Markdown
8192:
###Code
!python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
--model_file files/trained_nets/smith/largerBS/8192_065_BSI2/model_200.t7 \
--dir_type weights --xnorm filter --xignore biasbn --plot
!python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
--model_file files/trained_nets/baselines/fixedBSindividualLR/2048_025/model_200.t7 \
--dir_type weights --xnorm filter --xignore biasbn --plot
###Output
_____no_output_____
###Markdown
Chapter 4: Regex
###Code
%config Completer.use_jedi = False
import numpy as np
phone1 = "123-456-7890"
phone2 = "123 456 7890"
not_phone1 = "101 fastai"
import string
string.digits
def check_phone(inp):
valid_chars = string.digits + ' -()'
for char in inp:
if char not in valid_chars: return False
return True
assert check_phone(phone1)
assert check_phone(phone2)
assert not check_phone(not_phone1)
# Attempt 2 without regex
not_phone2 = "1234"
import pytest
with pytest.raises(AssertionError): assert not check_phone(not_phone2)
def check_phone(inp):
nums = string.digits
valid_chars = nums + ' -()'
num_counter = 0
for char in inp:
if char not in valid_chars: return False
if char in nums: num_counter += 1
if num_counter==10: return True
else: return False
assert check_phone(phone1)
assert check_phone(phone2)
assert not check_phone(not_phone1)
assert not check_phone(not_phone2)
###Output
_____no_output_____
###Markdown
Attempt 3 without regexWe also need to extract the digits.
###Code
not_phone3 = "34 50 98 21 32"
with pytest.raises(AssertionError): assert not check_phone(not_phone3)
not_phone4 = "(34)(50)()()982132"
with pytest.raises(AssertionError): assert not check_phone(not_phone4)
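# Hypothetical helper (not from the lecture) for the digit extraction mentioned
# in the markdown above, still without regex:
def extract_digits(inp):
    return "".join(char for char in inp if char in string.digits)

assert extract_digits(phone1) == "1234567890"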
###Output
_____no_output_____
###Markdown
Introducing Regex**Best practice: Be as specific as possible.**Regex is a Domain-Specific Language (DSL): a powerful (but limited) language. Other DSLs: SQL, Markdown, TensorFlow. For a US phone number: \d\d\d-\d\d\d-\d\d\d\dA **metacharacter** is one or more special characters that have a unique meaning and are NOT used as literals in the search expression. \d means any digit. **Metacharacters are the special sauce of regex**. Quantifiers: how many times the preceding expression should match. This uses {} curly braces. Refactoring the pattern above: \d{3}-\d{3}-\d{4}. Inexact quantifiers: 1. ? question mark: 1 or 0 repeats. 2. * star: zero or more repeats. 3. + plus sign: one or more repeats. The best way to learn is through practice; otherwise it's like reading lists of rules. Pros and cons: Pros: 1. Concise and powerful pattern-matching DSL. 2. Supported by many programming languages, including SQL. Cons: 1. Brittle. 2. Hard to write; can get complex to be correct. 3. Hard to read. Revisiting tokenization: how do we make our own tokenizer? Create our own tokens?
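As a quick illustration of the quantifier refactor described above (a small sketch using only Python's built-in `re` module; the test strings mirror the phone examples from earlier in this chapter):
```python
import re

phone_re = re.compile(r"\d{3}-\d{3}-\d{4}")      # same pattern as \d\d\d-\d\d\d-\d\d\d\d
print(bool(phone_re.fullmatch("123-456-7890")))  # True
print(bool(phone_re.fullmatch("123 456 7890")))  # False: spaces instead of dashes
print(bool(phone_re.fullmatch("101 fastai")))    # False
```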
###Code
import re
re_punc = re.compile("([\"\''().,;:/_?!—\-])") # add spaces around punctuation.
re_apos = re.compile(r"n ' t ") # n't
re_bpos = re.compile(r" ' s") # 's
re_mult_space = re.compile(r"  *") # replace multiple (two or more) spaces with just one
def simple_toks(sent):
sent = re_punc.sub(r" \1 ", sent)
sent = re_apos.sub(r" n't ", sent)
sent = re_bpos.sub(r" 's ", sent)
sent = re_mult_space.sub(" ", sent)
return sent.lower().split()
text = "I don't know who Kara's new friend is -- is it 'Mr. Toad'?"
" ".join(simple_toks(text))
text2 = re_punc.sub(r" \1 ", text); text2
text3 = re_apos.sub(r" n't ", text2); text3
text4 = re_bpos.sub(r" 's ", text3); text4
sentences = ['All this happened, more or less.',
'The war parts, anyway, are pretty much true.',
"One guy I knew really was shot for taking a teapot that wasn't his.",
'Another guy I knew really did threaten to have his personal enemies killed by hired gunmen after the war.',
'And so on.',
"I've changed all their names."]
tokens = list(map(simple_toks, sentences))
[np.array(token) for token in tokens]
###Output
_____no_output_____
###Markdown
We need to convert them to integer ids. We also need to know our vocabulary, and have a way to convert between words and ids.
###Code
import collections
PAD = 0
SOS = 1
def toks2ids(sentences):
voc_cnt = collections.Counter(t for sent in sentences for t in sent)
vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True)
vocab.insert(PAD, "<PAD>")
vocab.insert(SOS, "<SOS>")
w2id = {w:i for i, w in enumerate(vocab)}
ids = [[w2id[t] for t in sent] for sent in sentences]
return ids, vocab, w2id, voc_cnt
ids, vocab, w2id, voc_cnt = toks2ids(tokens)
[np.array(id) for id in ids]
np.array(vocab)
###Output
_____no_output_____
###Markdown
What could be a better name for the `vocab` variable above?
###Code
np.array(w2id)
###Output
_____no_output_____
###Markdown
What are the uses of RegEx? 1. Find / search. 2. Find & replace. 3. Cleaning. Don't forget about Python's `str` methods, e.g. `str.find()`.
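For example (an illustrative sketch, not taken from the lecture), the same cleanup can be written with a chain of `str` calls or with one regex substitution:
```python
import re

s = "I   like  regex"
# str methods: each call handles one exact substring
print(s.replace("   ", " ").replace("  ", " "))  # 'I like regex'
# regex: one pattern covers any run of two or more spaces
print(re.sub(r" {2,}", " ", s))                  # 'I like regex'
```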
###Code
str.find?
###Output
_____no_output_____
###Markdown
Regex vs string methods. String: 1. String methods are easier to understand. 2. String methods express the intent more clearly. --- Regex: 1. Regex handles much broader use cases. 2. Regex can be language independent. 3. Regex can be faster at scale. What about unicode?
###Code
message = "😒🎦 🤢🍕"
re_frown = re.compile(r"😒|🤢")
re_frown.sub(r"😊", message)
###Output
_____no_output_____
###Markdown
Chapter 4- Some of the code executed below involves stochastic processing, so its output may differ from the example output shown in the book.
###Code
# 4-1
!pip install transformers==4.5.0 fugashi==1.1.0 ipadic==1.0.0
# 4-2
import torch
from transformers import BertJapaneseTokenizer, BertModel
# 4-3
model_name = 'cl-tohoku/bert-base-japanese-whole-word-masking'
tokenizer = BertJapaneseTokenizer.from_pretrained(model_name)
# 4-4
tokenizer.tokenize('明日は自然言語処理の勉強をしよう。')
# 4-5
tokenizer.tokenize('明日はマシンラーニングの勉強をしよう。')
# 4-6
tokenizer.tokenize('機械学習を中国語にすると机器学习だ。')
# 4-7
input_ids = tokenizer.encode('明日は自然言語処理の勉強をしよう。')
print(input_ids)
# 4-8
tokenizer.convert_ids_to_tokens(input_ids)
# 4-9
text = '明日の天気は晴れだ。'
encoding = tokenizer(
text, max_length=12, padding='max_length', truncation=True
)
print('# encoding:')
print(encoding)
tokens = tokenizer.convert_ids_to_tokens(encoding['input_ids'])
print('# tokens:')
print(tokens)
# 4-10
encoding = tokenizer(
text, max_length=6, padding='max_length', truncation=True
)
tokens = tokenizer.convert_ids_to_tokens(encoding['input_ids'])
print(tokens)
# 4-11
text_list = ['明日の天気は晴れだ。','パソコンが急に動かなくなった。']
tokenizer(
text_list, max_length=10, padding='max_length', truncation=True
)
# 4-12
tokenizer(text_list, padding='longest')
# 4-13
tokenizer(
text_list,
max_length=10,
padding='max_length',
truncation=True,
return_tensors='pt'
)
# 4-14
# Load the model
model_name = 'cl-tohoku/bert-base-japanese-whole-word-masking'
bert = BertModel.from_pretrained(model_name)
# Move BERT to the GPU
bert = bert.cuda()
# 4-15
print(bert.config)
# 4-16
text_list = [
'明日は自然言語処理の勉強をしよう。',
'明日はマシーンラーニングの勉強をしよう。'
]
# Encode the sentences
encoding = tokenizer(
text_list,
max_length=32,
padding='max_length',
truncation=True,
return_tensors='pt'
)
# Move the data to the GPU
encoding = { k: v.cuda() for k, v in encoding.items() }
# Processing with BERT
output = bert(**encoding) # each input is a 2-dimensional torch.Tensor
last_hidden_state = output.last_hidden_state # output of the final layer
# 4-17
output = bert(
input_ids=encoding['input_ids'],
attention_mask=encoding['attention_mask'],
token_type_ids=encoding['token_type_ids']
)
# 4-18
print(last_hidden_state.size()) # size of the tensor
# 4-19
with torch.no_grad():
output = bert(**encoding)
last_hidden_state = output.last_hidden_state
# 4-20
last_hidden_state = last_hidden_state.cpu() # move to the CPU
last_hidden_state = last_hidden_state.numpy() # convert to numpy.ndarray
last_hidden_state = last_hidden_state.tolist() # convert to a list
###Output
_____no_output_____
###Markdown
Classification Exercises: Applied Question 10) This question should be answered using the Weekly data set, which is part of the ISLR package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010. 10 (a).Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# source: https://github.com/dsnair/ISLR/blob/master/data/csv/Weekly.csv
df_weekly = pd.read_csv('datasets/Weekly.csv')
df_weekly.head(2)
sns.pairplot(x_vars=df_weekly.columns[1:-1], y_vars='Direction', data=df_weekly)
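# Hypothetical extra summaries for (a), complementing the pairplot above.
def weekly_numeric_summary():
    return df_weekly.describe()

def weekly_correlations():
    # Drop the non-numeric Direction column before computing correlations;
    # the only strong linear relationship here is between Year and Volume.
    sns.heatmap(df_weekly.drop(columns=['Direction']).corr(), annot=True)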
###Output
_____no_output_____
###Markdown
10 (b).Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
###Code
df_weekly['label'] = df_weekly.Direction.apply(lambda x: 1 if x == 'Up' else 0)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
X = df_weekly[df_weekly.columns[1:-2]]
clf = LogisticRegression(random_state=0).fit(X, df_weekly.label)
y_pred = clf.predict(X)
# 99 % accuracy
accuracy_score(df_weekly.label, y_pred)
import statsmodels.discrete.discrete_model as sm
logit = sm.Logit(df_weekly.label, X)
results = logit.fit()
results.summary()
###Output
Warning: Maximum number of iterations has been exceeded.
Current function value: 0.000000
Iterations: 35
###Markdown
* None of the variables are statistically significant. 10 (c).Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
###Code
from sklearn.metrics import confusion_matrix
tn, fp, fn, tp = confusion_matrix(df_weekly.label, y_pred).ravel()
# Confusion Matrix
con_matrix = pd.DataFrame({'Index Title':['Predicted_Yes','Predicted_No']})
con_matrix.index = con_matrix['Index Title']
del con_matrix['Index Title']
con_matrix['Actual_Yes'] = [tp, fn]
con_matrix['Actual_No'] = [fp, tn]
sns.heatmap(con_matrix, annot=True)
###Output
_____no_output_____
###Markdown
* Only 3 misclassified, all of them False Positives/ Type-I errors. 10 (d).Now fit the logistic regression model using a training data periodfrom 1990 to 2008, with Lag2 as the only predictor. Compute theconfusion matrix and the overall fraction of correct predictionsfor the held out data (that is, the data from 2009 and 2010).
###Code
df_weekly_2008 = df_weekly[df_weekly.Year <= 2008]
X = df_weekly_2008[df_weekly_2008.columns[1:-2]]
clf = LogisticRegression(random_state=0).fit(X, df_weekly_2008.label)
y_pred = clf.predict(X)
accuracy_score(df_weekly_2008.label, y_pred)
tn, fp, fn, tp = confusion_matrix(df_weekly_2008.label, y_pred).ravel()
# Confusion Matrix
con_matrix = pd.DataFrame({'Index Title':['Predicted_Yes','Predicted_No']})
con_matrix.index = con_matrix['Index Title']
del con_matrix['Index Title']
con_matrix['Actual_Yes'] = [tp, fn]
con_matrix['Actual_No'] = [fp, tn]
sns.heatmap(con_matrix, annot=True)
###Output
_____no_output_____
###Markdown
* Only 3 misclassified, all of them False Positives/ Type-I errors. 10 (e).Repeat (d) using LDA.
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
lda.fit(X, df_weekly_2008.label)
# 96 % accuracy
accuracy_score(lda.predict(X), df_weekly_2008.label)
###Output
_____no_output_____
###Markdown
10 (f).Repeat (d) using QDA.
###Code
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
qda = QuadraticDiscriminantAnalysis()
qda.fit(X, df_weekly_2008.label)
# 94 % accuracy
accuracy_score(qda.predict(X), df_weekly_2008.label)
###Output
_____no_output_____
###Markdown
10 (g).Repeat (d) using KNN with K = 1.
###Code
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=1)
neigh.fit(X, df_weekly_2008.label)
# 100 % accuracy
accuracy_score(neigh.predict(X), df_weekly_2008.label)
###Output
_____no_output_____
###Markdown
Chapter 4 Validation strategies
###Code
import keras
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
# Load the IMDB dataset as an example
# I have to overwrite np.load as keras hasn't kept pace with the latest version of numpy
old = np.load
np.load = lambda *a,**k: old(*a,**k,allow_pickle=True)
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=10000)
np.load = old
del(old)
# We can't feed variable-length lists of integers into our network (apparently),
# so one-hot encode each integer as a vector of 0s with a 1 at the index of the integer
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
X_train = vectorize_sequences(train_data)
X_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype("float32")
y_test = np.asarray(test_labels).astype("float32")
def build_model(dimension=10000):
"""
    Create a model with two hidden layers, each with 16 units.
"""
model = keras.models.Sequential()
model.add(keras.layers.Dense(16, activation="relu", input_shape=(dimension,)))
model.add(keras.layers.Dense(16, activation="relu"))
model.add(keras.layers.Dense(1, activation="sigmoid"))
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
return model
###Output
_____no_output_____
###Markdown
Simple hold-out validation
###Code
num_validation_samples = 10000
# Shuffle features and labels together so the X/y pairs stay aligned
permutation = np.random.permutation(len(X_train))
X_train = X_train[permutation]
y_train = y_train[permutation]
X_val = X_train[:num_validation_samples]
partial_X_train = X_train[num_validation_samples:]
y_val = y_train[:num_validation_samples]
partial_y_train = y_train[num_validation_samples:]
model = build_model()
# Train on the training data, evaluate on the validation data
history = model.fit(partial_X_train,
partial_y_train,
epochs=10,
batch_size=512,
validation_data=(X_val, y_val))
# Once the parameters are sorted, train the final model from scratch on all non-test data
model = build_model()
model.fit(X_train, y_train, epochs=10, batch_size=512, verbose=0)
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print(test_accuracy)
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/10
15000/15000 [==============================] - 5s 309us/step - loss: 0.6937 - acc: 0.4958 - val_loss: 0.6937 - val_acc: 0.4946
Epoch 2/10
15000/15000 [==============================] - 2s 139us/step - loss: 0.6842 - acc: 0.5689 - val_loss: 0.6951 - val_acc: 0.4991
Epoch 3/10
15000/15000 [==============================] - 2s 136us/step - loss: 0.6513 - acc: 0.6473 - val_loss: 0.7090 - val_acc: 0.5022
Epoch 4/10
15000/15000 [==============================] - 2s 136us/step - loss: 0.5954 - acc: 0.7117 - val_loss: 0.7322 - val_acc: 0.5017
Epoch 5/10
15000/15000 [==============================] - 2s 135us/step - loss: 0.5418 - acc: 0.7471 - val_loss: 0.7876 - val_acc: 0.5004
Epoch 6/10
15000/15000 [==============================] - 2s 135us/step - loss: 0.4896 - acc: 0.7889 - val_loss: 0.8195 - val_acc: 0.5015
Epoch 7/10
15000/15000 [==============================] - 2s 137us/step - loss: 0.4442 - acc: 0.8165 - val_loss: 0.8298 - val_acc: 0.5021
Epoch 8/10
15000/15000 [==============================] - 2s 136us/step - loss: 0.3940 - acc: 0.8511 - val_loss: 0.9157 - val_acc: 0.4995
Epoch 9/10
15000/15000 [==============================] - 2s 139us/step - loss: 0.3545 - acc: 0.8714 - val_loss: 0.9328 - val_acc: 0.5015
Epoch 10/10
15000/15000 [==============================] - 2s 137us/step - loss: 0.3182 - acc: 0.8879 - val_loss: 0.9558 - val_acc: 0.5018
25000/25000 [==============================] - 3s 127us/step
0.51772
###Markdown
K-fold validation
Split the data into K partitions of equal size: train on K-1 partitions, validate on the remaining one. The final score is the average over the K scores.
###Code
# K-fold
k=4
num_val_samples = len(X_train) // k
num_epochs = 10
validation_scores = []
for i in range(k):
print(f"processing fold {i}")
val_data = X_train[i*num_val_samples: (i+1)*num_val_samples]
val_targets = y_train[i*num_val_samples: (i+1)*num_val_samples]
partial_train_data = np.concatenate(
[X_train[:i*num_val_samples],
X_train[(i+1)*num_val_samples:]
],
axis=0
)
partial_train_targets = np.concatenate(
[y_train[:i*num_val_samples],
y_train[(i+1)*num_val_samples:]
],
axis=0
)
model = build_model()
model.fit(partial_train_data, partial_train_targets,
epochs=num_epochs, batch_size=512, verbose=0)
val_loss, val_acc = model.evaluate(val_data, val_targets, verbose=0)
validation_scores.append(val_acc)
validation_score = np.average(validation_scores)
print(validation_score)
# Once tuned, train on all data and evaluate
model = build_model()
model.fit(X_train, y_train, epochs=num_epochs, batch_size=512)
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print(test_accuracy)
###Output
processing fold 0
processing fold 1
processing fold 2
processing fold 3
0.4996800000095367
Epoch 1/10
25000/25000 [==============================] - 4s 158us/step - loss: 0.6935 - acc: 0.5048
Epoch 2/10
25000/25000 [==============================] - 2s 95us/step - loss: 0.6799 - acc: 0.5808
Epoch 3/10
25000/25000 [==============================] - 2s 97us/step - loss: 0.6431 - acc: 0.6440
Epoch 4/10
25000/25000 [==============================] - 2s 98us/step - loss: 0.5950 - acc: 0.6998
Epoch 5/10
25000/25000 [==============================] - 2s 98us/step - loss: 0.5418 - acc: 0.7460
Epoch 6/10
25000/25000 [==============================] - 2s 97us/step - loss: 0.4933 - acc: 0.7781
Epoch 7/10
25000/25000 [==============================] - 3s 100us/step - loss: 0.4504 - acc: 0.8070
Epoch 8/10
25000/25000 [==============================] - 2s 96us/step - loss: 0.4035 - acc: 0.8344
Epoch 9/10
25000/25000 [==============================] - 2s 99us/step - loss: 0.3615 - acc: 0.8588
Epoch 10/10
25000/25000 [==============================] - 2s 98us/step - loss: 0.3223 - acc: 0.8836
25000/25000 [==============================] - 4s 140us/step
0.51136
###Markdown
Iterated K-fold validation with shuffling
Apparently we aren't coding this one up, but it's K-fold validation run multiple times, shuffling the data every time before splitting it into K folds. That means you train and evaluate P x K models (where P is the number of iterations), which isn't cheap (see the sketch at the end of this cell).

Overfitting and underfitting / Regularisation
It's easy to fit, it's hard to generalise. The simplest way to prevent overfitting is to reduce the size of the model, hence its capacity.
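Circling back to the iterated K-fold idea above, a minimal sketch (my own illustration, reusing `build_model`, `X_train` and `y_train` from the earlier cells) might look like this:

```python
# Sketch of iterated K-fold validation with shuffling (not implemented in the book)
import numpy as np

k = 4
num_iterations = 3                      # "P" in the text
num_val_samples = len(X_train) // k
scores = []

for p in range(num_iterations):
    indices = np.random.permutation(len(X_train))    # reshuffle before every split
    X_shuffled, y_shuffled = X_train[indices], y_train[indices]
    for i in range(k):
        start, stop = i * num_val_samples, (i + 1) * num_val_samples
        X_val_fold, y_val_fold = X_shuffled[start:stop], y_shuffled[start:stop]
        X_train_fold = np.concatenate([X_shuffled[:start], X_shuffled[stop:]])
        y_train_fold = np.concatenate([y_shuffled[:start], y_shuffled[stop:]])
        model = build_model()
        model.fit(X_train_fold, y_train_fold, epochs=10, batch_size=512, verbose=0)
        scores.append(model.evaluate(X_val_fold, y_val_fold, verbose=0)[1])

print(np.mean(scores))   # average over P x K = 12 models
```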
###Code
# Original model
model = keras.models.Sequential()
model.add(keras.layers.Dense(16, activation="relu", input_shape=(10000,)))
model.add(keras.layers.Dense(16, activation="relu"))
model.add(keras.layers.Dense(1, activation="sigmoid"))
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
# Smaller model
small_model = keras.models.Sequential()
small_model.add(keras.layers.Dense(4, activation="relu", input_shape=(10000,)))
small_model.add(keras.layers.Dense(4, activation="relu"))
small_model.add(keras.layers.Dense(1, activation="sigmoid"))
small_model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
# Compare the validation losses
X_val = X_train[:10000]
partial_X_train = X_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
history = model.fit(partial_X_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(X_val, y_val))
small_history = small_model.fit(partial_X_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(X_val, y_val))
for name, record in zip(["model", "small_model"],[history, small_history]):
history_dict = record.history
val_loss_values = history_dict["val_loss"]
epochs = range(1, len(val_loss_values) +1)
plt.plot(epochs, val_loss_values, label=f"{name} validation loss")
plt.title("Validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Let's try a bigger model
# Bigger model
big_model = keras.models.Sequential()
big_model.add(keras.layers.Dense(512, activation="relu", input_shape=(10000,)))
big_model.add(keras.layers.Dense(512, activation="relu"))
big_model.add(keras.layers.Dense(1, activation="sigmoid"))
big_model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
big_history = big_model.fit(partial_X_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(X_val, y_val))
for name, record in zip(["model", "big_model"],[history, big_history]):
history_dict = record.history
val_loss_values = history_dict["val_loss"]
epochs = range(1, len(val_loss_values) +1)
plt.plot(epochs, val_loss_values, label=f"{name} validation loss")
plt.title("Validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
for name, record in zip(["model", "big_model"],[history, big_history]):
history_dict = record.history
loss_values = history_dict["loss"]
epochs = range(1, len(loss_values) +1)
plt.plot(epochs, loss_values, label=f"{name} training loss")
plt.title("Training loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Weight regularisation
By reducing the entropy of the distribution of parameter values (e.g. making them more regular), we force the model to be simpler. We do this by adding some cost to the loss function associated with the weights. E.g.:
- L1 regularisation: cost is proportional to the absolute value of the weights
- L2 regularisation: cost is proportional to the squares of the weights.
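In symbols, with loss $L$, weights $w_i$ and regularization factor $\lambda$ (0.001 in the code below):

$$L_{\mathrm{L1}} = L + \lambda \sum_i |w_i| \qquad\qquad L_{\mathrm{L2}} = L + \lambda \sum_i w_i^2$$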
###Code
# Add regularisers to the movie-review classifier
l2_model = keras.models.Sequential()
# We could also use "l1" or "l1_l2"
l2_model.add(keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation="relu", input_shape=(10000,)))
l2_model.add(keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation="relu"))
l2_model.add(keras.layers.Dense(1, activation="sigmoid"))
l2_model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
l2_history = l2_model.fit(partial_X_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(X_val, y_val))
for name, record in zip(["model", "l2_model"],[history, l2_history]):
history_dict = record.history
val_loss_values = history_dict["val_loss"]
epochs = range(1, len(val_loss_values) +1)
plt.plot(epochs, val_loss_values, label=f"{name} validation loss")
plt.title("Validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Dropout
Randomly set a number of output features of the layer to zero during training (apparently a rate between 0.2 and 0.5 is common). At test time, instead of dropping features, the layer's output values are scaled down by the dropout rate.
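A rough NumPy illustration of the mechanism (names here are illustrative, not Keras internals):

```python
import numpy as np

rate = 0.5                              # fraction of output features to drop
layer_output = np.random.rand(4, 16)    # stand-in for a layer's activations

# Training time: randomly zero out `rate` of the features
mask = (np.random.rand(*layer_output.shape) >= rate).astype(float)
train_output = layer_output * mask

# Test time: keep every unit but scale the outputs down to compensate
# (with rate = 0.5 this matches "scaled down by the dropout rate" above)
test_output = layer_output * (1.0 - rate)
```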
###Code
# Add dropout to the movie-review classifier
dropout_model = keras.models.Sequential()
# Same architecture as the original model, with Dropout(0.5) after each hidden layer
dropout_model.add(keras.layers.Dense(16, activation="relu", input_shape=(10000,)))
dropout_model.add(keras.layers.Dropout(0.5))
dropout_model.add(keras.layers.Dense(16, activation="relu"))
dropout_model.add(keras.layers.Dropout(0.5))
dropout_model.add(keras.layers.Dense(1, activation="sigmoid"))
dropout_model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
dropout_history = dropout_model.fit(partial_X_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(X_val, y_val),
verbose=0)
for name, record in zip(["model", "dropout_model"],[history, dropout_history]):
history_dict = record.history
val_loss_values = history_dict["val_loss"]
epochs = range(1, len(val_loss_values) +1)
plt.plot(epochs, val_loss_values, label=f"{name} validation loss")
plt.title("Validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Chapter 4.1 The four branches of machine learning

Supervised learning
- Given sample data, learn how to map the input data to known targets (annotations).
- Main part: classification and regression
- Sequence generation: given a picture, generate a caption describing it (a series of classification problems)
- Syntax tree prediction: given a sentence, predict its decomposed syntax tree
- Object detection: given a picture, draw a bounding box around particular objects in it (regression + classification)
- Image segmentation: given a picture, mask particular objects at the pixel level

Unsupervised learning
- Dimensionality reduction and clustering

Self-supervised learning
- Autoencoders

Reinforcement learning

Chapter 4.2 Evaluating machine-learning models
- Overfitting to the validation set: information leak // every time we tune the model's hyperparameters based on its validation-set performance, some information about the validation data leaks into the model
- Simple hold-out validation, K-fold cross-validation, and iterated K-fold cross-validation with shuffling (RepeatedStratifiedKFold)

Things to keep in mind
- Data representativeness: e.g. for image data, splitting 80:20 in the original order can leave only the digits 0-7 in the training set, so shuffle first, or pass the target labels to the stratify parameter of train_test_split()
- The arrow of time: for time-series data, all data in the test set must be later in time than the data in the training set
- Redundancy in the data: if a data point appears twice in the dataset, it can end up in both the train set and the test set. Check for duplicates! Apply the GroupKFold class to the cross_validate() function (a toy sketch follows at the end of this cell)

Chapter 4.3 Data preprocessing, feature engineering, and feature learning

Data preprocessing for neural networks
Vectorization (data vectorization), e.g. one-hot encoding
Value normalization (demonstrated in the next code cell)
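The stratify and GroupKFold points above can be exercised with a toy sketch like this (the data here are made-up stand-ins, not from the book):

```python
import numpy as np
from sklearn.model_selection import train_test_split, GroupKFold, cross_validate
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)
groups = np.random.randint(0, 10, size=100)   # e.g. an ID shared by duplicated/related samples

# Preserve the class proportions in both splits
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Keep every sample of a group in a single fold, so duplicates can't leak across folds
scores = cross_validate(LogisticRegression(), X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(scores["test_score"])
```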
###Code
x -= x.mean(axis=0) # zero mean
x /= x.std(axis=0) # unit standard deviation
###Output
_____no_output_____
###Markdown
Handling missing values
In neural networks, as long as 0 is not already a predefined meaningful value, you can enter missing values as 0 and the network will learn to ignore them (it learns that 0 means a missing value).

Feature engineering
- Good features let you solve a problem more elegantly while using fewer resources. For example, using a convolutional neural network to read the hands of a clock is a poor fit.
- Good features let you solve a problem with less data. A deep-learning model's ability to learn features by itself only shines when a lot of training data is available; when there are few samples, the information contained in the features becomes very important.

Chapter 4.4 Overfitting and underfitting

Reducing the network's size
- The simplest way to prevent overfitting is to reduce the number of parameters; this is called the model's capacity.
- Example: the original model vs. a model with smaller capacity
###Code
# data import
from keras.datasets import imdb
import numpy as np
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zeros matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
        results[i, sequence] = 1. # set the positions of these indices in results[i] to 1
return results
# Vectorize the training data
x_train = vectorize_sequences(train_data)
# Vectorize the test data
x_test = vectorize_sequences(test_data)
# Vectorize the labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
from keras import models
from keras import layers
original_model = models.Sequential()
original_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
original_model.add(layers.Dense(16, activation='relu'))
original_model.add(layers.Dense(1, activation='sigmoid'))
original_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
smaller_model = models.Sequential()
smaller_model.add(layers.Dense(6, activation='relu', input_shape=(10000,)))
smaller_model.add(layers.Dense(6, activation='relu'))
smaller_model.add(layers.Dense(1, activation='sigmoid'))
smaller_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
original_hist = original_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
smaller_model_hist = smaller_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
epochs = range(1, 21)
original_val_loss = original_hist.history['val_loss']
smaller_model_val_loss = smaller_model_hist.history['val_loss']
import matplotlib.pyplot as plt
%matplotlib inline
# 'b+' draws blue plus signs
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
# 'bo' draws blue dots
plt.plot(epochs, smaller_model_val_loss, 'bo', label='Smaller model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
- The smaller network starts to overfit later (at a higher number of epochs) than the original network.
###Code
# A network with larger capacity
bigger_model = models.Sequential()
bigger_model.add(layers.Dense(1024, activation='relu', input_shape=(10000,)))
bigger_model.add(layers.Dense(1024, activation='relu'))
bigger_model.add(layers.Dense(1, activation='sigmoid'))
bigger_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
bigger_model_hist = bigger_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
bigger_model_val_loss = bigger_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_val_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
- The bigger model starts to overfit earlier.
###Code
original_train_loss = original_hist.history['loss']
bigger_model_train_loss = bigger_model_hist.history['loss']
plt.plot(epochs, original_train_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_train_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Training loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
- Comparing the loss on the training data: the bigger model converges toward 0 faster (suspected overfitting).

Adding weight regularization
- Weight regularization: putting constraints on the network's complexity by forcing the weights to take small values, which makes the distribution of weight values more regular.
- L1 regularization: a cost proportional to the absolute value of the weights is added.
- L2 regularization: a cost proportional to the square of the weights is added; also called weight decay.
###Code
# Add L2 weight regularization to the movie-review classifier
from keras import regularizers
l2_model = models.Sequential()
l2_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
activation='relu', input_shape=(10000,)))
l2_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
activation='relu'))
l2_model.add(layers.Dense(1, activation='sigmoid'))
l2_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
l2_model_hist = l2_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
l2_model_val_loss = l2_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, l2_model_val_loss, 'bo', label='L2-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
- The model with the added L2 regularization avoids overfitting much better.
###Code
# Other weight regularizers
from keras import regularizers
regularizers.l1(0.001) # L1 regularization
regularizers.l1_l2(l1=0.001, l2=0.001) # L1 and L2 regularization combined
###Output
_____no_output_____
###Markdown
Adding dropout
- During training, randomly exclude (set to 0) some of the layer's output features.
- Dropout rate: the fraction of features that become 0, usually 0.2-0.5.
###Code
# dropout model
dpt_model = models.Sequential()
dpt_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
dpt_model.add(layers.Dropout(0.5))
dpt_model.add(layers.Dense(16, activation='relu'))
dpt_model.add(layers.Dropout(0.5))
dpt_model.add(layers.Dense(1, activation='sigmoid'))
dpt_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
dpt_model_hist = dpt_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
dpt_model_val_loss = dpt_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, dpt_model_val_loss, 'bo', label='Dropout-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Chapter 4: NumPy Basics: Arrays and Vectorized Computation Display the data type of the following array, and cast it to float64 type
###Code
#
arr = np.array([1, 2, 3, 4, 5])
arr.dtype
float_arr = arr.astype(np.float64)
###Output
_____no_output_____
###Markdown
Create a copy of arr[1:3]
###Code
copy = arr[1:3].copy()
copy
###Output
_____no_output_____
###Markdown
Run next cell to create arr
###Code
#
arr = np.empty((8, 4))
for i in range(8):
arr[i] = i
arr
###Output
_____no_output_____
###Markdown
Fancy indexing: return the row 4, row 3, row 0 and row 6 of arr
###Code
arr[[4, 3, 0, 6]]
###Output
_____no_output_____
###Markdown
Run next cell to create arr
###Code
#
arr = np.arange(32).reshape((8, -1))
arr
###Output
_____no_output_____
###Markdown
Fancy indexing: select elements (1,0), (5,3), (7, 1) and (2, 2) from arr and return a 1-d array
###Code
arr[[1,5,7,2],[0,3,1,2]]
###Output
_____no_output_____
###Markdown
Run next cell to create arr
###Code
#
arr = np.random.randn(7) * 5
arr
###Output
_____no_output_____
###Markdown
Elementwisely return the fractional and integral parts of a floating point array arr
###Code
np.modf(arr)
###Output
_____no_output_____
###Markdown
Run next cell to create three arrays
###Code
#
xarr = np.array([1.1, 1.2, 1.3, 1.4, 1.5])
yarr = np.array([2.1, 2.2, 2.3, 2.4, 2.5])
cond = np.array([True, False, True, True, False])
###Output
_____no_output_____
###Markdown
Use a numpy function to generate an array by taking a value from xarr whenever the corresponding value in cond is True, otherwise take the value from yarr
###Code
np.where(cond, xarr, yarr)
###Output
_____no_output_____
###Markdown
Run next cell to create arr
###Code
#
arr = np.random.randn(4, 4)
arr
###Output
_____no_output_____
###Markdown
For a matrix arr of random numbers, use the same numpy function to replace all positive values with 2 and all negative values with -2
###Code
np.where(arr>0, 2, -2)
###Output
_____no_output_____
###Markdown
Now use the same function, set only the positive values to 2 and keep other values of arr
###Code
np.where(arr>0, 2, arr)
###Output
_____no_output_____
###Markdown
Run next cell to create names and ints
###Code
#
names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'])
ints = np.array([3, 3, 3, 2, 2, 1, 1, 4, 4])
###Output
_____no_output_____
###Markdown
Return the unique values in names and ints
###Code
np.unique(names)
np.unique(ints)
###Output
_____no_output_____
###Markdown
Run next cell to create values
###Code
#
values = np.array([6, 0, 0, 3, 2, 5, 6])
###Output
_____no_output_____
###Markdown
Check the membership of values in another array [2, 3, 6] and return the result as a boolean array
###Code
np.in1d(values, [2, 3, 6])
###Output
_____no_output_____
###Markdown
Save arr to a .npy file, and then load it back and assign to arr2
###Code
np.save('some_array.npy', arr)
arr2 = np.load('some_array.npy')
arr2
###Output
_____no_output_____
###Markdown
Save arr and arr2 to a single npz file, then load them back and assign arr2 to arr3
###Code
np.savez('array_archive.npz', arr_save = arr, arr2_save = arr2)
arch = np.load('array_archive.npz')
arr3 = arch['arr2_save']
arr3
###Output
_____no_output_____
###Markdown
Create an ndarray arr by load a txt file from examples/array_ex.txt
###Code
arr = np.loadtxt('examples/array_ex.txt', delimiter = ',')
arr
###Output
_____no_output_____
###Markdown
Now save arr to a txt file 'some_text.txt'
###Code
np.savetxt('some_text.txt', arr)
###Output
_____no_output_____
###Markdown
###Code
import numpy as np
X = 2 * np.random.rand(100, 1) # X is a column vector
print(X, X.shape)
y = 4 + 3 * X + np.random.randn(100, 1) # np.random.randn(100, 1) generates gaussian noise
import matplotlib.pyplot as plt
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
#save_fig("generated_data_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Normal equation (for computing $\theta$ directly)
$$
\theta=\left(X^{T} X\right)^{-1} \cdot\left(X^{T} y\right)
$$
###Code
# before we need to add all 1's to x0 for each instance
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
print(X_b)
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
print(theta_best)
###Output
[[3.90919602]
[3.26085248]]
###Markdown
Let's make predictions using
$$
\hat{y} = \theta^{T} \cdot x = x \cdot \theta
$$
###Code
X_new = np.array([[0], [2]]) # Once theta is found, I only need two points to plot the line
print(X_new)
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
print(X_new_b)
y_predict = X_new_b.dot(theta_best)
print(y_predict)
plt.figure(figsize=(10,7))
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions")
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 2, 0, 15])
#save_fig("linear_model_predictions_plot")
plt.show()
# Now with Scikit-Learn
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
print(lin_reg.intercept_, lin_reg.coef_)
print(lin_reg.predict(X_new))
###Output
[[ 3.90919602]
[10.43090099]]
###Markdown
The LinearRegression class of Scikit-Learn is based on a least-squares routine (scipy.linalg.lstsq); we can call the equivalent NumPy function directly:
###Code
# See how this results matches the previous one
theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)
print(theta_best_svd)
###Output
_____no_output_____
###Markdown
Batch Gradient Descent implementation
###Code
eta = 0.1 # learning rate
n_iterations = 1000
m = 100
theta = np.random.randn(2,1) # random initialization
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
print(theta) # See how the result matches
###Output
[[3.90919602]
[3.26085248]]
###Markdown
Stochastic Gradient Descent
###Code
theta_path_sgd = []
m = len(X_b)
np.random.seed(42)
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2,1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
if epoch == 0 and i < 20: # not shown in the book
y_predict = X_new_b.dot(theta) # not shown
style = "b-" if i > 0 else "r--" # not shown
plt.plot(X_new, y_predict, style) # not shown
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
theta = theta - eta * gradients
theta_path_sgd.append(theta) # not shown
plt.plot(X, y, "b.") # not shown
plt.xlabel("$x_1$", fontsize=18) # not shown
plt.ylabel("$y$", rotation=0, fontsize=18) # not shown
plt.axis([0, 2, 0, 15]) # not shown
# save_fig("sgd_plot") # not shown
plt.show() # not shown
print(theta)
# Let's see how to implement Stochastics Gradient Descent with Scikit-Learn
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42)
sgd_reg.fit(X, y.ravel())
print(sgd_reg.intercept_, sgd_reg.coef_)
###Output
[3.88432476] [3.2153003]
###Markdown
Polynomial Regression
###Code
import numpy as np
import numpy.random as rnd
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
# save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
###Output
_____no_output_____
###Markdown
Learning Curves
###Code
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
plt.legend(loc="upper right", fontsize=14) # not shown in the book
plt.xlabel("Training set size", fontsize=14) # not shown
plt.ylabel("RMSE", fontsize=14) # not shown
###Output
_____no_output_____
###Markdown
Underfitting
###Code
# See the condition of underfitting
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0, 80, 0, 3]) # not shown in the book
# save_fig("underfitting_learning_curves_plot") # not shown
plt.show()
###Output
_____no_output_____
###Markdown
Overfitting
###Code
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression()),
])
plot_learning_curves(polynomial_regression, X, y)
plt.axis([0, 80, 0, 3]) # not shown
#save_fig("learning_curves_plot") # not shown
plt.show() # not shown
###Output
_____no_output_____
###Markdown
A sports team has a record of winning games 55% of the time on days of nice weather. Given that tomorrow's weather has a 70% chance of not raining, what chance does the team have of winning their match?
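Assuming the 55% win rate is conditional on nice weather (and nothing else matters), the answer can also be computed directly:

$$P(\text{no rain} \cap \text{win}) = P(\text{no rain}) \cdot P(\text{win} \mid \text{no rain}) = 0.7 \times 0.55 = 0.385,$$

which matches the simulated estimate below.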
###Code
import numpy as np
NUM_TRIALS = 10000
weather = np.random.choice(['rain', 'no rain'], NUM_TRIALS, p=[.3, .7])
win = np.random.choice(['win', 'lose'], NUM_TRIALS, p=[.55, .45])
trials = np.stack([weather, win], axis=1)
# Tally outcomes where it wasn't raining and the team win the game.
outcomes = np.sum((trials[:, 0] == 'no rain') & (trials[:, 1] == 'win'))
print(f"Probability that it is a nice and and the team will win is {outcomes / NUM_TRIALS}")
###Output
Probability that it is a nice day and the team wins is 0.3807
###Markdown
Chapter 4, Training Models Linear Regression The Normal Equation
###Code
import numpy as np
X = 2 * np.random.rand(100, 1) # sampling from uniform([0, 2])
X[9]
y = 4 + 3*X + np.random.randn(100, 1)
###Output
_____no_output_____
###Markdown
Plotting X, and y
###Code
import matplotlib.pyplot as plt
plt.plot(X, y, 'o')
###Output
_____no_output_____
###Markdown
Now, let's compute the estimated parameters using the normal equation, $\hat{\theta} = (X^{T}X)^{-1}X^Ty$
###Code
X_b = np.c_[np.ones([len(X), 1]), X]
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta_best
###Output
_____no_output_____
###Markdown
Remember that the respective params were 4, 3 - i.e. the noise didn't allow us to recover the actual parameters. Now we can create a function to predict new_values,
###Code
from np_typing import Array
def norm_predict(X_new: Array["N,1"], theta: Array["2,1"]) -> Array["N,1"]:
X_b_new: Array["N,2"] = np.c_[np.ones([len(X_new), 1]), X_new]
y_pred = X_b_new.dot(theta)
return y_pred
X_new = np.array([[0.5], [1.], [1.5]])
X_new.shape
y_pred = norm_predict(X_new, theta_best)
###Output
_____no_output_____
###Markdown
Let's plot this model's predictions
###Code
plt.plot(X_new, y_pred, 'r-')
plt.plot(X, y, '.')
###Output
_____no_output_____
###Markdown
Batch Gradient Descent
The gradient vector of the MSE cost function for the Linear Regression model is:
$$\nabla_{\theta} MSE(\theta) = \frac{2}{N}X^T(X\theta - y)$$
Note that it is called "Batch" gradient descent since it uses the entire sample to compute the gradients at each step. The gradient descent step is:
$$\theta^{(\text{next step})} = \theta - \eta \nabla_{\theta} MSE(\theta)$$
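For reference, the gradient above follows from expanding the MSE (a standard derivation, not specific to this notebook):

$$MSE(\theta) = \frac{1}{N}\,(X\theta - y)^T(X\theta - y) \quad\Longrightarrow\quad \nabla_{\theta} MSE(\theta) = \frac{2}{N}\,X^T(X\theta - y)$$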
###Code
def gradient_descent(X: Array["N,m"], y: Array["N,1"], learning_rate=.1, min_tolerance=0.01, max_its=10000) -> Array["m+1,1"]:
N, m = X.shape
theta = np.zeros([m+1, 1])
X_b = np.c_[np.ones([N, 1]), X]
err = X_b.dot(theta) - y
n_its = 0
while (n_its < max_its) and (np.linalg.norm(err)/len(err) > min_tolerance):
grad = 2/N * X_b.T.dot(err)
theta -= learning_rate*grad
err = X_b.dot(theta) - y
n_its += 1
if n_its % 1000 == 0:
print(f"Running iteration {n_its}, error norm is {np.linalg.norm(err)/len(err)}")
return theta
gradient_descent(X, y)
gradient_descent(X, y, learning_rate=.001, max_its=int(2*10**4))
###Output
Running iteration 1000, error norm is 0.11322708445406253
Running iteration 2000, error norm is 0.10935678490228198
Running iteration 3000, error norm is 0.10732621784932171
Running iteration 4000, error norm is 0.10620211116173117
Running iteration 5000, error norm is 0.10558382134321447
Running iteration 6000, error norm is 0.10524498856273434
Running iteration 7000, error norm is 0.105059681218012
Running iteration 8000, error norm is 0.10495845082328328
Running iteration 9000, error norm is 0.10490318446584915
Running iteration 10000, error norm is 0.10487302220866988
Running iteration 11000, error norm is 0.10485656384936252
Running iteration 12000, error norm is 0.10484758407569462
Running iteration 13000, error norm is 0.10484268493054202
Running iteration 14000, error norm is 0.104840012157009
Running iteration 15000, error norm is 0.10483855402484295
Running iteration 16000, error norm is 0.10483775854765336
Running iteration 17000, error norm is 0.10483732458090939
Running iteration 18000, error norm is 0.1048370878341673
Running iteration 19000, error norm is 0.1048369586792512
Running iteration 20000, error norm is 0.10483688822008047
###Markdown
10 (h). Which of these methods appears to provide the best results on this data?
* The KNN (K=1) model attains the highest accuracy.

10 (i). Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier (one possible loop over K is sketched below).
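A hedged sketch of such an experiment, assuming `df_weekly` still holds the `Year`, lag and `label` columns used above and that the 2009-2010 rows form the held-out set (the column slice mirrors the earlier cells):

```python
# Sketch only: evaluate KNN for several K on the 2009-2010 held-out Weekly data
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

train = df_weekly[df_weekly.Year <= 2008]
heldout = df_weekly[df_weekly.Year > 2008]
features = df_weekly.columns[1:-2]          # same predictor slice as the earlier cells

for k in (1, 3, 5, 10, 20):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train[features], train.label)
    acc = accuracy_score(heldout.label, knn.predict(heldout[features]))
    print(k, round(acc, 3))
```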
###Code
from sklearn.preprocessing import PolynomialFeatures
X = pd.concat([X, pd.DataFrame(PolynomialFeatures().fit_transform(X))], axis=1)
neigh = KNeighborsClassifier(n_neighbors=1)
neigh.fit(X, df_weekly_2008.label)
# 100 % accuracy
accuracy_score(neigh.predict(X), df_weekly_2008.label)
###Output
_____no_output_____
###Markdown
Problem 11). In this problem, you will develop a model to predict whether a given car gets high or low gas mileage based on the Auto data set. 11 (a). Create a binary variable, mpg01, that contains a 1 if mpg contains a value above its median, and a 0 if mpg contains a value below its median. You can compute the median using the median() function. Note you may find it helpful to use the data.frame() function to create a single data set containing both mpg01 and the other Auto variables.
###Code
df_auto = pd.read_csv('datasets/Auto.data', delim_whitespace=True)
df_auto.replace('?',0, inplace=True)
df_auto.head(2)
df_auto['mpg01'] = df_auto.mpg.apply(lambda x: 1 if x>=df_auto.mpg.median() else 0)
###Output
_____no_output_____
###Markdown
11 (b).Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the otherfeatures seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
###Code
df_auto.drop(columns='mpg', inplace=True)
df_auto.head(2)
df_auto.groupby(['mpg01']).mean()
sns.boxplot(x='mpg01', y='cylinders', data=df_auto)
sns.boxplot(x='mpg01', y='displacement', data=df_auto)
sns.boxplot(x='mpg01', y='weight', data=df_auto)
###Output
_____no_output_____
###Markdown
11 (c).Split the data into a training set and a test set.
###Code
from sklearn.model_selection import train_test_split
X, y = df_auto[df_auto.columns[:-2]], df_auto.mpg01
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
11 (d). Perform LDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
y_pred = lda.predict(X_test)
# 11 % misclassification
1-accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
11 (e). Perform QDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
###Code
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
y_pred = qda.predict(X_test)
# 10.6 % misclassification
1-accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
11 (f). Perform logistic regression on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
# 12 % misclassification
1-accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
11 (g). Perform KNN on the training data, with several values of K, in order to predict mpg01. Use only the variables that seemed most associated with mpg01 in (b). What test errors do you obtain? Which value of K seems to perform the best on this data set?
###Code
from sklearn.neighbors import KNeighborsClassifier
knn1 = KNeighborsClassifier()
knn1.fit(X_train, y_train)
y_pred = knn1.predict(X_test)
# 12 % misclassification
1-accuracy_score(y_test, y_pred)
ks = [i for i in range(1, 11)]
error_rates = []
for k in ks:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
error_rates.append(1-accuracy_score(y_test, y_pred))
error_rates
###Output
_____no_output_____
###Markdown
* K=1 provides the lowest misclassification rate. Problem 13). Using the Boston data set, fit classification models in order to predict whether a given suburb has a crime rate above or below the median. Explore logistic regression, LDA, and KNN models using various subsets of the predictors. Describe your findings.
###Code
from sklearn.datasets import load_boston
bos = load_boston()
X, y = load_boston(return_X_y=True)
df_bos = pd.DataFrame(X, columns=bos.feature_names)
df_bos['y'] = y
df_bos['label'] = df_bos.y.apply(lambda x: 1 if x >= df_bos.y.median() else 0)
df_bos.label.value_counts()
X = df_bos[df_bos.columns[:-2]]
y = df_bos.label
knn = KNeighborsClassifier()
knn.fit(X, y)
y_pred = knn.predict(X)
# 12 % misclassification
1-accuracy_score(y, y_pred)
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
y_pred = lda.predict(X)
# 14 % misclassification
1-accuracy_score(y, y_pred)
###Output
_____no_output_____
###Markdown
Chapter 4 - The code executed below may involve stochastic processing, so its output may differ from the example output shown in the book.
###Code
# 4-1
!pip install transformers==4.5.0 fugashi==1.1.0 ipadic==1.0.0
# 4-2
import torch
from transformers import BertJapaneseTokenizer, BertModel
# 4-3
model_name = 'cl-tohoku/bert-base-japanese-whole-word-masking'
tokenizer = BertJapaneseTokenizer.from_pretrained(model_name)
# 4-4
tokenizer.tokenize('明日は自然言語処理の勉強をしよう。')
# 4-5
tokenizer.tokenize('明日はマシンラーニングの勉強をしよう。')
# 4-6
tokenizer.tokenize('機械学習を中国語にすると机器学习だ。')
# 4-7
input_ids = tokenizer.encode('明日は自然言語処理の勉強をしよう。')
print(input_ids)
# 4-8
tokenizer.convert_ids_to_tokens(input_ids)
# 4-9
text = '明日の天気は晴れだ。'
encoding = tokenizer(
text, max_length=12, padding='max_length', truncation=True
)
print('# encoding:')
print(encoding)
tokens = tokenizer.convert_ids_to_tokens(encoding['input_ids'])
print('# tokens:')
print(tokens)
# 4-10
encoding = tokenizer(
text, max_length=6, padding='max_length', truncation=True
)
tokens = tokenizer.convert_ids_to_tokens(encoding['input_ids'])
print(tokens)
# 4-11
text_list = ['明日の天気は晴れだ。','パソコンが急に動かなくなった。']
tokenizer(
text_list, max_length=10, padding='max_length', truncation=True
)
# 4-12
tokenizer(text_list, padding='longest')
# 4-13
tokenizer(
text_list,
max_length=10,
padding='max_length',
truncation=True,
return_tensors='pt'
)
# 4-14
# Load the model
model_name = 'cl-tohoku/bert-base-japanese-whole-word-masking'
bert = BertModel.from_pretrained(model_name)
# Move BERT to the GPU
bert = bert.cuda()
# 4-15
print(bert.config)
# 4-16
text_list = [
'明日は自然言語処理の勉強をしよう。',
'明日はマシーンラーニングの勉強をしよう。'
]
# Encode the sentences
encoding = tokenizer(
text_list,
max_length=32,
padding='max_length',
truncation=True,
return_tensors='pt'
)
# Move the data to the GPU
encoding = { k: v.cuda() for k, v in encoding.items() }
# Run the text through BERT
output = bert(**encoding) # each input is a 2-D torch.Tensor
last_hidden_state = output.last_hidden_state # output of the final layer
# 4-17
output = bert(
input_ids=encoding['input_ids'],
attention_mask=encoding['attention_mask'],
token_type_ids=encoding['token_type_ids']
)
# 4-18
print(last_hidden_state.size()) # tensor size
# 4-19
with torch.no_grad():
output = bert(**encoding)
last_hidden_state = output.last_hidden_state
# 4-20
last_hidden_state = last_hidden_state.cpu() # move to the CPU
last_hidden_state = last_hidden_state.numpy() # convert to a numpy.ndarray
last_hidden_state = last_hidden_state.tolist() # convert to a list
###Output
_____no_output_____
###Markdown
Chapter 4 Linear Algebra Vectors
###Code
height_weight_age = [70,  # inches,
                     170, # pounds,
                     40]  # years
grades = [95, # exam1,
80, # exam2,
75, # exam3,
62] # exam4
def vector_add(v, w):
'''adds corresponding elements'''
return [v_i + w_i
for v_i, w_i in zip(v, w)]
def vector_substract(v, w):
"""substracts corresponding elements"""
return [v_i - w_i
for v_i, w_i in zip(v, w)]
def vector_sum(vectors):
"""sums all corresponding elements"""
result = vectors[0] # start with the first vector
for vector in vectors[1: ]: # then loop over the others
result = vector_add(result, vector) # and add them to the result
return result
from functools import reduce
def vector_sum2(vectors):
    return reduce(vector_add, vectors)
def scalar_multiply(c, v):
"""c is a number and v is a vector"""
return [c * v_i for v_i in v]
def vector_mean(vectors):
"""compute the vector whose ith element is the mean
of the ith elements of the input vectors"""
n = len(vectors)
return scalar_multiply(1/n, vector_sum(vectors))
def dot(v, w):
"""v_1 * w_1 + v_2 * w_2 + ... + v_n * w_n"""
return sum(v_i * w_i
for v_i, w_i in zip(v, w))
def sum_of_squares(v):
"""v_1 * v_1 + v_2 * v_2 + v_3 * v_3 + ... + v_n * v_n"""
return dot(v, v)
import math
def magnitude(v):
return math.sqrt(sum_of_squares(v))
def squared_distance(v, w):
"""(v_1 - w_1)^2 + (v_2 - w_2)^2 + ... + (v_n - w_n)^2"""
    return sum_of_squares(vector_substract(v, w))
def distance1(v, w):
return math.sqrt(squared_distance(v, w))
def distance2(v, w):
    return magnitude(vector_substract(v, w))
###Output
_____no_output_____
###Markdown
Matrices
###Code
A = [[1, 2, 3], # A has 2 rows and 3 columns
[4, 5, 6]]
B = [[1, 2], # B has 3 rows and 2 columns
[3, 4],
[5, 6]]
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
def get_row(A, i):
return A[i]
def get_column(A, j):
return [A_i[j] for A_i in A]
def make_matrix(num_rows, num_cols, entry_fn):
"""returns a num_rows x num_cols matrix
whose (i, j)th entry is generated by function entry_fn(i, j)"""
return [[entry_fn(i, j)
for j in range(num_cols)]
for i in range(num_rows)]
def is_diagonal(i, j):
return 1 if i == j else 0
identity_matrix = make_matrix(5, 5, is_diagonal)
identity_matrix
###Output
_____no_output_____
###Markdown
Import required packages
###Code
#
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import sklearn
###Output
_____no_output_____
###Markdown
Linear regression Model: $\hat{y} = h_{\theta}(X) = X \bullet \theta$ Run the next cell to generate data for linear regression.
###Code
#
X = 2 * np.random.randn(100, 1)
y = 4 + 3 * X + np.random.rand(100, 1)
###Output
_____no_output_____
###Markdown
First visualize X and y
###Code
plt.figure(figsize=(8, 8))
plt.scatter(X, y)
plt.xlabel('X')
plt.ylabel('y')
###Output
_____no_output_____
###Markdown
Add X0 to X and return X_b; then compute $\theta$ by the normal equation
###Code
from numpy.linalg import inv
X_b = np.c_[np.ones((100, 1)), X]
theta = inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta
###Output
_____no_output_____
###Markdown
Now plot the line of the fitted model with the scatter plot of the training data. Set limit at [-4, 4]
###Code
X_end = np.array([[1, -4], [1, 4]])
y_pred_end = X_end.dot(theta)
plt.figure(figsize=(8, 6))
plt.scatter(X, y)
plt.plot(X_end[:, -1], y_pred_end, 'r-')
plt.xlabel('X')
plt.ylabel('y')
plt.xlim([-4, 4])
###Output
_____no_output_____
###Markdown
sklearn.linear_model.LinearRegression solves the same least-squares problem internally. Now fit the model with LinearRegression and check the fitted coefficients, then test prediction.
###Code
from sklearn.linear_model import LinearRegression
reg_lr = LinearRegression()
reg_lr.fit(X, y)
reg_lr.coef_
reg_lr.intercept_
reg_lr.predict(X[:10, :])
###Output
_____no_output_____
###Markdown
SGDRegressor uses gradient descent to optimize theta instead of a closed-form solution. Now solve the linear regression with SGDRegressor using no penalty and max_iter=50.
###Code
from sklearn.linear_model import SGDRegressor
reg_sgd = SGDRegressor(penalty=None, max_iter=50)
reg_sgd.fit(X, y)
reg_sgd.coef_
reg_sgd.intercept_
###Output
_____no_output_____
###Markdown
Polynomial Regression Run next cell to generate some data
###Code
#
n = 100
X = 6 * np.random.rand(n, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(n, 1) # y = 0.5X^2 + X + 2
y = y.ravel()
###Output
_____no_output_____
###Markdown
Now use PolynomialFeatures to generate polynomial features at degree of 2.
###Code
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
###Output
_____no_output_____
###Markdown
Now fit linear regression.
###Code
reg_lr = LinearRegression()
reg_lr.fit(X_poly, y)
###Output
_____no_output_____
###Markdown
Now visualize the traing data and the fitted model.
###Code
plt.figure(figsize=(8, 6))
plt.scatter(X, y)
y_pred = reg_lr.predict(X_poly)
idx_sort = np.argsort(X.ravel())
X_sort = X[idx_sort]
y_pred_sort = y_pred[idx_sort]
plt.plot(X_sort, y_pred_sort, color='r')
plt.xlabel('X')
plt.ylabel('y')
###Output
_____no_output_____
###Markdown
Ridge regression Use Ridge to fit the linear regression model for X_poly and y. Use alpha=1 and the closed-form solution.
###Code
from sklearn.linear_model import Ridge
reg_ridge = Ridge(alpha=1, solver='cholesky')
reg_ridge.fit(X_poly, y)
###Output
_____no_output_____
###Markdown
Predict by the ridge model at X_poly[0]
###Code
reg_ridge.predict(X_poly[0:1])
###Output
_____no_output_____
###Markdown
Use SGDRegressor to fit the model again with L2 penalty at alpha=0.1
###Code
reg_sgd_l2 = SGDRegressor(penalty='l2', alpha=0.1)
reg_sgd_l2.fit(X_poly, y)
###Output
/home/chris/ml/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDRegressor'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
"and default tol will be 1e-3." % type(self), FutureWarning)
###Markdown
Predict at X_poly[0]
###Code
reg_sgd_l2.predict(X_poly[0:1])
###Output
_____no_output_____
###Markdown
Lasso Regression Now use Lasso to fit X_poly and y at alph=0.1. Then predict at X_poly[0]
###Code
from sklearn.linear_model import Lasso
reg_lasso = Lasso(alpha=0.1)
reg_lasso.fit(X_poly, y)
reg_lasso.predict(X_poly[0:1])
###Output
_____no_output_____
###Markdown
Early Stopping Run next cell to generate synthetic data.
###Code
#
n = 100
X = 6*np.random.rand(n, 1) - 3
y = 0.5*X**2 + X + 2 + np.random.randn(n, 1)
y = y.ravel()
###Output
_____no_output_____
###Markdown
Visualize X and y
###Code
plt.scatter(X, y)
###Output
_____no_output_____
###Markdown
Split X and y to X_train, X_val, y_train, y_val at a 80:20 ratio.
###Code
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
###Output
_____no_output_____
###Markdown
Now define a pipeline that:1. Convert X to X_poly = [X, X^2, X^3, ..., X^90]2. Normalize X_poly
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
line = Pipeline([('poly', PolynomialFeatures(degree=90, include_bias=False)), \
('normalize', StandardScaler())])
###Output
_____no_output_____
###Markdown
Convert both X_train and X_test by the defined pipeline.
###Code
X_train_poly = line.fit_transform(X_train)
X_val_poly = line.transform(X_val)
###Output
_____no_output_____
###Markdown
Build a linear regression model using stochastic gradient descent, at constant learning rate of 0.0005, and no penalty term, and only one epoch. We would fit this model iteratively, so keep it warm
###Code
reg_sgd = SGDRegressor(penalty=None, learning_rate='constant', eta0=0.0005, max_iter=1, warm_start=True)
###Output
_____no_output_____
###Markdown
Fit the model for 1000 epochs. Record the validation error (MSE) after each epoch. Find the best epoch and the best model.
###Code
from sklearn.metrics import mean_squared_error
from copy import deepcopy
best_epoch = None
best_model = None
best_MSE = np.inf
val_mse_list = []
n_epoch = 1000
for idx_epoch in range(n_epoch):
reg_sgd.fit(X_train_poly, y_train)
y_pred = reg_sgd.predict(X_val_poly)
mse = mean_squared_error(y_val, y_pred)
val_mse_list.append(mse)
if mse < best_MSE:
best_MSE = mse
        best_model = deepcopy(reg_sgd)  # deepcopy keeps the fitted coefficients; clone() would reset them
best_epoch = idx_epoch
###Output
_____no_output_____
###Markdown
Plot epoch vs. validation error. Verify the best epoch.
###Code
fig, axes = plt.subplots(1, 1, figsize=(10, 6))
axes.plot(range(n_epoch), val_mse_list)
axes.set_xlabel('Epoch')
axes.set_ylabel('Validation MSE')
best_epoch
###Output
_____no_output_____
###Markdown
Logistic Regression Load Iris data from sklearn and denote the result as iris.
###Code
from sklearn.datasets import load_iris
iris = load_iris()
###Output
_____no_output_____
###Markdown
Run next cell to define X and y
###Code
#
X = iris["data"][:, 3:] # petal width
y = (iris["target"] == 2).astype(int) # 1 if Virginia, else 0
###Output
_____no_output_____
###Markdown
Now fit a Logistic regression model by default settings.
###Code
from sklearn.linear_model import LogisticRegression
clf_log = LogisticRegression()
clf_log.fit(X, y)
###Output
_____no_output_____
###Markdown
Now plot the model's estimated probability of Virginia with petal widths varying from 0 to 3 cm.
###Code
X_test = np.linspace(0, 3, num=100).reshape((-1, 1))
pro_X_test = clf_log.predict_proba(X_test)
###Output
_____no_output_____
###Markdown
Plot a figure showing: 1. Petal width vs. probability of Virginia 2. Petal width vs. probability of non-Virginia
###Code
plt.figure(figsize=(10, 4))
plt.plot(X_test, pro_X_test[:, 1], 'r-', label='Probability of Virginia')
plt.plot(X_test, pro_X_test[:, 0], 'b-', label='Probability of Non-Virginia')
plt.legend()
plt.xlabel('Petal width')
plt.ylabel('Probability')
###Output
_____no_output_____
###Markdown
Run next cell to generate data of a multi-class problem.
###Code
#
X =iris['data'][:, (2, 3)]
y = iris['target'] # 3 classes
###Output
_____no_output_____
###Markdown
Fit a softmax model for this multiclass classification problem
###Code
clf_softmax = LogisticRegression(solver='lbfgs', multi_class='multinomial')
clf_softmax.fit(X, y)
###Output
_____no_output_____
###Markdown
Predict at [[5, 2]]. Also compute the probability of each class.
###Code
clf_softmax.predict([[5, 2]])
clf_softmax.predict_proba([[5, 2]])
###Output
_____no_output_____
###Markdown
Chapter 4

1. Python uses the object model abstraction for data storage. Any construct that contains any type of value is an object. All Python objects have the following three characteristics: an identity, a type, and a value.
 * IDENTITY: any object's identifier can be obtained using the id() built-in function (BIF).
 * TYPE: you can use the type() BIF, type(obj), to reveal the type of a Python object.
 * VALUE: the data item that is represented by an object.
Note that all three are assigned on object creation and are read-only, with one exception: the value.
 * The type of all type objects is type. The type type object is also the mother of all types and is the default metaclass for all standard Python classes.
 * The type of None is NoneType. It does not have any operators or BIFs. None has no (useful) attributes and always evaluates to having a Boolean False value.

2. False values and True values
Objects take a False value when they are empty, any numeric representation of zero, or the Null object None. The following are defined as having false values in Python:
 * None
 * False (Boolean)
 * Any numeric zero:
   * 0 (integer)
   * 0.0 (float)
   * 0L (long integer)
   * 0.0+0.0j (complex)
 * "" (empty string)
 * [] (empty list)
 * () (empty tuple)
 * {} (empty dictionary)
Any value for an object other than those above is considered to have a true value, i.e., non-empty, non-zero, etc. User-created class instances have a false value when their nonzero (__nonzero__()) or length (__len__()) special methods, if defined, return a zero value.

3. Internal objects
 * Code object: usually obtained as the return value from calling the compile() BIF. Code objects themselves do not contain any information regarding their execution environment.
 * Frame object: these are objects representing execution stack frames in Python. Frame objects contain all the information the Python interpreter needs to know during a runtime execution environment.
 * Traceback object: the traceback object is just a data item that holds the stack trace information for an exception and is created when an exception occurs.
 * Slice object: slice objects are created using the Python extended slice syntax. These various types of indexing include stride indexing, multi-dimensional indexing, and indexing using the Ellipsis type.
   * The syntax for multi-dimensional indexing is sequence[start1 : end1, start2 : end2], or, using the ellipsis, sequence[..., start1 : end1]. Slice objects can also be generated by the slice() BIF.
   * Stride indexing for sequence types allows for a third slice element that allows for "step"-like access with a syntax of sequence[starting_index : ending_index : stride].
   * Ellipsis objects are used in extended slice notations as demonstrated above. These objects are used to represent the actual ellipses in the slice syntax (...). Like the Null object None, ellipsis objects have a single name, Ellipsis, and have a Boolean True value at all times.

4. Value comparison
 * Comparison operators are used to determine equality of two data values between members of the same type. Note that the comparisons performed are those that are appropriate for each data type.
 * Multiple comparisons can be made on the same line, evaluated in left-to-right order: e.g. 3<4<5 is the same as (3<4) and (4<5).
 * We would like to note here that comparisons are strictly between object values, meaning that the comparisons are between the data values and not the actual data objects themselves.
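A few quick Python 3 illustrations of the identity/type machinery and the extended slice syntax described above (my own examples, not from the book):

```python
# Identity, type and value
x = [1, 2, 3]
print(id(x), type(x), x)            # identity, type, value

# The type of all type objects is type
print(type(type(x)) is type)        # True

# Extended (stride) slicing: sequence[start:stop:stride]
seq = list(range(10))
print(seq[1:8:2])                   # [1, 3, 5, 7]

# The slice() BIF builds the same slice object explicitly
s = slice(1, 8, 2)
print(seq[s])                       # [1, 3, 5, 7]

# Ellipsis is a singleton with a Boolean True value
print(Ellipsis, bool(Ellipsis))     # Ellipsis True
```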
###Code
3<4<5
###Output
_____no_output_____
###Markdown
4. Object comparison
 * Python provides the is and is not operators to test whether a pair of variables refer to the same object; a is b is equivalent to id(a) == id(b).
 * Python caches or interns only simple integers that it believes will be used frequently. At the time of this writing, Python interns integers in range(-5, 257), but this is subject to change, so do not code your application to expect it.
 * This means that without an extra reference, interned strings are no longer immortal and are subject to garbage collection like everything else.
 * The not operator has the highest precedence of the Boolean operators and is immediately one level below all the comparison operators. The and and or operators follow, respectively.

5. Standard type built-in functions
 * cmp(obj1, obj2): compares obj1 and obj2 and returns an integer i where i < 0 if obj1 < obj2, i > 0 if obj1 > obj2, and i == 0 if obj1 == obj2
 * repr(obj): returns an evaluatable string representation of obj
 * str(obj): returns a printable string representation of obj
 * type(obj): determines the type of obj and returns a type object

6. str(), repr(), and the backquote operator: re-create an object through evaluation, or obtain a human-readable view of the contents of objects, data values, and object types.
 * str() ("string") has the job of delivering a "printable" string representation of an object, which may not necessarily be acceptable to eval(), but will look nice in a print statement.
 * repr() ("representation") and the backquote operator (Python 2 only) do exactly the same thing: they deliver the "official" string representation of an object, one that can be evaluated as a valid Python expression (using the eval() BIF).
 * The executive summary is that repr() is Python-friendly while str() produces human-friendly output.
 * There is a caveat: while most return values from repr() can be evaluated, not all can, e.g. eval(repr(type(type))).
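A small Python 3 illustration of the str()/repr() distinction (using repr() in place of the legacy backquote operator):

```python
s = "What's up?"
print(str(s))               # What's up?        (human-friendly)
print(repr(s))              # "What's up?"      (evaluatable representation)
print(eval(repr(s)) == s)   # True: repr() output round-trips through eval()

# ...but not every repr() result can be evaluated:
print(repr(type(type)))     # <class 'type'>
# eval(repr(type(type)))    # would raise SyntaxError
```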
###Code
eval(repr(type(type)))  # raises SyntaxError: the string "<class 'type'>" is not a valid Python expression
###Output
_____no_output_____
###Markdown
7. type() and isinstance() * type() returns the type for any Python object, not just the standard types * isinstance(): type() returns the type for any Python object, not just the standard types. This Boolean function takes an object and one or more type objects and returns True if the object in question is an instance of one of the type objects. if isinstance(num, int)...
###Code
x=2
if isinstance(x,(int,float,complex)):
print('a number of type',type(x).__name__)
###Output
a number of type int
DataProcess/db_creation_yfinance.ipynb | ###Markdown
Stock Price
###Code
# stock price
# pull historical data for a symbol
def get_history_stock_price(symbol, max_num=1000):
# get adjusted daily time series value
stock = yf.Ticker(symbol)
df = stock.history(period="max", auto_adjust=True)[-(max_num+50):].reset_index()
df["Symbol"] = symbol
df["last_close"] = df["Close"].shift(+1)
df["Change"] = (df["Close"] - df["last_close"]) / df["last_close"]
if df.empty:
return None
# macd
df["shortEMA"] = df.Close.ewm(span=12, adjust=False).mean()
df["longEMA"] = df.Close.ewm(span=26, adjust=False).mean()
df["DIF"] = df["shortEMA"]-df["longEMA"]
df["DEA"] = df["DIF"].ewm(span=9, adjust=False).mean()
df["MACD"] = (df["DIF"]-df["DEA"])*2
# RSI
intl = 6
df[['upMove', 'downMove', 'avgUp', 'avgDown', 'RS', 'RSI']] = np.nan
## Calculate upMove & downMove
for x in range(1, len(df)):
df['upMove'][x] = 0
df['downMove'][x] = 0
if df['Close'][x] > df['Close'][x-1]:
df['upMove'][x] = df['Close'][x] - df['Close'][x-1]
if df['Close'][x] < df['Close'][x-1]:
df['downMove'][x] = abs(df['Close'][x] - df['Close'][x-1])
## Calculate initial avgUp & Down, RS and RSI
df['avgUp'][intl] = df['upMove'][1:intl+1].mean()
df['avgDown'][intl] = df['downMove'][1:intl+1].mean()
df['RS'][intl] = df['avgUp'][intl] / df['avgDown'][intl]
df['RSI'][intl] = 100 - (100/(1+df['RS'][intl]))
## Calculate rest of avgUp, avgDown, RS, RSI
for x in range(intl+1, len(df)):
df['avgUp'][x] = (df['avgUp'][x-1]*(intl-1)+df['upMove'][x])/intl
df['avgDown'][x] = (df['avgDown'][x-1]*(intl-1)+df['downMove'][x])/intl
df['RS'][x] = df['avgUp'][x] / df['avgDown'][x]
df['RSI'][x] = 100 - (100/(1+df['RS'][x]))
# combine all
df = df[["Date", "Symbol", "Open", "High", "Low", "Close", "Volume", "Change", "DIF", "DEA", "MACD", "RSI"]][-max_num:].dropna().reset_index(drop=True)
return(df)
def get_sp500_history_stock_price(sp500_list, max_num=1000):
df = pd.DataFrame()
for symbol in sp500_list:
print(symbol)
df_symbol = get_history_stock_price(symbol, max_num=max_num)
df = df.append([df_symbol])
df = df[["Date", "Symbol", "Open", "High", "Low", "Close", "Volume", "Change", "DIF", "DEA", "MACD", "RSI"]]
df.dropna(subset=["Symbol", "Close"], inplace=True)
return(df)
df_stock_price = get_sp500_history_stock_price(sp500_list)
print(df_stock_price.shape)
df_stock_price.to_csv("yfinance_price.csv", index=False)
df_stock_price.to_sql("Price", con=engine, index=False, if_exists="replace")
engine.execute("SHOW TABLES").fetchall()
mycursor = mydb.cursor()
mycursor.execute(
"""
SELECT *
FROM Price
LIMIT 5
"""
)
print([i[0] for i in mycursor.description])
result = mycursor.fetchall()
for x in result:
print(x)
###Output
['Date', 'Symbol', 'Open', 'High', 'Low', 'Close', 'Volume', 'Change', 'DIF', 'DEA', 'MACD', 'RSI']
(datetime.datetime(2017, 11, 24, 0, 0), 'MMM', 205.993438589389, 206.00231159219456, 204.91033619806876, 205.4163818359375, 659100.0, -0.0008635275430403725, 2.144359748716937, 2.304850072856183, -0.3209806482784918, 62.33685630737739)
(datetime.datetime(2017, 11, 27, 0, 0), 'MMM', 205.7448629232553, 208.21291240663504, 205.19443871441462, 207.7423858642578, 1780800.0, 0.011323361883464833, 2.295000693506722, 2.302880196986291, -0.015759006959137345, 73.14076380699487)
(datetime.datetime(2017, 11, 28, 0, 0), 'MMM', 207.84886965546596, 209.46464990915248, 207.1919153714291, 209.18943786621094, 1871700.0, 0.0069656078894687, 2.502304719426519, 2.3427651014743365, 0.3190792359043648, 77.8781667574436)
(datetime.datetime(2017, 11, 29, 0, 0), 'MMM', 209.56235707173073, 211.46222282232958, 209.19836074583006, 211.3024139404297, 1754600.0, 0.01010077801141243, 2.804762606401937, 2.4351646024598566, 0.7391960078841606, 83.10093754196812)
(datetime.datetime(2017, 11, 30, 0, 0), 'MMM', 212.05703433216695, 216.13197784937972, 211.0183240364011, 215.8567657470703, 4003400.0, 0.02155371404287216, 3.3730780453722105, 2.6227472910423275, 1.5006615086597659, 89.50793055211375)
###Markdown
Fundamentals
###Code
def get_stock_fundamentals(symbol):
stock = yf.Ticker(symbol)
return(stock.get_info())
def get_sp500_funamentals(sp500_list):
info = []
for symbol in sp500_list:
print(symbol)
data = get_stock_fundamentals(symbol)
if "symbol" in data:
info.append(data)
df = pd.DataFrame(info)
print("done!")
return(df)
df_fundamentals = get_sp500_funamentals(sp500_list=sp500_list)
kp_list = [
"symbol",
"quoteType",
"shortName",
"longName",
"exchange",
"exchangeTimezoneName",
"currency",
"sector",
"industry",
"fullTimeEmployees",
"longBusinessSummary",
"country",
"state",
"city",
"phone",
"website",
"address1",
"fiftyTwoWeekLow",
"fiftyTwoWeekHigh",
"pegRatio",
"marketCap",
"enterpriseValue",
"bookValue",
"averageVolume",
"floatShares",
"sharesOutstanding",
"enterpriseToRevenue",
"beta",
"trailingPE",
"trailingEps",
"priceToSalesTrailing12Months",
"forwardEps",
"forwardPE",
"priceToBook",
"heldPercentInstitutions",
"debtToEquity",
"returnOnEquity",
"returnOnAssets",
"shortRatio",
"sharesShort",
"52WeekChange",
"enterpriseToEbitda",
"totalCash",
"totalDebt",
"totalRevenue",
"totalCashPerShare",
"numberOfAnalystOpinions",
"currentPrice",
"targetHighPrice",
"targetMeanPrice",
"targetMedianPrice",
"targetLowPrice",
"recommendationKey",
"recommendationMean",
"revenuePerShare",
"grossProfits",
"freeCashflow",
"ebitda",
"operatingMargins",
"revenueGrowth",
"operatingCashflow",
"grossMargins",
"profitMargins",
"ebitdaMargins",
]
df_fundamentals.dropna(subset=["currentPrice"], inplace=True)
dt = datetime.now()   # timestamp used to name the CSV below (previously overwrote df_fundamentals by mistake)
df_fundamentals.to_csv("sp500_{}_{}_{}.csv".format(dt.year, dt.month, dt.day), index=False)
df_fundamentals.to_sql("Fundamentals", con=engine, index=False, if_exists="replace")
engine.execute("SHOW TABLES").fetchall()
mycursor = mydb.cursor()
mycursor.execute(
"""
SELECT sector, count(*) as cnt
FROM Fundamentals
GROUP BY sector;
"""
)
print([i[0] for i in mycursor.description])
result = mycursor.fetchall()
for x in result:
print(x)
###Output
['sector', 'cnt']
('Industrials', 73)
('Healthcare', 65)
('Technology', 71)
('Communication Services', 27)
('Consumer Cyclical', 66)
('Utilities', 28)
('Financial Services', 68)
('Basic Materials', 21)
('Real Estate', 29)
('Consumer Defensive', 34)
('Energy', 21)
|
all_repository/New2021/cartpole_dqn_wsl.ipynb | ###Markdown
Partial Codes
###Code
%matplotlib inline
# Basic packages
import numpy as np
import random
from collections import deque
import matplotlib.pyplot as plt
# Reinforcement-learning environment package
import gym
# AI packages: TensorFlow and Keras
# For compatibility, use the Keras bundled with TensorFlow
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
def create_q_model(num_states, num_actions):
inputs = Input(shape=(num_states,))
layer = Dense(32, activation="relu")(inputs)
layer = Dense(16, activation="relu")(layer)
action = Dense(num_actions, activation="linear")(layer)
return Model(inputs=inputs, outputs=action)
model = create_q_model(4,2)
model.summary()
class World_00:
def __init__(self):
self.get_env_model()
def get_env_model(self):
self.env = gym.make('CartPole-v1')
self.num_states = self.env.observation_space.shape[0]
self.num_actions = self.env.action_space.n
self.model = create_q_model(self.num_states, self.num_actions)
# print(self.model.summary())
def train(self):
states = np.zeros((10,self.num_states), dtype=np.float32)
with tf.GradientTape() as tape:
predicts = self.model(states)
new_world = World_00()
new_world.train()
print('Simple training is completed!')
def env_test_model_memory(memory, env, model, n_episodes=1000,
flag_render=False):
for e in range(n_episodes):
done = False
score = 0
s = env.reset()
while not done:
s_array = np.array(s).reshape((1,-1))
Qsa = model.predict(s_array)[0]
a = np.argmax(Qsa)
next_s, r, done, _ = env.step(a)
if flag_render:
env.render()
score += r
memory.append([s,a,r,next_s,done])
print(f'Episode: {e:5d} --> Score: {score:3.1f}')
print('Notice that the max score is set to 500.0 in CartPole-v1')
def list_rotate(l):
return list(zip(*l))
class World_01(World_00):
def __init__(self):
World_00.__init__(self)
self.memory = deque(maxlen=2000)
self.N_batch = 64
self.t_model = create_q_model(self.num_states, self.num_actions)
self.discount_factor = 0.99
self.learning_rate = 0.001
self.optimizer = Adam(lr=self.learning_rate)
def trial(self, flag_render=False):
env_test_model_memory(self.memory, self.env,
self.model, n_episodes=10, flag_render=flag_render)
print(len(self.memory))
def train_memory(self):
if len(self.memory) >= self.N_batch:
memory_batch = random.sample(self.memory, self.N_batch)
s_l,a_l,r_l,next_s_l,done_l = [np.array(x) for x in list_rotate(memory_batch)]
model_w = self.model.trainable_variables
with tf.GradientTape() as tape:
Qsa_pred_l = self.model(s_l.astype(np.float32))
a_l_onehot = tf.one_hot(a_l, self.num_actions)
Qs_a_pred_l = tf.reduce_sum(a_l_onehot * Qsa_pred_l, axis=1)
Qsa_tpred_l = self.t_model(next_s_l.astype(np.float32))
Qsa_tpred_l = tf.stop_gradient(Qsa_tpred_l)
max_Q_next_s_a_l = np.amax(Qsa_tpred_l, axis=-1)
Qs_a_l = r_l + (1 - done_l) * self.discount_factor * max_Q_next_s_a_l
loss = tf.reduce_mean(tf.square(Qs_a_l - Qs_a_pred_l))
grads = tape.gradient(loss, model_w)
self.optimizer.apply_gradients(zip(grads, model_w))
new_world = World_01()
new_world.trial()
new_world.train_memory()
new_world.env.close()
print('Completed!')
class World_02(World_01):
def __init__(self):
World_01.__init__(self)
self.epsilon = 0.2
def update_t_model(self):
self.t_model.set_weights(self.model.get_weights())
def best_action(self, s):
if random.random() <= self.epsilon:
return random.randrange(self.num_actions)
else:
s_array = np.array(s).reshape((1,-1))
Qsa = self.model.predict(s_array)[0]
return np.argmax(Qsa)
def trials(self, n_episodes=100, flag_render=False):
memory = self.memory
env = self.env
model = self.model
score_l = []
for e in range(n_episodes):
done = False
score = 0
s = env.reset()
while not done:
a = self.best_action(s)
next_s, r, done, _ = env.step(a)
if flag_render:
env.render()
score += r
memory.append([s,a,r,next_s,done])
# self.train_memory()
s = next_s
self.train_memory()
self.update_t_model()
print(f'Episode: {e:5d} --> Score: {score:3.1f}')
score_l.append(score)
return score_l
new_world = World_02()
score_l = new_world.trials(n_episodes=50)
new_world.env.close()
np.save('score_l.npy', score_l)
###Output
Episode: 0 --> Score: 55.0
Episode: 1 --> Score: 39.0
Episode: 2 --> Score: 57.0
Episode: 3 --> Score: 33.0
Episode: 4 --> Score: 12.0
Episode: 5 --> Score: 8.0
Episode: 6 --> Score: 11.0
Episode: 7 --> Score: 11.0
Episode: 8 --> Score: 15.0
Episode: 9 --> Score: 13.0
Episode: 10 --> Score: 8.0
Episode: 11 --> Score: 8.0
Episode: 12 --> Score: 11.0
Episode: 13 --> Score: 10.0
Episode: 14 --> Score: 12.0
Episode: 15 --> Score: 16.0
Episode: 16 --> Score: 13.0
Episode: 17 --> Score: 30.0
Episode: 18 --> Score: 12.0
Episode: 19 --> Score: 10.0
Episode: 20 --> Score: 12.0
Episode: 21 --> Score: 10.0
Episode: 22 --> Score: 12.0
Episode: 23 --> Score: 10.0
Episode: 24 --> Score: 17.0
Episode: 25 --> Score: 10.0
Episode: 26 --> Score: 18.0
Episode: 27 --> Score: 14.0
Episode: 28 --> Score: 15.0
Episode: 29 --> Score: 24.0
Episode: 30 --> Score: 23.0
Episode: 31 --> Score: 22.0
Episode: 32 --> Score: 27.0
Episode: 33 --> Score: 24.0
Episode: 34 --> Score: 76.0
Episode: 35 --> Score: 35.0
Episode: 36 --> Score: 75.0
Episode: 37 --> Score: 61.0
Episode: 38 --> Score: 39.0
Episode: 39 --> Score: 46.0
Episode: 40 --> Score: 38.0
Episode: 41 --> Score: 51.0
Episode: 42 --> Score: 40.0
Episode: 43 --> Score: 34.0
Episode: 44 --> Score: 53.0
Episode: 45 --> Score: 73.0
Episode: 46 --> Score: 50.0
Episode: 47 --> Score: 57.0
Episode: 48 --> Score: 56.0
Episode: 49 --> Score: 57.0
|
Experiment Scripts/Data_Visualisation_Code/.ipynb_checkpoints/Covid-19_Data_Visualisation-Bulk_Data-checkpoint.ipynb | ###Markdown
Covid-19 Data Visualisation
###Code
## Importing important libraries
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=12)
matplotlib.rc('ytick', labelsize=30)
matplotlib.rcParams.update({'font.size': 28})
import math
import datetime as dt
import os
import sys
###Output
_____no_output_____
###Markdown
Utility Function
###Code
## Visualization function
def Visualize(dataset,List_of_count_to_print,title1,ylab,vx=50,vy=30,w=.80):
df = dataset
n = 0
for i in List_of_count_to_print:
filter1 = df['Country'] == i
df = df[filter1]
labels = df['Date']
conf = df['Confirmed']
Recov = df['Recovered']
Death = df['Deaths']
#high = max(conf)
#low = min(conf)
x = np.arange(len(labels)) # the x label locations
width = w # the width of the bars
fig, ax = plt.subplots(figsize=(vx,vy))
rects1 = ax.bar(x - width, conf, width, label='confirmed')
rects2 = ax.bar(x , Recov, width, label='Recovered')
rects3 = ax.bar(x + width , Death, width, label='Death')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel(ylab)
ax.set_title(title1)
ax.set_xticks(x)
plt.xticks(rotation=90)
#plt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))])
ax.set_xticklabels(labels)
ax.legend()
n = n + 1
plt.show()
## function to check the list of countries available
def count_avalaible(dataframe,country_coul_rep = 'Country'):
x = 0
for i in set(dataframe.loc[:,country_coul_rep]):
print(i,end=' | ')
x = x + 1
if(x > 6):
x = 0
print()
print("\n\n##Total No of Countries = " + str(len(set(dataframe.loc[:,country_coul_rep]))))
###Output
_____no_output_____
###Markdown
Loading Covid Data
###Code
Covid_19_Countires_Wise_Bulk = pd.read_csv('../../Covid-19-Data(selected-dataset)/time-series-19-covid-combined.csv')
Covid_19_Countires_Wise_Bulk
Covid_19_Countires_Wise_Bulk = Covid_19_Countires_Wise_Bulk.rename(columns = {'Country/Region': 'Country'}, inplace = False)
## Dropping unnecessary data
Covid_19_Countires_Wise_Bulk = Covid_19_Countires_Wise_Bulk.drop(['Province/State', 'Lat','Long'], axis=1)
Covid_19_Countires_Wise_Bulk = Covid_19_Countires_Wise_Bulk.fillna(0)
Covid_19_Countires_Wise_Bulk
###Output
_____no_output_____
###Markdown
Covid_19_Countires_Wise Analysis
###Code
## Check the list of countries available
count_avalaible(Covid_19_Countires_Wise_Bulk,'Country')
#print(set(Covid_19_Countires_Wise['Country']))
filter1 = Covid_19_Countires_Wise_Bulk['Country'] == 'Japan'
Covid_19_Countires_Wise_country_specific = Covid_19_Countires_Wise_Bulk[filter1]
Covid_19_Countires_Wise_country_specific
#Covid_19_Countires_Wise_Bulk ## Uncomment this to view for all countires at once
Visualize(Covid_19_Countires_Wise_Bulk , ['Japan'],'Spread of Covid 19','No of Patients in (millions)')
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
data = np.array(Covid_19_Countires_Wise_Bulk)
date_range = data[:,0]
Country_name = data[:,1]
time_series_data = data[:,[2,3,4]]
print(time_series_data)
Xs = time_series_data
n = len(set(list(data[:,0])))
print(n)
###Output
[[0 0.0 0]
[0 0.0 0]
[0 0.0 0]
...
[8021 7627.0 230]
[8036 7632.0 230]
[8055 7640.0 231]]
267
###Markdown
Creating Custom Time Series Model
###Code
## No of days data collected so far
n = len(set(list(data[:,0])))
def time_series_forecaste(data,pred):
fix_latest_data = np.amax(data,axis=0)[0]
w1 = []
fix_latest_data = np.datetime64(fix_latest_data) + np.timedelta64(1,'D')
predict_frame = np.zeros(shape=(1,5))
filter1 = []
#latest_data = type(dt.datetime(latest_data))
countries = set(list(data[:,1]))
p = 0
x = 0
for i in countries:
latest_data = fix_latest_data
filter1.clear()
p = p + 1
completed = (p/len(countries))*100
if(p == n/4 or p == n/2 or p == (n*3)/4 or p == n):
print(f'% completed = {completed}', flush=True)
count = pred
for j in range(0,len(data[:,0])):
if(data[j,1] == i):
filter1.append(True)
else:
filter1.append(False)
data2 = data[filter1]
w1.clear()
for j in range(0,len(data2[:,0])):
if(data2[j,1] == i):
x = x + 1
for k in range(len(data2[:,[2]])):
w1.append((len(data2[:,[2]])- k)/len(data2[:,[2]]))
count = count - 1
val1 = ((sum(data2[:,2]*w1))/len(data2[:,[2]]))
val2 = ((sum(data2[:,3]*w1))/len(data2[:,[3]]))
val3 = ((sum(data2[:,4]*w1))/len(data2[:,[4]]))
predict_frame = np.append(predict_frame,[str(latest_data),str(i),str(int(val1)),str(int(val2)),str(int(val3))])
predict_frame = predict_frame.reshape((int(len(predict_frame)/5)),5)
latest_data = np.datetime64(latest_data) + np.timedelta64(1,'D')
data2 = np.append(data2,[str(latest_data),str(i),int(val1),int(val2),int(val3)])
data2 = data2.reshape((int(len(data2)/5)),5)
data2[:,[2,3,4]] = data2[:,[2,3,4]].astype(np.int)
w1.clear()
if(count < 0):
break
new_val = pd.DataFrame(predict_frame[1:,[0,1,2,3,4]])
new_val = new_val.rename(columns = {0:'Date',1:'Country',2:'Confirmed',3:'Recovered',4:'Deaths'}, inplace = False)
return new_val
No_of_days_to_predict_in_future = 10
val123 = time_series_forecaste(data,No_of_days_to_predict_in_future)
val123
val = np.array(val123)
val[:,[2]].astype(np.int)
w1 = []
for i in range(len(val[:,[2]])):
w1.append((len(val[:,[2]])-i)/len(val[:,[2]]))
print(val[:,[2]].astype(np.int))
print(val[:,[2]].astype(np.int)*w1)
#val[:,[2]].astype(np.int)*(val[:,[2]].astype(np.int)/63794)
Complete_Data = val123
filter1 = val123['Country'] == 'US'
val123 = val123[filter1]
val123
#print(Covid_19_Countires_Wise)
Visualize(val123,
['US'],'Spread of Covid 19','No of Patients in (millions)',vx=50,vy=25,w=.3)
# Normalizing the Covid Population data
Population_Data_Countires_Wise_Processed = pd.read_csv('../Pre_Processed_Data/Population_Data_Countires_Wise_Processed.csv')
Population_Data_Countires_Wise_Processed
###Output
_____no_output_____
###Markdown
Normalising Covid data based on population using Percentage
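A possible sketch of the intended normalisation is shown below (added for illustration; it assumes the processed population table exposes `Country` and `Population` columns, which may differ in the actual file):
```python
# hypothetical column names -- adjust to match Population_Data_Countires_Wise_Processed
pop_cols = Population_Data_Countires_Wise_Processed[['Country', 'Population']]
covid_with_pop = Covid_19_Countires_Wise_Bulk.merge(pop_cols, on='Country', how='left')

# express the counts as a percentage of each country's population
for col in ['Confirmed', 'Recovered', 'Deaths']:
    covid_with_pop[col + '_pct'] = covid_with_pop[col] / covid_with_pop['Population'] * 100

covid_with_pop.head()
```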
###Code
Covid_19_Countires_Wise_Bulk
###Output
_____no_output_____ |
10_pipeline/kubeflow/00_02_Create_S3_Bucket.ipynb | ###Markdown
1. Create S3 Bucket (If Not Already Created)
###Code
%%bash
export S3_BUCKET=sagemaker-${AWS_REGION}-$(aws sts get-caller-identity --query 'Account' --output text)
echo "export S3_BUCKET=${S3_BUCKET}" | tee -a ~/.bash_profile
# Create a new S3 bucket and upload the dataset.
aws s3 ls s3://$S3_BUCKET || aws s3 mb s3://${S3_BUCKET}
echo "Completed"
###Output
_____no_output_____
###Markdown
2. Verify S3_BUCKET Env Variable
###Code
%%bash
. ~/.bash_profile
echo "${S3_BUCKET}"
###Output
_____no_output_____
###Markdown
3. Verify S3_BUCKET Bucket Creation
###Code
%%bash
. ~/.bash_profile
aws s3 ls s3://${S3_BUCKET}
###Output
_____no_output_____
###Markdown
Release Resources
###Code
%%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script>
%%javascript
try {
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
}
catch(err) {
// NoOp
}
###Output
_____no_output_____ |
notebooks/Step_2_ETL_loading_data_from_SQL_and_text_data_processing.ipynb | ###Markdown
**Notes:**
1. Loading data from PostgreSQL DB file
2. Creating custom functions to tidy up text data for analysis

1. Importing packages and Data Loadings
###Code
# Importing necessary packages
import re
import os
import random, string
import pandas as pd
from string import punctuation
from google.colab import data_table
from sqlalchemy import create_engine
from sqlalchemy.sql import text
from sqlalchemy import inspect
data_table.enable_dataframe_formatter()
###Output
_____no_output_____
###Markdown
1.1 SQL DBs
###Code
# Install postgresql server
!sudo apt-get -y -qq update
!sudo apt-get -y -qq install postgresql
!sudo service postgresql start
# Setup a password `postgres` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"
# Setup a database with name `tfio_demo` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS uohyd_pgdaiml_project_db;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE uohyd_pgdaiml_project_db;'
%env TFIO_DEMO_DATABASE_NAME=uohyd_pgdaiml_project_db
%env TFIO_DEMO_DATABASE_HOST=localhost
%env TFIO_DEMO_DATABASE_PORT=5432
%env TFIO_DEMO_DATABASE_USER=postgres
%env TFIO_DEMO_DATABASE_PASS=postgres
endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format(
os.environ['TFIO_DEMO_DATABASE_USER'],
os.environ['TFIO_DEMO_DATABASE_PASS'],
os.environ['TFIO_DEMO_DATABASE_HOST'],
os.environ['TFIO_DEMO_DATABASE_PORT'],
os.environ['TFIO_DEMO_DATABASE_NAME'],
)
# PSQL engine and connection creations
my_psql_engine = create_engine(endpoint)
my_psql_conexion = my_psql_engine.connect()
!cp /content/drive/MyDrive/PG_Diploma_AI_ML_2021_UOHYD/PGDAIML_Project_Spam_Clustering/SQL_datos/los_mensajes_class_7726_db .
# SQL DB Importing
!PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME < los_mensajes_class_7726_db
inspector = inspect(my_psql_engine)
schemas = inspector.get_schema_names()
db_dict = dict({'relation_name':[],'no_of_records':[]})
for r in my_psql_conexion.execute(text('SELECT schemaname,relname,n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC')):
db_dict['no_of_records'].append(r[2])
db_dict['relation_name'].append(r[1])
db_tables_df = pd.DataFrame(db_dict)
pd_df_names = []
for _ in range(6):
pd_df_names.append(f'df_{_}')
uohyd_sql_dataframes = [ ]
for df,table in zip(pd_df_names,db_dict['relation_name']):
sql_query = text("SELECT *FROM {}".format(table))
df = pd.read_sql(sql_query,my_psql_conexion)
uohyd_sql_dataframes.append(df)
messages_SC = pd.concat(uohyd_sql_dataframes)
###Output
_____no_output_____
###Markdown
2. Custom Functions for tidying text data
###Code
def my_custom_url_check(texto):
"""Check if a message contains any URL or not"""
txt_tidy = re.sub(r'\n|\r'," ",str(texto))
rgx_url = re.compile('[a-zA-Z0-9]+([\-\.]{1}[a-zA-Z0-9]+)*\.[a-zA-Z]{2,5}(:[0-9]{1,5})?(\/.*)?$')
return bool(re.search(rgx_url,str(txt_tidy)))
def my_custom_url_extractor(texto):
"""Extract a URL from a message"""
url_ext_reg = re.compile('((http(s)?\:\/\/)?[a-zA-Z0-9\@]+([\-\.]{1}[a-zA-Z0-9\@]+)*\.[a-zA-Z]{2,5}(:[0-9]{1,2})?((\/)[\w\d\!|\"|\#|\$|\%|\&|\'|\(|\)|\*|\+|\,|\-|\/|\:|\;|\<|\=|\>|\?|\[|\\|\]|\^|\_|\`|\{|\||\}|\~|\.]+)?)')
urls = [ url[0] for url in url_ext_reg.findall(texto) ]
return urls
def my_url_replacer(rgx,texto):
"""Replace an extracted URL position with a UNIQUE Keyword"""
if len(rgx)>=2:
patrn = '|'.join(map(re.escape, rgx))
#return patrn
return re.sub(patrn," MYURLEXTRACTED ",texto)
elif len(rgx)==1:
return re.sub(re.escape(rgx[0])," MYURLEXTRACTED ",texto)
else:
pass
def my_string_tidy(texto):
"""Clean up all the punctuations from a message"""
rgx_ltrs = re.compile("[\!|\"|\#|\$|\%|\&|\'|\(|\)|\*|\+|\,|\-|\/|\:|\;|\<|\=|\>|\?|\[|\\|\]|\^|\_|\`|\{|\||\}|\~|\.]+|FRM\:[\w\W]+(SUBJ\:)?MSG\:|\@|http(s)?|HTTP(S)?|\n|\r")
return re.sub(rgx_ltrs,"",str(texto))
def my_url_tidy(url):
"""Clean up an extracted URL Domain and collect a Main URL domain"""
rgx_pat = r'http(s)?\:\/\/|(?<=[\w])\/([\w\W]+)?'
# added \: and \?
rgx_url_short=r'[\w|\-]+\.[\w\:\?]+\Z'
if len(url)>=2:
url_more_two = []
for item in url:
url_more_two.extend(re.findall(rgx_url_short,re.sub(rgx_pat,"",str(item))))
return [re.sub('\.','dot',u) for u in url_more_two]
elif len(url)==1:
url_list = re.findall(rgx_url_short,re.sub(rgx_pat,"",str(url[0])))
return [ re.sub('\.','dot',item) for item in url_list ]
else:
pass
def my_tidy_URL_replacer(text,replacer):
"""Replace an URL Unique Keyword with a tidy URL domain"""
if len(replacer)>=2:
rep_url = ' '.join(replacer)
return re.sub('MYURLEXTRACTED',rep_url,text)
elif len(replacer)==1:
return re.sub('MYURLEXTRACTED',replacer[0],str(text))
else:
pass
def my_custom_cta_email_check(texto):
"""Check if a message contains any EMAIL Call to Actions"""
cta_rgx = r'(\b[A-Za-z-0-9\_\%\+\-.]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\s)'
return bool(re.search(cta_rgx,str(texto)))
def my_custom_cta_email_extract(texto):
"""Extract an EMAIL Call to Action from a message"""
return re.sub(r'(\b[A-Za-z-1-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b)'," MYCTAEMAILEXTRACTED ",str(texto))
def my_custom_cta_phone(texto):
"""Extract a PHONE NUMBER from message"""
return re.sub(r'(1\s?)?((\([0-9]{3}\))|[0-9]{3})[\s\-]?[\0-9]{3}[\s\-]?[0-9]{4}(\:|\,|\.)?\s'," MYCTAPHONEEXTRACTED ",str(texto))
def my_numerical_cleaner(texto):
"""Replace any digits with a unique keyword"""
return re.sub(r'([\d]+(\s)?(\,)?(\-)?)?[\d]+'," DIGITEXTRACTED ",str(texto))
def my_non_ascii(txt):
"""Check if a message contains any Non-ASCII words and replace them with a Unique Keywords"""
return re.sub(r'[^\x00-\x7F]+'," NONASCII ",str(txt))
my_stop_words_df = pd.read_excel('/content/drive/MyDrive/PG_Diploma_AI_ML_2021_UOHYD/PGDAIML_Project_Spam_Clustering/datos/my_smart_stop_words.xlsx')
my_stop_words_df_list = list(my_stop_words_df.stop_word)
def my_custom_stop_word_removal(txt):
stop_word_rgx = re.compile(r'\b(' + r'|'.join(my_stop_words_df_list) + r')\b\s*')
return stop_word_rgx.sub('', txt)
def my_custom_word_case_lower_upper(txt,caso="lower"):
if caso =="lower":
return txt.lower()
else:
return txt.upper()
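# --- illustrative sanity check (added; not part of the original pipeline) ---
# the message below is made up purely to exercise the URL helpers defined above
_sample = "FRM:promo MSG: Claim your reward at https://promo.example-site.com/claim"
print(my_custom_url_check(_sample))        # expected: True (the message ends with a URL)
_urls = my_custom_url_extractor(_sample)
print(_urls)                               # expected: the full URL string in a list
print(my_url_replacer(_urls, _sample))     # expected: URL swapped for the MYURLEXTRACTED token
print(my_url_tidy(_urls))                  # expected: main domain with '.' replaced by 'dot'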
###Output
_____no_output_____
###Markdown
3. Text processing and feature creation using the above custom functions
###Code
# Check a message contains any URL or not
messages_SC['has_URL'] = messages_SC['message_preview'].apply(
lambda txt : my_custom_url_check(txt)
)
# Check a message contains any EMAIL Call to Action or Not
messages_SC['has_EMAIL_CTA'] = messages_SC['message_preview'].apply(
lambda txt : my_custom_cta_email_check(txt)
)
# Create a new field URL DOMAIN to have an extracted URL from a message
messages_SC['URL_Domain'] = messages_SC.apply(
lambda cols : my_custom_url_extractor(cols['message_preview']) if ( (cols['has_URL'] is True) and (cols['has_EMAIL_CTA'] is False)) else None,axis=1
)
# Removing unnecessary words of PH00 from messages
messages_SC['message_preview'] = messages_SC.message_preview.str.replace("\[\#\@\#PH00?[0-9]{1,2}\#\@\#\]","")
# Replacing a URL with a UNIQUE KEYWORD: MYURLEXTRACTED
messages_SC['message_preview'] = messages_SC.apply(
lambda cols: my_url_replacer(cols['URL_Domain'],cols['message_preview']) if ( (cols['has_URL'] is True) and (cols['has_EMAIL_CTA'] is False)) else cols['message_preview'],axis=1
)
# Find any CALL TO ACTION PHONE NUMBERS and Replace it with UNIQUE KEYWORD
messages_SC['message_preview'] = messages_SC['message_preview'].apply(lambda x: my_custom_cta_phone(x))
# Remove all punctuations
messages_SC['message_preview'] = messages_SC['message_preview'].apply(lambda x: my_string_tidy(x))
# Remove not useful DIGITS
messages_SC['message_preview'] = messages_SC['message_preview'].apply(lambda x: my_numerical_cleaner(x))
# Remove NON ASCII Words
messages_SC['message_preview'] = messages_SC['message_preview'].apply(lambda txt: my_non_ascii(txt))
# Extract Main URL domain from an extracted URL part
messages_SC['url_tidy'] = messages_SC.apply(
lambda cols: my_url_tidy(cols['URL_Domain']) if ( (cols['has_URL'] is True) and (cols['has_EMAIL_CTA'] is False)) else None,axis=1
)
# Replace a URL keyword with an string based URL format like googledotcom or fbdotcom
messages_SC['message_preview'] = messages_SC.apply(
lambda cols : my_tidy_URL_replacer(cols['message_preview'],cols['url_tidy']) if ( (cols['has_URL'] is True) and (cols['has_EMAIL_CTA'] is False)) else cols['message_preview'],axis=1
)
# converting text to lower case letters
messages_SC['message_preview'] = messages_SC['message_preview'].apply(
lambda msg : my_custom_word_case_lower_upper(msg)
)
# Stop word removals
messages_SC['message_preview'] = messages_SC['message_preview'].apply(
lambda msg : my_custom_stop_word_removal(msg)
)
# random records checks on messages dataframe
messages_SC.iloc[4500:5000,:]
# Save a tidy format messages dataframe into CSV
messages_SC.to_csv('messages_classified_tidy_version_1.csv')
###Output
_____no_output_____ |
08. 미적분과 최적화/03. 최적화 기초.ipynb | ###Markdown
최적화 기초 데이터 분석 문제 특히 예측 문제의 최종 목표는 실제 자료와 가장 유사한 값을 출력하는 예측기를 설계하는 것이다. 이 과정에서 예측 오차를 최소화하는 예측기 계수를 찾는 최적화(optimization) 문제를 풀어야 한다. 최적화 문제 최적화 문제는 특정한 제한 조건(constraint)을 만족시키면서 함수 $f$의 값을 최소화하는 변수 $x$의 값 $x^{\ast}$를 찾는 것이다. 최대화 문제는 $f(x)$ 를 $-f(x)$ 로 바꾸면 풀 수 있다.$$ x^{\ast} = \underset{x}{\operatorname{arg\,min}} \; f(x) $$$$ \text{subject to } g(x) \geq 0, \;\;\; h(x) = 0 $$이 때 제한 조건이 없으면 uncontrained optimzation, 제한 조건이 있으면 constrained optimization 이라고 한다. 또 최소화 혹은 최대화 하려는 함수를 목표 함수(objective function)이라고 한다. 기울기 필요 조건 어떤 종속 변수 값 $x^{\ast}$ 가 최소점이 되기 위해서는 일단 다음과 같이 값 $x^{\ast}$에서 함수의 기울기(slope) $\dfrac{df}{dx}$ 가 0이라는 조건을 만족해야 한다.* 단일 변수에 대한 함수인 경우, 미분값이 0$$ \dfrac{df(x)}{dx} = 0 $$* 다변수 함수인 경우 모든 변수에 대한 편미분값이 0$$ \dfrac{\partial f(x_1, x_2, \cdots , x_N)}{\partial x_1} = 0 $$$$ \dfrac{\partial f(x_1, x_2, \cdots , x_N)}{\partial x_2} = 0 $$$$ \vdots $$$$ \dfrac{\partial f(x_1, x_2, \cdots , x_N)}{\partial x_N} = 0 $$ 수치적 최적화 반복적 시행 착오(trial and error)에 의해 최적화 필요조건을 만족하는 값 $x^{\ast}$를 찾는 방법을 수치적 최적화(numerical optimization)이라고 한다. 일반적인 수치적 최적화 알고리즘들은 다음과 같은 것들이 있다.* Steepest Gradient Descent* Conjuated Gradient * Quasi-Newton (BFGS: Broyden-Fletcher-Goldfarb-Shanno)이 방법들은 모두 임의의 해 $x_k$를 가정하고 이 위치에서의 1차 도함수(gradient) 값 $g(x_k)$ 및 2차 도함수(Hessian) 값 $H(x_k)$ 를 사용하여 다음 위치를 $x_{k+1}$ 를 추정하는 과정을 반복하여 최소점을 찾아낸다. Steepest Gradient Descent 방법 Steepest Gradient Descent 방법은 다음과 같이 단순히 현재 위치에서의 기울기 값 $g(x_k)$ 만을 이용한다. $$ x_{k+1} = x_{k} - \alpha_k g(x_k) $$이 방법은 곡면의 모양이 좋지 않을 경우에는 수렴하는데 시간이 오래 걸린다. CG & BFGS CG(conjujated grdient) 방법이나 BFGS 방법은 모두 최적화하려는 영역을 2차 함수로 가정하고 2차 도함수인 헤시안 행렬 정보를 이용하여 더 빠르고 안정적으로 수렴하도록 한다. SciPy의 optimize 서브 패키지는 최적화 명령 `minimize` 를 제공한다. 세부적인 알고리즘은 `method` 인수로 선택할 수 있다. 디폴트 알고리즘은 BFGS 방법이다. 전역 최적화 문제 만약 최적화 하려는 함수가 복수의 국소 최저점(local minima)을 가지고 있는 경우에는 수치적 최적화 방법으로 전역 최저점(global minimum)에 도달한다는 보장이 없다. 결과는 초기 추정값 및 알고리즘, 파라미터 등에 의존한다.
###Code
# assumed imports -- the original notebook relied on an environment where these were preloaded
import numpy as np
import scipy as sp
import scipy.optimize
import matplotlib.pyplot as plt

def f(x):
return x**2 + 10*np.sin(x)
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x));
result = sp.optimize.minimize(f, 4)
print(result)
x0 = result['x']
x0
plt.plot(x, f(x));
# plt.hold(True)  # no longer needed (removed in newer matplotlib); successive plot calls overlay by default
plt.scatter(x0, f(x0), s=200);
###Output
_____no_output_____
###Markdown
Optimization with equality constraints

Consider a minimization problem with an equality constraint $g(x) = 0$:
$$ x^{\ast} = \text{arg} \min_x f(x) ,\,\,\,\ \text{subject to } \;\; g(x)=0$$
An optimization problem with an equality constraint like this can be optimized using the **method of Lagrange multipliers**.
In the Lagrange multiplier method we optimize $h = f + \lambda g$ instead of $f$. Since $h$ becomes a function $h(x_1, x_2, \cdots , x_N, \lambda)$ with the additional independent variable $\lambda$, the following conditions must be satisfied.
$$ \dfrac{\partial (f + \lambda g)}{\partial x_1} = \dfrac{\partial f}{\partial x_1} + \lambda\dfrac{\partial g}{\partial x_1} = 0 $$
$$ \dfrac{\partial (f + \lambda g)}{\partial x_2} = \dfrac{\partial f}{\partial x_2} + \lambda\dfrac{\partial g}{\partial x_2}= 0 $$
$$ \vdots $$
$$ \dfrac{\partial (f + \lambda g)}{\partial x_N} = \dfrac{\partial f}{\partial x_N} + \lambda\dfrac{\partial g}{\partial x_N}= 0 $$
$$ \dfrac{\partial (f + \lambda g)}{\partial \lambda} = g = 0 $$
Solving the $N+1$ simultaneous equations obtained above gives the $N+1$ unknowns $x_1, x_2, \cdots, x_N, \lambda$. Here $x_1, x_2, \cdots, x_N$ indicate the location of the minimum that satisfies the constraint.

For example, let us solve the problem of optimizing the following function $f$ with the Lagrange multiplier method.
$$ f(x_1, x_2) = \log{x_1} + \log{x_2} $$
What are $x_1$ and $x_2$? They must satisfy $ x_1 + x_2 = 1 $.
$$ h = f + \lambda g = \log{x_1} + \log{x_2} + \lambda ( x_1 + x_2 - 1 ) $$
$$ \dfrac{\partial (f + \lambda g)}{\partial x_1} = \dfrac{1}{x_1} + \lambda = 0$$
$$ \dfrac{\partial (f + \lambda g)}{\partial x_2} = \dfrac{1}{x_2} + \lambda = 0 $$
$$ \dfrac{\partial (f + \lambda g)}{\partial \lambda } = x_1 + x_2 - 1 = 0 $$
$$ x_1 = x_2 = \dfrac{1}{2}, \;\;\; \lambda = -2 $$

Optimization with inequality constraints

Consider a minimization problem with the inequality constraint $ g(x) \geq 0 $:
$$ x^{\ast} = \text{arg} \min_x f(x) ,\,\,\,\ \text{subject to } \;\; g(x) \geq 0$$
An optimization problem with inequality constraints like this is optimized with the KKT (Karush-Kuhn-Tucker) method, which modifies the conditions at the location of the minimum. SciPy's optimize subpackage provides the `fmin_slsqp` command for solving constrained optimization problems.
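As an added illustrative cross-check (not in the original notebook), the equality-constrained example above can also be verified numerically with `fmin_slsqp` by minimizing $-f$ subject to $x_1 + x_2 = 1$; the starting point and bounds below are arbitrary:
```python
import numpy as np
from scipy.optimize import fmin_slsqp

def neg_f(x):
    # maximize log(x1) + log(x2)  <=>  minimize its negative
    return -(np.log(x[0]) + np.log(x[1]))

def eq_constraint(x):
    # equality constraint g(x) = x1 + x2 - 1 = 0
    return np.array([x[0] + x[1] - 1.0])

x_opt = fmin_slsqp(neg_f, np.array([0.3, 0.7]), f_eqcons=eq_constraint,
                   bounds=[(1e-6, 1.0), (1e-6, 1.0)])
print(x_opt)   # expected to be close to [0.5, 0.5], matching the Lagrange-multiplier result
```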
###Code
def f(x):
return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)
def constraint(x):
return np.atleast_1d(1.5 - np.sum(np.abs(x)))
sp.optimize.fmin_slsqp(f, np.array([0, 0]), ieqcons=[constraint, ])
###Output
Optimization terminated successfully. (Exit mode 0)
Current function value: 2.47487373504
Iterations: 5
Function evaluations: 20
Gradient evaluations: 5
|
notebooks/nb064_pandas_dataframes_2.ipynb | ###Markdown
<img src="img/python-logo-notext.svg" style="display:block;margin:auto;width:10%"/>Python: Pandas Data Frames 2Coding Akademie München GmbHDr. Matthias HölzlAllaithy Raed
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Missing Values
###Code
def create_data_frame_with_nans():
return pd.DataFrame({'A': [1, 2, np.nan, np.nan, 0],
'B': [5, 6, 7, np.nan, 0],
'C': [9, 10, 11, 12, 0],
'D': [13, 14, 15, 16, 0],
'E': [np.nan, 18, 19, 20, 0]})
df = create_data_frame_with_nans()
df
df.isna()
df.count()
df
df.count(axis=1)
df.isna().sum()
df.isna().sum(axis=1)
df
df.dropna()
df
df.dropna(axis=1, inplace=True)
df
df = create_data_frame_with_nans()
df
df.fillna(value=1000)
df.fillna(value=df.mean())
df.mean()
# df.fillna(value=df.mean(), axis=1)
###Output
_____no_output_____
###Markdown
Grouping
###Code
def create_course_df():
data = {'Course':['Python','Python','Python','Python','Java','Java','Java','C++','C++'],
'Person':['Jack', 'Jill', 'John', 'Bill', 'Jack', 'Bill', 'Davy', 'Jack', 'Diane'],
'Score':[97, 92, 38, 73, 81, 52, 62, 86, 98]}
return pd.DataFrame(data)
df = create_course_df()
df
df.groupby('Course')
df_by_course = df.groupby('Course')
df_by_course.count()
df_by_course['Person'].count()
df_by_course.mean()
df_by_course.std()
df_by_course.aggregate(['mean', 'std'])
df_by_course.aggregate(['min', 'max'])
df_by_course['Score'].aggregate(['min', 'max', 'mean', 'std'])
df.groupby('Person').mean()
df_by_course.describe()
###Output
_____no_output_____
###Markdown
Operations Again
###Code
df = create_course_df()
df
df.columns
df.index
df.sort_values(by='Course')
df['Course'].values
df['Person'].values
df['Course'].unique()
df['Person'].unique()
df['Person'].nunique()
df['Person'].value_counts()
###Output
_____no_output_____
###Markdown
Selection
###Code
df[df['Score'] > 80]
df[(df['Score'] > 60) & (df['Score'] <= 80)]
###Output
_____no_output_____
###Markdown
Transforming Data
###Code
df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], columns = ['A', 'B'])
df
df.apply(np.square)
df.apply(np.sum)
df.apply(np.sum, axis=1)
df.apply(lambda n: [np.sum(n), np.mean(n)], axis=1)
df.apply(lambda n: [np.sum(n), np.mean(n)], axis=1, result_type='expand')
###Output
_____no_output_____
###Markdown
Element-wise application of a function:
###Code
df.applymap(lambda x: f"Value: {x}")
###Output
_____no_output_____ |
Final_code/CNN/.ipynb_checkpoints/Music_auto_tagging-checkpoint.ipynb | ###Markdown
for GPU only
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
###Output
cuda:0
###Markdown
CHOICE 1 : Network without dropout or batch norm
- comment line `net.to(device)` to run on CPU
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1,128,3)
self.conv2 = nn.Conv2d(128,384,3)
self.conv3 = nn.Conv2d(384,768,3)
self.conv4 = nn.Conv2d(768,2048,3)
self.conv5 = nn.Conv2d(2048,50,1) # network with 50 outputs
# self.conv5 = nn.Conv2d(2048,10,1) # network with 10 outputs
self.pool1 = nn.MaxPool2d((2,4),(2,4))
self.pool2 = nn.MaxPool2d((2,4),(2,4))
self.pool3 = nn.MaxPool2d((2,6),(2,6))
self.pool4 = nn.MaxPool2d((1,7),(1,7))
# self.sig = nn.Sigmoid()
def forward(self,x):
x = self.pool1(F.relu(self.conv1(x)))
x = self.pool2(F.relu(self.conv2(x)))
x = self.pool3(F.relu(self.conv3(x)))
x = self.pool4(F.relu(self.conv4(x)))
x = self.conv5(x)
x = x.view(-1, 50) # network with 50 outputs
# x = x.view(-1,10) # network with 10 outputs
# x = self.sig(x)
return x
net = Net()
net.to(device)
###Output
_____no_output_____
###Markdown
CHOICE 2 : Network with drop out and batch norm
- comment the line `net.to(device)` to run on CPU
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1,128,3)
self.norm1 = nn.BatchNorm2d(128)
self.conv2 = nn.Conv2d(128,384,3)
self.norm2 = nn.BatchNorm2d(384)
self.conv3 = nn.Conv2d(384,768,3)
self.norm3 = nn.BatchNorm2d(768)
self.conv4 = nn.Conv2d(768,2048,3)
self.conv5 = nn.Conv2d(2048,50,1) # network with 50 outputs
# self.conv5 = nn.Conv2d(2048,10,1) # network with 10 outputs
self.pool1 = nn.MaxPool2d((2,4),(2,4))
self.pool2 = nn.MaxPool2d((2,4),(2,4))
self.pool3 = nn.MaxPool2d((2,6),(2,6))
self.pool4 = nn.MaxPool2d((1,7),(1,7))
self.drop = nn.Dropout2d(.5)
# self.sig = nn.Sigmoid()
def forward(self,x):
x = self.pool1(F.relu(self.norm1(self.conv1(x))))
x = self.pool2(self.drop(F.relu(self.norm2(self.conv2(x)))))
x = self.pool3(self.drop(F.relu(self.norm3(self.conv3(x)))))
x = self.pool4(F.relu(self.conv4(x)))
x = self.conv5(x)
x = x.view(-1, 50) # network with 50 outputs
# x = x.view(-1,10) # network with 10 outputs
# x = self.sig(x)
return x
net = Net()
net.to(device)
###Output
_____no_output_____
###Markdown
CHOICE 3: new feature space
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv0 = nn.Conv2d(1,40,(40,2))
self.conv1 = nn.Conv2d(1,128,3)
self.norm1 = nn.BatchNorm2d(128)
self.conv2 = nn.Conv2d(128,384,3)
self.norm2 = nn.BatchNorm2d(384)
self.conv3 = nn.Conv2d(384,768,3)
self.norm3 = nn.BatchNorm2d(768)
self.conv4 = nn.Conv2d(768,2048,3)
self.conv5 = nn.Conv2d(2048,50,1) # network with 50 outputs
# self.conv5 = nn.Conv2d(2048,10,1) # network with 10 outputs
self.pool1 = nn.MaxPool2d((2,4),(2,4))
self.pool2 = nn.MaxPool2d((2,4),(2,4))
self.pool3 = nn.MaxPool2d((2,6),(2,6))
self.pool4 = nn.MaxPool2d((1,7),(1,7))
self.drop = nn.Dropout2d(.5)
# self.sig = nn.Sigmoid()
def forward(self,x):
x = F.relu(self.conv0(x))
# print(x.shape)
x = torch.reshape(torch.squeeze(x),(4,1,40,909))
x = self.pool1(F.relu(self.norm1(self.conv1(x))))
x = self.pool2(self.drop(F.relu(self.norm2(self.conv2(x)))))
x = self.pool3(self.drop(F.relu(self.norm3(self.conv3(x)))))
x = self.pool4(F.relu(self.conv4(x)))
x = self.conv5(x)
x = x.view(-1, 50) # network with 50 outputs
# x = x.view(-1,10) # network with 10 outputs
# x = self.sig(x)
return x
net = Net()
net.to(device)
###Output
_____no_output_____
###Markdown
Specify file to load pre-saved network parameters IF NETWORK PARAMETERS SAVED- specify filename in `filename`- following trained networks available - network_10_epoch_10_output_norm_all_ex.pt - network_10_epoch_BNDO_norm_all_ex.pt - network_shuffled.pt - network_10_epoch_1.pt - network_10_epoch_norm_all_ex.pt - network_10_epoch_2.pt - network_10_epoch_norm_binwise.pt
###Code
filename = 'network_10_epoch_BNDO_norm_all_ex.pt'
net.load_state_dict(torch.load(os.path.join('./networks',filename)))
net.eval()
###Output
_____no_output_____
###Markdown
creating the training , test and validation dataset
###Code
# indices = list(np.arange(len(keys)))
# shuffle(indices)
# train_ind = indices[0:15516]
# val_ind = indices[15517:20689]
# test_ind = indices[20690:]
# print(len(train_ind))
# print(len(val_ind))
# print(len(test_ind))
# with open('train_ind.pickle','wb') as handle:
# pickle.dump(train_ind,handle)
# with open('val_ind.pickle','wb') as handle:
# pickle.dump(val_ind,handle)
# with open('test_ind.pickle','wb') as handle:
# pickle.dump(test_ind,handle)
###Output
_____no_output_____
###Markdown
- available datasets:
  - combined_dict_norm_all_examples.pickle
  - combined_dict_norm_binwise.pickle
  - combined_dict_norm_per.pickle
  - combined_dict_updated.pickle
  - network_10_epoch_1
  - combined_dict_norm_all_examples_newSpecs.pkl
###Code
with open('../database/train_ind.pickle','rb') as handle:
train_ind = pickle.load(handle)
with open('../database/val_ind.pickle','rb') as handle:
val_ind = pickle.load(handle)
with open('../database/test_ind.pickle','rb') as handle:
test_ind = pickle.load(handle)
with open('../database/combined_dict_norm_all_examples.pickle','rb') as handle:
combined_dict = pickle.load(handle)
with open('../database/sorted_tags.pickle', 'rb') as handle:
sorted_stats = pickle.load(handle)
###Output
_____no_output_____
###Markdown
TRAINING: loading the stored training, test and validation data
###Code
# TEST TO CHECK CONTENTS OF DICTIONARY
plt.imshow(combined_dict['2']['mel_spectrum'][0],aspect=20)
plt.show()
###Output
_____no_output_____
###Markdown
Calculating weights for weighted Binary Cross Entropy Loss
###Code
pos_weight = []
for i in range(50): # make 50 for 50 output
pos_weight.append(sorted_stats[0][1]/sorted_stats[i][1])
pos_weight = np.array(pos_weight).reshape(1,50) # make 50 for 50 output
print(pos_weight)
pos_weight = torch.from_numpy(pos_weight)
pos_weight = pos_weight.float()
print(type(pos_weight))
###Output
[[ 1. 1.13576779 1.36791655 1.64251862 1.77794064 1.86759045
1.92616118 2.04639393 2.10407632 2.35992218 2.4805726 2.54564533
2.65717415 2.80624639 2.82585906 3.2917232 3.4781362 3.74382716
3.79358874 4.00660611 4.09797297 4.18998273 4.43915828 4.46777164
4.59905213 4.73365854 4.77559055 4.84231537 4.87638191 4.87638191
4.92588832 5.23974082 5.87409201 6.54790823 7.02170767 7.05232558
7.2962406 7.46461538 7.4761171 7.53416149 7.53416149 7.65299685
7.78812199 8.43826087 8.46771379 8.71095153 8.96857671 9.60792079
9.60792079 9.90204082]]
<class 'torch.Tensor'>
###Markdown
Loss function and Optimization functions
###Code
# criterion = nn.CrossEntropyLoss()
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight).cuda()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
keys = list(combined_dict.keys())
print(len(keys))
###Output
25863
###Markdown
- comment the line `inputs = inputs.to(device)` and `labels = labels.to(device)` to run on CPU
###Code
# remember to change the name of file before execution
batch_size=4
num_channels=1
start_time = time.time()
loss_hist=[]
for epoch in range(10): # loop over the dataset multiple times
running_loss = 0.0
loss_epoch = []
#creating a mini batch
for i in range(0,len(train_ind),4):
shp = combined_dict[keys[train_ind[i]]]['mel_spectrum'][0].shape
lab_shp = combined_dict[keys[train_ind[i]]]['output'].shape[0] # outputs 50 labels
# lab_shp=10 # outputs 10 labels
inputs = np.zeros((batch_size,num_channels,shp[0],shp[1]+1))
labels = np.zeros((batch_size,lab_shp))
for j in range(4):
inputs[j,0,:,0:-1] = combined_dict[keys[train_ind[i+j]]]['mel_spectrum'][0]
labels[j,:] = combined_dict[keys[train_ind[i+j]]]['output'] # remove last indexing if output is 50 labels
# labels[j,:] = labels[j,:]*np.arange(50) # was done for crossentropyloss
inputs = torch.from_numpy(inputs)
inputs = inputs.float()
labels = torch.from_numpy(labels)
labels = labels.float()
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs,labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
# print(i)
if i % 20 == 0: # every 50 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 5))
loss_epoch.append(running_loss/5)
running_loss = 0.0
loss_hist.append(loss_epoch)
torch.save(net.state_dict(),'./net_ep/network_'+str(epoch)+'.pt')
end_time = time.time()
print('Finished Training')
print('Exe. Time:',end_time-start_time)
###Output
[1, 1] loss: 0.151
[1, 21] loss: 0.765
[1, 41] loss: 0.782
[1, 61] loss: 0.812
[1, 81] loss: 0.761
[1, 101] loss: 0.809
[1, 121] loss: 0.762
[1, 141] loss: 0.746
[1, 161] loss: 0.713
[1, 181] loss: 0.759
[1, 201] loss: 0.708
[1, 221] loss: 0.700
[1, 241] loss: 0.727
[1, 261] loss: 0.655
[1, 281] loss: 0.673
[1, 301] loss: 0.587
[1, 321] loss: 0.644
[1, 341] loss: 0.564
[1, 361] loss: 0.575
[1, 381] loss: 0.571
[1, 401] loss: 0.568
[1, 421] loss: 0.616
[1, 441] loss: 0.506
[1, 461] loss: 0.521
[1, 481] loss: 0.576
[1, 501] loss: 0.565
[1, 521] loss: 0.587
[1, 541] loss: 0.553
[1, 561] loss: 0.507
[1, 581] loss: 0.427
[1, 601] loss: 0.574
[1, 621] loss: 0.533
[1, 641] loss: 0.540
[1, 661] loss: 0.508
[1, 681] loss: 0.507
[1, 701] loss: 0.554
[1, 721] loss: 0.483
[1, 741] loss: 0.637
[1, 761] loss: 0.499
[1, 781] loss: 0.593
[1, 801] loss: 0.544
[1, 821] loss: 0.503
[1, 841] loss: 0.540
[1, 861] loss: 0.500
[1, 881] loss: 0.569
[1, 901] loss: 0.414
[1, 921] loss: 0.555
[1, 941] loss: 0.537
[1, 961] loss: 0.508
[1, 981] loss: 0.453
[1, 1001] loss: 0.404
[1, 1021] loss: 0.379
[1, 1041] loss: 0.583
[1, 1061] loss: 0.466
[1, 1081] loss: 0.586
[1, 1101] loss: 0.444
[1, 1121] loss: 0.487
[1, 1141] loss: 0.547
[1, 1161] loss: 0.475
[1, 1181] loss: 0.472
[1, 1201] loss: 0.493
[1, 1221] loss: 0.500
[1, 1241] loss: 0.445
[1, 1261] loss: 0.480
[1, 1281] loss: 0.581
[1, 1301] loss: 0.506
[1, 1321] loss: 0.507
[1, 1341] loss: 0.562
[1, 1361] loss: 0.459
[1, 1381] loss: 0.603
[1, 1401] loss: 0.670
[1, 1421] loss: 0.522
[1, 1441] loss: 0.471
[1, 1461] loss: 0.443
[1, 1481] loss: 0.505
[1, 1501] loss: 0.485
[1, 1521] loss: 0.494
[1, 1541] loss: 0.565
[1, 1561] loss: 0.439
[1, 1581] loss: 0.564
[1, 1601] loss: 0.502
[1, 1621] loss: 0.468
[1, 1641] loss: 0.581
[1, 1661] loss: 0.506
[1, 1681] loss: 0.500
[1, 1701] loss: 0.534
[1, 1721] loss: 0.452
[1, 1741] loss: 0.532
[1, 1761] loss: 0.641
[1, 1781] loss: 0.549
[1, 1801] loss: 0.485
[1, 1821] loss: 0.562
[1, 1841] loss: 0.649
[1, 1861] loss: 0.618
[1, 1881] loss: 0.519
[1, 1901] loss: 0.562
[1, 1921] loss: 0.502
[1, 1941] loss: 0.442
[1, 1961] loss: 0.548
[1, 1981] loss: 0.597
[1, 2001] loss: 0.524
[1, 2021] loss: 0.535
[1, 2041] loss: 0.438
[1, 2061] loss: 0.468
[1, 2081] loss: 0.556
[1, 2101] loss: 0.547
[1, 2121] loss: 0.414
[1, 2141] loss: 0.439
[1, 2161] loss: 0.513
[1, 2181] loss: 0.510
[1, 2201] loss: 0.480
[1, 2221] loss: 0.569
[1, 2241] loss: 0.341
[1, 2261] loss: 0.488
[1, 2281] loss: 0.440
[1, 2301] loss: 0.479
[1, 2321] loss: 0.481
[1, 2341] loss: 0.596
[1, 2361] loss: 0.484
[1, 2381] loss: 0.506
[1, 2401] loss: 0.432
[1, 2421] loss: 0.497
[1, 2441] loss: 0.444
[1, 2461] loss: 0.519
[1, 2481] loss: 0.535
[1, 2501] loss: 0.463
[1, 2521] loss: 0.533
[1, 2541] loss: 0.538
[1, 2561] loss: 0.409
[1, 2581] loss: 0.524
[1, 2601] loss: 0.495
[1, 2621] loss: 0.523
[1, 2641] loss: 0.466
[1, 2661] loss: 0.439
[1, 2681] loss: 0.564
[1, 2701] loss: 0.489
[1, 2721] loss: 0.519
[1, 2741] loss: 0.504
[1, 2761] loss: 0.531
[1, 2781] loss: 0.369
[1, 2801] loss: 0.656
[1, 2821] loss: 0.617
[1, 2841] loss: 0.502
[1, 2861] loss: 0.485
[1, 2881] loss: 0.527
[1, 2901] loss: 0.499
[1, 2921] loss: 0.465
[1, 2941] loss: 0.495
[1, 2961] loss: 0.444
[1, 2981] loss: 0.501
[1, 3001] loss: 0.382
[1, 3021] loss: 0.590
[1, 3041] loss: 0.506
[1, 3061] loss: 0.546
[1, 3081] loss: 0.392
[1, 3101] loss: 0.518
[1, 3121] loss: 0.388
[1, 3141] loss: 0.562
[1, 3161] loss: 0.412
[1, 3181] loss: 0.476
[1, 3201] loss: 0.486
[1, 3221] loss: 0.351
[1, 3241] loss: 0.411
[1, 3261] loss: 0.453
[1, 3281] loss: 0.439
[1, 3301] loss: 0.488
[1, 3321] loss: 0.452
[1, 3341] loss: 0.465
[1, 3361] loss: 0.477
[1, 3381] loss: 0.385
[1, 3401] loss: 0.450
[1, 3421] loss: 0.514
[1, 3441] loss: 0.571
[1, 3461] loss: 0.509
[1, 3481] loss: 0.610
[1, 3501] loss: 0.484
[1, 3521] loss: 0.488
[1, 3541] loss: 0.503
[1, 3561] loss: 0.498
[1, 3581] loss: 0.454
[1, 3601] loss: 0.474
[1, 3621] loss: 0.519
[1, 3641] loss: 0.454
[1, 3661] loss: 0.408
[1, 3681] loss: 0.409
[1, 3701] loss: 0.546
[1, 3721] loss: 0.510
[1, 3741] loss: 0.473
[1, 3761] loss: 0.432
[1, 3781] loss: 0.493
[1, 3801] loss: 0.372
[1, 3821] loss: 0.407
[1, 3841] loss: 0.563
[1, 3861] loss: 0.456
[1, 3881] loss: 0.455
[1, 3901] loss: 0.437
[1, 3921] loss: 0.602
[1, 3941] loss: 0.593
[1, 3961] loss: 0.456
[1, 3981] loss: 0.455
[1, 4001] loss: 0.399
[1, 4021] loss: 0.496
[1, 4041] loss: 0.501
[1, 4061] loss: 0.486
[1, 4081] loss: 0.456
[1, 4101] loss: 0.569
[1, 4121] loss: 0.518
[1, 4141] loss: 0.497
[1, 4161] loss: 0.610
[1, 4181] loss: 0.480
[1, 4201] loss: 0.411
[1, 4221] loss: 0.406
[1, 4241] loss: 0.503
[1, 4261] loss: 0.375
[1, 4281] loss: 0.633
[1, 4301] loss: 0.480
[1, 4321] loss: 0.428
[1, 4341] loss: 0.561
[1, 4361] loss: 0.439
[1, 4381] loss: 0.430
[1, 4401] loss: 0.469
[1, 4421] loss: 0.407
[1, 4441] loss: 0.426
[1, 4461] loss: 0.500
[1, 4481] loss: 0.544
[1, 4501] loss: 0.440
[1, 4521] loss: 0.415
[1, 4541] loss: 0.618
[1, 4561] loss: 0.450
[1, 4581] loss: 0.360
[1, 4601] loss: 0.474
[1, 4621] loss: 0.490
[1, 4641] loss: 0.461
[1, 4661] loss: 0.523
[1, 4681] loss: 0.476
[1, 4701] loss: 0.383
[1, 4721] loss: 0.510
[1, 4741] loss: 0.458
[1, 4761] loss: 0.466
[1, 4781] loss: 0.521
[1, 4801] loss: 0.399
[1, 4821] loss: 0.486
[1, 4841] loss: 0.573
[1, 4861] loss: 0.361
[1, 4881] loss: 0.493
[1, 4901] loss: 0.501
[1, 4921] loss: 0.510
[1, 4941] loss: 0.528
[1, 4961] loss: 0.477
[1, 4981] loss: 0.466
[1, 5001] loss: 0.457
[1, 5021] loss: 0.632
[1, 5041] loss: 0.372
[1, 5061] loss: 0.455
[1, 5081] loss: 0.451
[1, 5101] loss: 0.524
[1, 5121] loss: 0.510
[1, 5141] loss: 0.440
[1, 5161] loss: 0.483
[1, 5181] loss: 0.554
[1, 5201] loss: 0.507
[1, 5221] loss: 0.438
[1, 5241] loss: 0.390
[1, 5261] loss: 0.521
[1, 5281] loss: 0.575
[1, 5301] loss: 0.582
[1, 5321] loss: 0.502
[1, 5341] loss: 0.598
[1, 5361] loss: 0.616
[1, 5381] loss: 0.584
[1, 5401] loss: 0.405
[1, 5421] loss: 0.400
[1, 5441] loss: 0.618
[1, 5461] loss: 0.463
[1, 5481] loss: 0.423
[1, 5501] loss: 0.434
[1, 5521] loss: 0.551
[1, 5541] loss: 0.517
[1, 5561] loss: 0.448
[1, 5581] loss: 0.463
[1, 5601] loss: 0.575
[1, 5621] loss: 0.517
[1, 5641] loss: 0.620
[1, 5661] loss: 0.524
[1, 5681] loss: 0.442
[1, 5701] loss: 0.515
[1, 5721] loss: 0.463
[1, 5741] loss: 0.590
[1, 5761] loss: 0.429
[1, 5781] loss: 0.368
[1, 5801] loss: 0.489
[1, 5821] loss: 0.413
[1, 5841] loss: 0.391
[1, 5861] loss: 0.560
[1, 5881] loss: 0.503
[1, 5901] loss: 0.455
[1, 5921] loss: 0.463
[1, 5941] loss: 0.620
[1, 5961] loss: 0.514
[1, 5981] loss: 0.513
[1, 6001] loss: 0.637
[1, 6021] loss: 0.456
[1, 6041] loss: 0.569
[1, 6061] loss: 0.525
[1, 6081] loss: 0.461
[1, 6101] loss: 0.410
[1, 6121] loss: 0.503
[1, 6141] loss: 0.484
[1, 6161] loss: 0.560
[1, 6181] loss: 0.502
[1, 6201] loss: 0.448
[1, 6221] loss: 0.546
[1, 6241] loss: 0.421
[1, 6261] loss: 0.512
[1, 6281] loss: 0.441
[1, 6301] loss: 0.457
[1, 6321] loss: 0.362
[1, 6341] loss: 0.402
[1, 6361] loss: 0.372
[1, 6381] loss: 0.436
[1, 6401] loss: 0.508
[1, 6421] loss: 0.483
[1, 6441] loss: 0.518
[1, 6461] loss: 0.310
[1, 6481] loss: 0.388
[1, 6501] loss: 0.449
[1, 6521] loss: 0.425
[1, 6541] loss: 0.427
[1, 6561] loss: 0.532
[1, 6581] loss: 0.498
[1, 6601] loss: 0.483
[1, 6621] loss: 0.421
[1, 6641] loss: 0.455
[1, 6661] loss: 0.558
[1, 6681] loss: 0.486
[1, 6701] loss: 0.530
[1, 6721] loss: 0.526
[1, 6741] loss: 0.649
[1, 6761] loss: 0.400
[1, 6781] loss: 0.556
[1, 6801] loss: 0.430
[1, 6821] loss: 0.458
[1, 6841] loss: 0.457
[1, 6861] loss: 0.387
[1, 6881] loss: 0.568
[1, 6901] loss: 0.400
[1, 6921] loss: 0.467
[1, 6941] loss: 0.538
[1, 6961] loss: 0.514
[1, 6981] loss: 0.549
[1, 7001] loss: 0.430
[1, 7021] loss: 0.335
[1, 7041] loss: 0.477
[1, 7061] loss: 0.461
[1, 7081] loss: 0.468
[1, 7101] loss: 0.346
[1, 7121] loss: 0.386
[1, 7141] loss: 0.465
###Markdown
to save the network parameters onto disk
###Code
# change name before saving the network
torch.save(net.state_dict(),'./networks/network_10_epoch_new_model.pt')
###Output
_____no_output_____
###Markdown
CODE FOR VALIDATION SET PERFORMANCE
###Code
keys = list(combined_dict.keys())
print(len(keys))
###Output
25863
###Markdown
- comment line `inputs = inputs.to(device)` to run on CPU
###Code
batch_size=4
num_channels=1
out_all = np.zeros((1,50))# change for 50
labels_all = np.zeros((1,50)) # change for 50
for i in range(0,len(val_ind),4):
shp = combined_dict[keys[val_ind[i]]]['mel_spectrum'][0].shape
lab_shp = combined_dict[keys[val_ind[i]]]['output'].shape[0]
# lab_shp = 10 # uncomment for 10 output
inputs = np.zeros((batch_size,num_channels,shp[0],shp[1]+1))
labels = np.zeros((batch_size,lab_shp))
for j in range(4):
inputs[j,0,:,0:-1] = combined_dict[keys[val_ind[i+j]]]['mel_spectrum'][0]
labels[j,:] = combined_dict[keys[val_ind[i+j]]]['output'] # remove last indexing if output is 50 labels
labels_all = np.concatenate((labels_all,labels),axis=0)
inputs = torch.from_numpy(inputs)
inputs = inputs.float()
inputs = inputs.to(device)
outputs = net(inputs)
temp = outputs.cpu().detach().numpy()
out_all = np.concatenate((out_all,temp),axis=0)
if i%100 == 0:
print(i)
print('Finished')
labels_all = labels_all[1:,:]
out_all = out_all[1:,:]
print(labels_all.shape)
print(out_all.shape)
out_all_tensor = torch.from_numpy(out_all)
out_all_prob = torch.sigmoid(out_all_tensor).numpy() # for auc_roc
out_all_bin_tensor = torch.round(torch.sigmoid(out_all_tensor))
# out_all_bin = np.where(out_all_prob<.3,0,1) # to change the threshold of rounding
out_all_bin = out_all_bin_tensor.numpy() # for other metrics
print(out_all_bin.shape)
###Output
_____no_output_____
###Markdown
calculating per tag metrics
###Code
metrics_dict={}
for i in range(50): # change for 50
# print(i)
precision,recall,fscore,support = metrics.precision_recall_fscore_support(labels_all[:,i],out_all_bin[:,i])
metrics_dict[sorted_stats[i][0]] = {}
metrics_dict[sorted_stats[i][0]]['precision'] = precision
metrics_dict[sorted_stats[i][0]]['recall'] = recall
metrics_dict[sorted_stats[i][0]]['fscore'] = fscore
metrics_dict[sorted_stats[i][0]]['support'] = support
for key in metrics_dict.keys():
print(key,':',metrics_dict[key])
print('\n')
###Output
_____no_output_____
###Markdown
Calculating the metrics (precision, recall, fscore) using all tags at once
###Code
precision,recall,fscore,support = metrics.precision_recall_fscore_support(labels_all,out_all_bin)
###Output
_____no_output_____
###Markdown
calculating the AUC-ROC curve
###Code
auc_roc = metrics.roc_auc_score(labels_all,out_all_prob)
print(auc_roc)
###Output
_____no_output_____
###Markdown
CODE FOR TEST SET
###Code
batch_size=4
num_channels=1
out_all = np.zeros((1,50))# change for 50
labels_all = np.zeros((1,50)) # change for 50
auc_all = []
for j in range(10):
print(j)
filename = 'network_'+str(j)+'.pt'
net.load_state_dict(torch.load(os.path.join('./net_ep',filename)))
net.eval()
for i in range(0,len(test_ind)-4,4):
shp = combined_dict[keys[test_ind[i]]]['mel_spectrum'][0].shape
lab_shp = combined_dict[keys[test_ind[i]]]['output'].shape[0]
# lab_shp = 10 # uncomment for 10 output
inputs = np.zeros((batch_size,num_channels,shp[0],shp[1]+1))
labels = np.zeros((batch_size,lab_shp))
for j in range(4):
inputs[j,0,:,0:-1] = combined_dict[keys[test_ind[i+j]]]['mel_spectrum'][0]
labels[j,:] = combined_dict[keys[test_ind[i+j]]]['output'] # remove last indexing if output is 50 labels
labels_all = np.concatenate((labels_all,labels),axis=0)
inputs = torch.from_numpy(inputs)
inputs = inputs.float()
inputs = inputs.to(device)
outputs = net(inputs)
temp = outputs.cpu().detach().numpy()
out_all = np.concatenate((out_all,temp),axis=0)
if i%100 == 0:
print(i)
labels_all = labels_all[1:,:]
out_all = out_all[1:,:]
print(labels_all.shape)
print(out_all.shape)
out_all_tensor = torch.from_numpy(out_all)
out_all_prob = torch.sigmoid(out_all_tensor).numpy() # for auc_roc
out_all_bin_tensor = torch.round(torch.sigmoid(out_all_tensor))
# out_all_bin = np.where(out_all_prob<.3,0,1) # to change the threshold of rounding
out_all_bin = out_all_bin_tensor.numpy() # for other metrics
print(out_all_bin.shape)
auc_roc = metrics.roc_auc_score(labels_all,out_all_prob)
auc_all.append(auc_roc)
print('Finished')
plt.figure()
plt.plot(auc_all)
plt.xlabel('epochs')
plt.ylabel('auc score')
labels_all = labels_all[1:,:]
out_all = out_all[1:,:]
print(labels_all.shape)
print(out_all.shape)
out_all_tensor = torch.from_numpy(out_all)
out_all_prob = torch.sigmoid(out_all_tensor).numpy() # for auc_roc
out_all_bin_tensor = torch.round(torch.sigmoid(out_all_tensor))
# out_all_bin = np.where(out_all_prob<.3,0,1) # to change the threshold of rounding
out_all_bin = out_all_bin_tensor.numpy() # for other metrics
print(out_all_bin.shape)
auc_roc = metrics.roc_auc_score(labels_all,out_all_prob)
print(auc_roc)
###Output
_____no_output_____
###Markdown
- storing all metrics in the dictionary
###Code
metrics_dict['all_precision'] = precision
metrics_dict['all_recall'] = recall
metrics_dict['all_fscore'] = fscore
metrics_dict['all_support'] = support
metrics_dict['auc_roc'] = auc_roc
metrics_dict['loss_hist'] = loss_hist
# saving the metric to disk
with open('./metrics/metrics_10_epoch_BNDO_norm_all_ex_test.pickle','wb') as handle:
pickle.dump(metrics_dict,handle)
###Output
_____no_output_____ |
Data Science/Python Module Quick Overview/Pandas - Manipulation.ipynb | ###Markdown
Series Math
###Code
ser1 = Series([0,1,2],index=['A','B','C'])
ser2 = Series([3,4,5,6],index=['A','B','C','D'])
print(ser1)
print(ser2)
#Note the NaN values are added in automatically
ser1 + ser2
###Output
_____no_output_____
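###Markdown
If those NaN values are unwanted, the same addition can be done with the `add` method and a `fill_value`, so a label present in only one Series is treated as 0 in the other.
###Code
# same operation, but missing labels are filled with 0 before adding
ser1.add(ser2, fill_value=0)
###Output
_____no_output_____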
###Markdown
Dataframe Math
###Code
dframe1 = DataFrame(np.arange(4).reshape(2,2),columns=list('AB'),index=['NYC','LA'])
dframe1
dframe2 = DataFrame(np.arange(9).reshape(3,3),columns=list('ADC'),index=['NYC','SF','LA'])
dframe2
# better approach
dframe1.add(dframe2,fill_value=0)
ser3 = dframe2.iloc[0] # select the first row by position (.ix was removed from modern pandas)
ser3
dframe2-ser3
###Output
_____no_output_____
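###Markdown
Subtracting a Series broadcasts along the rows by default (matching on column labels). To broadcast down the columns instead (matching on the index), use the `sub` method with `axis=0`.
###Code
# subtract the 'A' column from every column of dframe2, row by row
dframe2.sub(dframe2['A'], axis=0)
###Output
_____no_output_____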
###Markdown
Ordering
###Code
ser1 = Series(range(3),index=['C','A','B'])
ser1
#Now sort_index
ser1.sort_index()
#Can sort a Series by its values
ser1.sort_values()
###Output
_____no_output_____
###Markdown
Merging More info: http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.merge.html
###Code
dframe1 = DataFrame({'key':['X','Z','Y','Z','X','X'],'data_set_1': np.arange(6)})
dframe1
dframe2 = DataFrame({'key':['Q','Y','Z'],'data_set_2':[1,2,3]})
dframe2
# Merge will automatically choose overlapping columns to merge on
# this is a "many-to-one" situation
pd.merge(dframe1,dframe2)
#Note no overlapping 'X's
#choose which DataFrame's keys to use, this will choose left (dframe1)
pd.merge(dframe1,dframe2,on='key',how='left')
pd.merge(dframe1,dframe2,on='key',how='outer')
# many to many merge
pd.merge(dframe1, dframe2)
###Output
_____no_output_____
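###Markdown
When debugging an outer merge it helps to know where each row came from. Passing `indicator=True` (available in reasonably recent pandas) adds a `_merge` column marking rows as left_only, right_only or both.
###Code
# same outer merge as above, with a provenance column
pd.merge(dframe1,dframe2,on='key',how='outer',indicator=True)
###Output
_____no_output_____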
###Markdown
Joins
###Code
df_left = DataFrame({'key': ['X','Y','Z','X','Y'],
'data': range(5)})
df_right = DataFrame({'group_data': [10, 20]}, index=['X', 'Y'])
print(df_left)
print(df_right)
# use key for left DF and index for the right
pd.merge(df_left,df_right,left_on='key',right_index=True)
# union by using outer
pd.merge(df_left,df_right,left_on='key',right_index=True,how='outer')
df_left.join(df_right)
df_right.join(df_left)
df_left_hr = DataFrame({'key1': ['SF','SF','SF','LA','LA'],
'key2': [10, 20, 30, 20, 30],
'data_set': np.arange(5.)})
df_right_hr = DataFrame(np.arange(10).reshape((5, 2)),
index=[['LA','LA','SF','SF','SF'],
[20, 10, 10, 10, 20]],
columns=['col_1', 'col_2'])
print(df_left_hr)
print(df_right_hr)
# merge the left by using keys and the right by its index
pd.merge(df_left_hr,df_right_hr,left_on=['key1','key2'],right_index=True)
# union by choosing 'outer' method
pd.merge(df_left_hr,df_right_hr,left_on=['key1','key2'],right_index=True,how='outer')
###Output
_____no_output_____
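###Markdown
`join` can also match a column of the calling DataFrame against the other DataFrame's index via the `on` argument, which reproduces the key-to-index merge shown above.
###Code
# equivalent to pd.merge(df_left, df_right, left_on='key', right_index=True, how='left')
df_left.join(df_right, on='key')
###Output
_____no_output_____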
###Markdown
Pivoting
###Code
import pandas.util.testing as tm; tm.N = 3
#Create a unpivoted function
def unpivot(frame):
N, K = frame.shape
data = {'value' : frame.values.ravel('F'),
'variable' : np.asarray(frame.columns).repeat(N),
'date' : np.tile(np.asarray(frame.index), K)}
# Return the DataFrame
return DataFrame(data, columns=['date', 'variable', 'value'])
#Set the DataFrame we'll be using
dframe = unpivot(tm.makeTimeDataFrame())
dframe
# First two values passed are the row and column indexes, then finally an optional fill value
dframe_piv = dframe.pivot('date','variable','value')
#Show
dframe_piv
###Output
_____no_output_____
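###Markdown
`pivot` requires each (row, column) pair to be unique. When duplicates are possible, `pivot_table` does the same reshaping but aggregates the duplicates (mean by default).
###Code
# same reshape as above; duplicate (date, variable) pairs would be averaged instead of raising an error
pd.pivot_table(dframe, index='date', columns='variable', values='value', aggfunc='mean')
###Output
_____no_output_____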
###Markdown
Binning One last thing to note, just like in standard math notation, when setting up bins: () means open, while [] means closed/inclusive
###Code
years = [1990,1991,1992,2008,2012,2015,1987,1969,2013,2008,1999]
decade_bins = [1960,1970,1980,1990,2000,2010,2020]
#Now we'll use cut to get something called a Category object
decade_cat = pd.cut(years,decade_bins)
decade_cat
decade_cat.categories
pd.value_counts(decade_cat)
# pass data values to the cut.
#For instance, if we just wanted to make two bins, evenly spaced based on max and min year, with a 1 year precision
pd.cut(years,2,precision=1)
###Output
_____no_output_____
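###Markdown
`cut` spaces the bin edges evenly over the value range, so the bins can end up with very different counts. `qcut` instead places the edges at quantiles, giving roughly equally populated bins.
###Code
# two bins with approximately equal numbers of years in each
pd.qcut(years, 2)
###Output
_____no_output_____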
###Markdown
Group By
###Code
dframe = DataFrame({'k1':['X','X','Y','Y','Z'],
'k2':['alpha','beta','alpha','beta','alpha'],
'dataset1':np.random.randn(5),
'dataset2':np.random.randn(5)})
dframe
# dataset1 column and group it by the k1 key
group1 = dframe.dataset1.groupby(dframe.k1)
group1
group1.mean()
#We'll make some arrays for use as keys
cities = np.array(['NY','LA','LA','NY','NY'])
months = np.array(['JAN','FEB','JAN','FEB','JAN'])
# using the data from dataset1, group the means by city and month
dframe.dataset1.groupby([cities,months]).mean()
#pass column names as group keys
dframe.groupby('k1').mean()
# multiple column names
dframe.groupby(['k1','k2']).mean()
# groupby method is getting the group sizes
dframe.groupby(['k1']).size()
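# (optional aside) several statistics at once per group; the dict form of agg keeps this version-friendly
dframe.groupby('k1').agg({'dataset1': 'mean', 'dataset2': 'max'})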
# iterate over groups
#For example:
for name,group in dframe.groupby('k1'):
print ("This is the %s group" %name)
print (group)
print ('\n')
animals = DataFrame(np.arange(16).reshape(4, 4),
columns=['W', 'X', 'Y', 'Z'],
index=['Dog', 'Cat', 'Bird', 'Mouse'])
#Now let's add some NaN values
animals.loc['Cat', ['W', 'Y']] = np.nan # same single row as before; .ix was removed from modern pandas
#Show
animals
behavior_map = {'W': 'good', 'X': 'bad', 'Y': 'good','Z': 'bad'}
#groupby using that mapping
animal_col = animals.groupby(behavior_map, axis=1)
# Show the sum according to the groupby with the mapping
animal_col.sum()
animals.groupby(behavior_map, axis=1).count()
animals.groupby(behavior_map, axis=1).describe()
###Output
_____no_output_____ |
examples/ImageNet Pretrained Network (VGG_S).ipynb | ###Markdown
IntroductionThis example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http://arxiv.org/abs/1405.3531) in [Caffe's Model Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo). For details of the conversion process, see the example notebook "Using a Caffe Pretrained Network - CIFAR10". LicenseThe model is licensed for non-commercial use only Download the model (393 MB)
###Code
!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl
###Output
_____no_output_____
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import lasagne
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import MaxPool2DLayer as PoolLayer
from lasagne.layers import LocalResponseNormalization2DLayer as NormLayer
from lasagne.utils import floatX
###Output
_____no_output_____
###Markdown
Define the network
###Code
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2)
net['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size
net['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3, ignore_border=False)
net['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1)
net['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1)
net['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1)
net['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3, ignore_border=False)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['drop6'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['drop6'], num_units=4096)
net['drop7'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)
output_layer = net['fc8']
###Output
_____no_output_____
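###Markdown
As a quick sanity check (a sketch, assuming the `net` dictionary defined above), Lasagne can report each layer's inferred output shape without running any data, which makes it easy to confirm the architecture matches the converted Caffe model.
###Code
# print the symbolic output shape of every named layer (dictionary order, not network order)
for name, layer in net.items():
    print('{}: {}'.format(name, lasagne.layers.get_output_shape(layer)))
###Output
_____no_output_____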
###Markdown
Load the model parameters and metadata
###Code
import pickle
model = pickle.load(open('vgg_cnn_s.pkl'))
CLASSES = model['synset words']
MEAN_IMAGE = model['mean image']
lasagne.layers.set_all_param_values(output_layer, model['values'])
###Output
_____no_output_____
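###Markdown
The cell above assumes Python 2. Under Python 3 a pickle written by Python 2 typically needs binary mode and a byte-friendly encoding; a hedged sketch of the equivalent load:
###Code
# Python 3 variant of the load above; 'latin-1' maps Python 2 byte strings and numpy arrays safely
with open('vgg_cnn_s.pkl', 'rb') as f:
    model = pickle.load(f, encoding='latin-1')
###Output
_____no_output_____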
###Markdown
Trying it out Get some test imagesWe'll download the ILSVRC2012 validation URLs and pick a few at random
###Code
import urllib
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
###Output
_____no_output_____
###Markdown
Helper to fetch and preprocess images
###Code
import io
import skimage.transform
def prep_image(url):
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
# Resize so smallest dim = 256, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_IMAGE
return rawim, floatX(im[np.newaxis])
###Output
_____no_output_____
###Markdown
Process test images and print top 5 predicted labels
###Code
for url in image_urls:
try:
rawim, im = prep_image(url)
prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
###Output
_____no_output_____
###Markdown
IntroductionThis example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http://arxiv.org/abs/1405.3531) in [Caffe's Model Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo). For details of the conversion process, see the example notebook "Using a Caffe Pretrained Network - CIFAR10". LicenseThe model is licensed for non-commercial use only Download the model (393 MB)
###Code
!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl
###Output
_____no_output_____
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import lasagne
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import MaxPool2DLayer as PoolLayer
from lasagne.layers import LocalResponseNormalization2DLayer as NormLayer
from lasagne.utils import floatX
###Output
_____no_output_____
###Markdown
Define the network
###Code
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2, flip_filters=False)
net['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size
net['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3, ignore_border=False)
net['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5, flip_filters=False)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1, flip_filters=False)
net['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1, flip_filters=False)
net['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1, flip_filters=False)
net['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3, ignore_border=False)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['drop6'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['drop6'], num_units=4096)
net['drop7'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)
output_layer = net['fc8']
###Output
_____no_output_____
###Markdown
Load the model parameters and metadata
###Code
import pickle
model = pickle.load(open('vgg_cnn_s.pkl'))
CLASSES = model['synset words']
MEAN_IMAGE = model['mean image']
lasagne.layers.set_all_param_values(output_layer, model['values'])
###Output
_____no_output_____
###Markdown
Trying it out Get some test imagesWe'll download the ILSVRC2012 validation URLs and pick a few at random
###Code
import urllib
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
###Output
_____no_output_____
###Markdown
Helper to fetch and preprocess images
###Code
import io
import skimage.transform
def prep_image(url):
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
# Resize so smallest dim = 256, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_IMAGE
return rawim, floatX(im[np.newaxis])
###Output
_____no_output_____
###Markdown
Process test images and print top 5 predicted labels
###Code
for url in image_urls:
try:
rawim, im = prep_image(url)
prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
###Output
_____no_output_____ |
Source Code/Light GBM classifier (17 features).ipynb | ###Markdown
Hypothyroid prediction using Light GBM classifier
###Code
import pandas as pd
import numpy as np
cd F:\Thyroid Final
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
file_handler = open("hypothyroid.csv", "r")
df = pd.read_csv(file_handler, sep = ",")
file_handler.close()
df.head(2)
df.loc[df['Age'] == '455', 'Age'] = '45'
df.dropna(inplace=True)
df.replace(to_replace='?', inplace=True)
df.dropna(inplace=True)
df = df.replace(to_replace={'f':0,'t':1, 'y':1, 'n':0,'M':0,'F':1})
df = df.replace(to_replace={'?':True})
df.dropna(inplace=True)
from sklearn.preprocessing import LabelEncoder
lb_make = LabelEncoder()
df["class"] = lb_make.fit_transform(df["class"])
x=df.iloc[:,[1,2,3,4,6,9,10,11,12,13,14,15,16,17,18,19]]
y=df["class"]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y,test_size=0.4,random_state=1)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
import lightgbm as ltb
model = ltb.LGBMClassifier(boosting_type='dart',max_depth=10,num_leaves=90,binary="log loss",learning_rate=0.199,
objective="cross_entropy",extra_trees="True",tree_learner="data",metric="binary_logloss")
model.fit(x_train, y_train)
from sklearn import metrics
expected_y = y_test
y_pred= model.predict(x_test)
# summarize the fit of the model
print(); print(metrics.classification_report(expected_y, y_pred))
print(); print(metrics.confusion_matrix(expected_y,y_pred))
print("Accuracy: ",metrics.accuracy_score(expected_y, y_pred))
# examine the class distribution of the testing set (using a Pandas Series method)
y_test.value_counts()
# calculate the percentage of ones
# because y_test only contains ones and zeros, we can simply calculate the mean = percentage of ones
y_test.mean()
# calculate the percentage of zeros
1 - y_test.mean()
# calculate null accuracy (for multi-class classification problems)
y_test.value_counts().head(1) / len(y_test)
print("classification_error")
print(1 - metrics.accuracy_score(y_test, y_pred))
import matplotlib.pyplot as plt
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.rcParams['font.size'] = 12
plt.title('ROC curve Using Light GBM(17 attributes)')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
# calculate accuracy
from sklearn import metrics
print("ACCURACY:")
print(metrics.accuracy_score(y_test, y_pred))
confusion = metrics.confusion_matrix(y_test, y_pred)
print(confusion)
#[row, column]
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
print("classification_error")
print(1 - metrics.accuracy_score(y_test, y_pred))
print("sensitivity")
print(metrics.recall_score(y_test, y_pred))
print("True Positive Rate")
specificity = TN / (TN + FP)
print(specificity)
print("precision")
print(metrics.precision_score(y_test, y_pred))
print("roc_auc_score")
print(metrics.roc_auc_score(y_test, y_pred))
from sklearn.metrics import f1_score
score = f1_score(y_test, y_pred, average='binary')
print('F-Measure: %.3f' % score)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, model.predict(x_test))
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(cm)
ax.grid(False)
ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted Hypothyroid', 'Predicted Negative'))
ax.yaxis.set(ticks=(0, 1), ticklabels=('Actual Hypothyroid', 'Actual Negative'))
ax.set_ylim(1.5, -0.5)
for i in range(2):
for j in range(2):
ax.text(j, i, cm[i, j], ha='center', va='center', color='red')
plt.title("Confusion matrix using Light GBM (17 attributes)")
plt.show()
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
cv_results = cross_val_score(model, x_train, y_train, cv=kfold, scoring='accuracy')
cv_results
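# (optional sketch) which of the 17 features drive the model? plot_importance is part of the
# lightgbm plotting API and works directly on the fitted classifier above
ltb.plot_importance(model, max_num_features=17)
plt.title("Feature importance using Light GBM (17 attributes)")
plt.show()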
###Output
_____no_output_____ |
Python_Stock/Options_Strategies/Black_Scholes_Stock_Puts.ipynb | ###Markdown
Black Scholes Stock Puts Inputs
###Code
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import yfinance as yf
dfo = yf.Ticker("AAPL")
dfo.options
dfo_exp = dfo.option_chain('2020-05-28')
dfo_exp.puts
symbol = 'AAPL'
start = '2019-12-01'
end = '2020-04-02'
df = yf.download(symbol,start,end)
df.head()
df.tail()
returns = df['Adj Close'].pct_change().dropna()
from datetime import datetime
from dateutil import relativedelta
d1 = datetime.strptime(start, "%Y-%m-%d")
d2 = datetime.strptime('2020-05-28', "%Y-%m-%d")
delta = relativedelta.relativedelta(d2,d1)
print('How many years of investing?')
print('%s years' % delta.years)
maturity_days = (df.index[-1] - df.index[0]).days
print('%s days' % maturity_days)
S0 = df['Adj Close'][-1]
K = dfo_exp.puts['strike'][6]
r = 0.1
sigma = returns.std() * np.sqrt(252) # annualize the daily return volatility to match the yearly time scale used below
T = maturity_days/252
print("S0\tCurrent Stock Price:", S0)
print("K\tStrike Price:", K)
print("r\tContinuously compounded risk-free rate:", r)
print("sigma\tVolatility of the stock price per year:", sigma)
print("T\tTime to maturity in trading years:", T)
def d1(S0, K, r, sigma, T):
d1 = (np.log(S0/K) + (r + sigma**2 / 2) * T)/(sigma * np.sqrt(T))
return d1
def d2(S0, K, r, sigma, T):
d2 = (np.log(S0 / K) + (r - sigma**2 / 2) * T) / (sigma * np.sqrt(T))
return d2
def BlackScholesCall(S0, K, r, sigma, T):
BSC = S0 * ss.norm.cdf(d1(S0, K, r, sigma, T)) - K * np.exp(-r * T) * ss.norm.cdf(d2(S0, K, r, sigma, T))
return BSC
def BlackScholesPut(S0, K, r, sigma, T):
BSP = K * np.exp(-r * T) * ss.norm.cdf(-d2(S0, K, r, sigma, T)) - S0 * ss.norm.cdf(-d1(S0, K, r, sigma, T))
return BSP
Put_BS = BlackScholesPut(S0, K, r, sigma, T)
Put_BS
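# (optional check, using the functions defined above) put-call parity says C - P = S0 - K*exp(-r*T);
# the two printed numbers should agree if both formulas are implemented consistently
Call_BS = BlackScholesCall(S0, K, r, sigma, T)
print(Call_BS - Put_BS, S0 - K * np.exp(-r * T))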
###Output
_____no_output_____ |
Modulo1/3. Tipo de Datos, operadores y variables.ipynb | ###Markdown
TIPOS DE DATOS, OPERADORES Y VARIBLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, textp, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
message = "¡Hola Mundo!"
print(message)
###Output
¡Hola Mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética (3+2 / 2x5)2 .
###Code
print(((3+2)/(2*5))**2)
###Output
0.25
###Markdown
VARIABLES, TIPOS DE DATOS SIMPLES Y OPERADORES En programación este apartado aprenderas a utilizar los diferentes tipos de datos que se encuentran disponibles en python, asi como declarar variables y realizar operaciones. 1. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. **La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc)** 1.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 1.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
## 1. Declaremos una variable para mejorar que nos ayude a enviar un mensaje al usuario
print('Hola Mundo')
## 2. Declarando una variable, la cual contiene un texto
mensaje = "Hola Mundo"
print(mensaje)
###Output
_____no_output_____
###Markdown
Ejercicio Cree una variable llamada msg la cual almacene un mensaje. Luego imprima dicho mensaje
###Code
msg=""
msg=input("ingrese un mensaje")
print(msg)
###Output
_____no_output_____
###Markdown
2. Tipo de Datos En la programación todo se resume a datos que representan información. Dicha información en terminos simples la podemos ver como:- Cadenas de texto o strings- Números- fechas- imágenes- sonidos- vídeos- Etc. 2.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
dasdas
"""
###Output
_____no_output_____
###Markdown
- Almacenando cadenas en variables
###Code
# Recuerda que podemos utilizar variables para almacenar una cadena de texto
texto = "Este es un texto"
###Output
_____no_output_____
###Markdown
Algunas Operaciones Basicas de Cadena Una vez que ya conocemos que es una cadena veamos algunas operaciones básicas con las cadenas de texto
###Code
cad1 = "Hola"
cad2 = "Mundo"
## Concadenar dos cadenas de texto
print("Un divertido "+"programa "+"de "+ "radio")
## Usando variables
# Esto es igual a : "HolaMundo"
cad1 = "Un divertido "
cad2 = "programa"
print(cad1 + cad2)
## Multiplicar una cadena
print('Hola')
print('Hola'*3)
## Conocer largo de una cadena
len('Hola') # cadena de 4 caracteres
## Identificando el tipo de dato
y = 'h'
type(y)
## Convirtiendo a string
str(3)
###Output
_____no_output_____
###Markdown
2.2.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Int -> Valores enteros
3
x = 5
print(x)
# Float -> Representan valores con coma decimal
3.14
y = 8.17
print(y)
###Output
_____no_output_____
###Markdown
Identificando los tipos de datos
###Code
type(8)
p = 12.34234
type(p)
###Output
_____no_output_____
###Markdown
Operaciones sobre números Representan el conjunto de operaciones básicas de tipo numéricas
###Code
## Indistintamente si un numero es "int" or "float" es posible realizar operaciones sobre ellos
numero_1 = 12
numero_2 = 8.5
# sumando
print(numero_1 + numero_2)
# Restando
print(numero_1 - numero_2)
# Multiplicando: 12 * 2 = 24
print(numero_1 * 2)
# Diviendo : 12 / 2 = 6
print(numero_1 / 2)
# Potencia : 12 ** 2 = 144
print(numero_1 ** 2)
## Módulo de un número -> Me brinda el residuo de la división entre dos numeros
## 7 = 3 * 2 + 1 (1 es el residuo de la división)
7 % 3
###Output
_____no_output_____
###Markdown
Otras Operaciones Básicas
###Code
int(3.55) # int() -> convierte otro tipo de dato a entero
int('3') # Convierto String a entero
float(3) # Convierto "int" a float
float('3.245') # Convierto "String" a float
###Output
_____no_output_____
###Markdown
Ejercicios 1. Escribir un programa que realice la siguiente operación aritmética (Usar variables) $$ ({3 +}\frac{2}{2 x 5}) {^2}$$
###Code
aritmetica=(3+2/(2*5))**2
aritmetica
###Output
_____no_output_____
###Markdown
2. Calcular la Fuerza en Newtons de un cuerpo con masa m = 4 kg y una aceleracion a= 3 m/s \begin{equation} F = m * a\end{equation}
###Code
masa=4
aceleracion=3
fuerza=masa*aceleracion
fuerza
###Output
_____no_output_____
###Markdown
2.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
True
cargado=False
a=10
b=2
a==b
# cualquier número excepto el 0 se interpreta como verdadero
bool(1)
False
bool(0)
## Conociendo el tipo de dato
type(True)
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a = 3
b ='s'
# Comparando a y b
a == b
a != b
c = 5
# Mayor que
a > c
b=10
a=2
((a>c) and (b>c))
###Output
_____no_output_____
###Markdown
**Cuidado:**Tener en cuenta lo siguiente para implementar sus lógicas 3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
Ejercicios 1. Expresión a evaluar: 2 > ppara el valor p = 5
###Code
p=5
2>p
###Output
_____no_output_____
###Markdown
2. Expresión a evaluar: 8 == b and not b < 0para el valor b = 0
###Code
b=0
(8==b) and not (b<0)
###Output
_____no_output_____
###Markdown
Notas Adicionales
###Code
# Jupyter Notebook no es necesario poner print
numero =3
numero
## Para este caso si es necesario el print
numero = 7
numero
x = 2
## El valor que quiera imprimir debe estar al final de todo
print(numero)
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto) # reasingacion de variable
type(numero_texto)
###Output
_____no_output_____
###Markdown
Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a + 2
print(a)
# Aumento dos a la variable a (a = a+2)
a += 2
a
# a = a * 10
a *= 10
a
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIBLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55) #int() ->convierte otro tipo de dato a entero
int('3') ### con shif + tap : podemos ver todo lo que se puede hacer , convertir
# float
12.3545
float(3)
float('3.245')
int(float('3.245')) ## como se sabe que es un numero solo se deja en float('3.245')
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1) # 1 es verdadero
bool(0) # 0 es falso
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
Si en caso no sepa tipo de dato es : coloco type('')
type('hola')
###Output
_____no_output_____
###Markdown
>>> str significa string -> texto
###Code
type(3.45)
###Output
_____no_output_____
###Markdown
>>> float significa -> numero decimal
###Code
type(3)
###Output
_____no_output_____
###Markdown
>>> int significa -> numero entero 2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, textp, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
('TEXTO')
'texto'*2
'texto' + ' ' + 'hola'
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
# Multiplico x2 a la variable a (a = a*2)
a*=2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
cadena ='¡hola mundo!'
print(cadena)
###Output
¡hola mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética (3+2 / 2x5)2 .
###Code
### la principal
operacion = ((3+2) / (2*5))**2 ## el ** es porque esta al cuadrado
print (operacion)
###otra forma que yo hice:
a= 3
b=2
c= a+b
c
d= b*c
d
r=c%d
r
###Output
_____no_output_____
###Markdown
VARIABLES, TIPOS DE DATOS SIMPLES Y OPERADORES En programación este apartado aprenderas a utilizar los diferentes tipos de datos que se encuentran disponibles en python, asi como declarar variables y realizar operaciones. 1. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. **La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc)** 1.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 1.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
## 1. Declaremos una variable para mejorar que nos ayude a enviar un mensaje al usuario
print('Hola Mundo')
## 2. Declarando una variable, la cual contiene un texto
mensaje = "Hola Mundo"
print(mensaje)
###Output
_____no_output_____
###Markdown
Ejercicio Cree una variable llamada msg la cual almacene un mensaje. Luego imprima dicho mensaje
###Code
msg = 'hola a todos!!'
print(msg)
###Output
_____no_output_____
###Markdown
2. Tipo de Datos En la programación todo se resume a datos que representan información. Dicha información en terminos simples la podemos ver como:- Cadenas de texto o strings- Números- fechas- imágenes- sonidos- vídeos- Etc. 2.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
dasdas
"""
print('hola\na todos')
###Output
_____no_output_____
###Markdown
- Almacenando cadenas en variables
###Code
# Recuerda que podemos utilizar variables para almacenar una cadena de texto
texto = "Este es un texto"
###Output
_____no_output_____
###Markdown
Algunas Operaciones Basicas de Cadena Una vez que ya conocemos que es una cadena veamos algunas operaciones básicas con las cadenas de texto
###Code
cad1 = "Hola"
cad2 = "Mundo"
## Concadenar dos cadenas de texto
print("Un divertido "+"programa "+"de "+ "radio")
## Usando variables
# Esto es igual a : "HolaMundo"
cad1 = "Un divertido "
cad2 = "programa"
print(cad1 + cad2)
## Multiplicar una cadena
print('Hola')
print('Hola'*3)
## Conocer largo de una cadena
len('Hola') # cadena de 4 caracteres
len(cad1)
## Identificando el tipo de dato
y = 'h'
type(y)
## Convirtiendo a string
str(3)
###Output
_____no_output_____
###Markdown
2.2.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Int -> Valores enteros
3
x = 5
print(x)
# Float -> Representan valores con coma decimal
3.14
y = 8.17
print(y)
###Output
_____no_output_____
###Markdown
Identificando los tipos de datos
###Code
type(8)
p = 12.34234
type(p)
###Output
_____no_output_____
###Markdown
Operaciones sobre números Representan el conjunto de operaciones básicas de tipo numéricas
###Code
## Indistintamente si un numero es "int" or "float" es posible realizar operaciones sobre ellos
numero_1 = 12
numero_2 = 8.5
7 // 3 # solo muestra la parte entera de la división
# sumando
print(numero_1 + numero_2)
# Restando
print(numero_1 - numero_2)
# Multiplicando: 12 * 2 = 24
print(numero_1 * 2)
# Diviendo : 12 / 2 = 6
print(numero_1 / 2)
# Potencia : 12 ** 2 = 144
print(numero_1 ** 2)
## Módulo de un número -> Me brinda el residuo de la división entre dos numeros
## 7 = 3 * 2 + 1 (1 es el residuo de la división)
7 % 3 # -> 3 no es divisor de 7
###Output
_____no_output_____
###Markdown
Otras Operaciones Básicas
###Code
int(3.55) # int() -> convierte otro tipo de dato a entero
int('3') # Convierto String a entero
float(3) # Convierto "int" a float
float('3.245') # Convierto "String" a float
###Output
_____no_output_____
###Markdown
Ejercicios 1. Escribir un programa que realice la siguiente operación aritmética (Usar variables) $$ ({3 +}\frac{2}{2 x 5}) {^2}$$
###Code
a = 3
b = 2 / (2 * 5)
r = (a + b) ** 2
print(int(r))
###Output
_____no_output_____
###Markdown
2. Calcular la Fuerza en Newtons de un cuerpo con masa m = 4 kg y una aceleracion a= 3 m/s \begin{equation} F = m * a\end{equation}
###Code
# 1. recupero valores de m y a
m = 4
a = 3
# 2. realizo el calculo
f = m * a
# 3. muetro en pantalla
print(f)
###Output
_____no_output_____
###Markdown
2.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
True
# cualquier número excepto el 0 se interpreta como verdadero
bool(1)
False
bool(0)
## Conociendo el tipo de dato
type(True)
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a = 3
b ='s'
# Comparando a y b
a == b
a != b
c = 5
# Mayor que
a > c
's' > 5
###Output
_____no_output_____
###Markdown
**Cuidado:**Tener en cuenta lo siguiente para implementar sus lógicas 3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
Ejercicios 1. Expresión a evaluar: 2 > ppara el valor p = 5
###Code
p = 5
2 > p # 2 no es mayor que 5
###Output
_____no_output_____
###Markdown
2. Expresión a evaluar: 8 == b and not b < 0para el valor b = 0
###Code
b = 0
(8 == b) and not(b<0)
True or not(True)
###Output
_____no_output_____
###Markdown
Notas Adicionales
###Code
# Jupyter Notebook no es necesario poner print
numero =3
numero
## Para este caso si es necesario el print
numero = 7
numero
x = 2
## El valor que quiera imprimir debe estar al final de todo
print(numero)
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto) # reasingacion de variable
type(numero_texto)
###Output
_____no_output_____
###Markdown
Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a + 2
print(a)
# Aumento dos a la variable a (a = a+2)
a += 2
a
# a = a * 10
a *= 10
a
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIBLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
type(3.0)
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, textp, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
type(numero)
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
'hola ' + 'alumnos'
'hola'*2
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
variable = '¡Hola Mundo!'
print(variable)
###Output
¡Hola Mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética ((3+2) / (2x5))2 .
###Code
a = ((3+2) / (2*5))**2
a
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIBLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
type(3.00)
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, textp, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
type(numero)
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
numero="3.78"
numero
numero=float(numero)
type(numero)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
'hola ' + 'alumnos'
'hola'*2
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
variable = '¡Hola Mundo!'
print(variable)
###Output
¡Hola Mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética ((3+2) / (2x5))2 .
###Code
a = ((3+2) / (2*5))**2
a
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIBLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
type(3.0)
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
type(numero)
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
'hola ' + 'alumnos'
'hola'*2
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
variable = '¡Hola Mundo!'
print(variable)
###Output
¡Hola Mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética ((3+2) / (2x5))2 .
###Code
a = ((3+2) / (2*5))**2
a
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIABLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
variable = '¡Hola Mundo!'
print(variable)
###Output
¡Hola Mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética ((3+2) / (2x5))2 .
###Code
a = ((3+2) / (2*5))**2
a
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIABLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
message = "¡Hola Mundo!"
print(message)
###Output
¡Hola Mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética ((3+2) / (2x5))2 .
###Code
print(((3+2)/(2*5))**2)
###Output
0.25
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIABLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
type(3.0)
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
type(numero)
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
'hola ' + 'alumnos'
'hola'*2
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
variable = '¡Hola Mundo!'
print(variable)
###Output
¡Hola Mundo!
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética ((3+2) / (2x5))2 .
###Code
a = ((3+2) / (2*5))**2
print(a)
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIABLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
int(3.55)
int('3')
# float
12.3545
float(3)
float('3.245')
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
###Output
_____no_output_____
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a+2
a
# Aumento dos a la variable a (a = a+2)
a += 2
a
###Output
_____no_output_____
###Markdown
EJERCICIOS 1.Escribir un programa que almacene la cadena ¡Hola Mundo! en una variable y luego muestre por pantalla el contenido de la variable.
###Code
variable = '¡Hola Mundo!'
variable
###Output
_____no_output_____
###Markdown
2.Escribir un programa que realice la siguiente operación aritmética ((3+2) / (2x5))2 .
###Code
variable = ((3+2) / (2*5))**2
variable
###Output
_____no_output_____
###Markdown
TIPOS DE DATOS, OPERADORES Y VARIABLES EN PYTHON 1. Tipo de Datos En la programación todo se resume a datos que representan información. - Números- textos- fechas- imágenes- sonidos- vídeos- Etc. 1.1.Tipos de Datos Numéricos Representan valores numéricos.- Enteros (int): Representan números enteros- Decimales (float): Representan valores decimales
###Code
# Entero
3
type(3)
int(3.55) # int() -> convierte otro tipo de dato a entero
int('3')
# float
12.3545
type(12.3545)
float(3)
float('3.245')
type(3) # - > te indica que tipo de dato es la varible
type(3.45)
###Output
_____no_output_____
###Markdown
1.2 Tipo de Dato texto Inmediatamente después de los números hay que echar un vistazo a las cadenas de texto, a fin de cuentas es la forma como las personas nos comunicamos de forma escrita. Las letras o caracteres son en definitiva símbolos de escritura y otro tipo de dato esencial.Siempre se definen entre comillas simples o dobles:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
dasdas
"""
type("It's a good day")
###Output
_____no_output_____
###Markdown
1.3 Tipo de Datos Booleanos Definen un valor de Verdadero o Falso según sea el casoSe representa: - verdadero = True- falso = False
###Code
bool(1) # cualquier número excepto el 0 se interpreta como verdadero
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Conociendo el Tipo de Dato
###Code
type('hola')
type(3.45)
type(False)
###Output
_____no_output_____
###Markdown
2. Variables Una variable es un identificador que representa un espacio en la memoria. A este espacio se le puede asignar un valor para utilizarlo posteriormente como si se tratase de un valor literal, incluso se puede operar con otras variables y reasignarle otro valor en cualquier momento. La variable me permite almacenar en la memoria de la computadora un valor (numero, texto, etc) 2.1 Declaración de variables en otros lenguajes 1. Se declara la variable con su tipo de dato2. Se asigna un valor a la variable 2.2 Declaración de variables en Python 1. Se asigna un valor a una variable (no es necesario declarar tipo de dato)2. Python por defecto interpreta el tipo de dato según el valor recibido
###Code
# declaración de variable en python
numero = 3.14
numero
type(numero)
# Jupyter Notebook no es necesario poner print
numero
print(numero)
###Output
3.14
###Markdown
Funcion Type() Nos ayudará a conocer el tipo de dato de la variable
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto) # reasignación de variable
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operadores 3.1 Operadores Aritméticos Representan el conjunto de operaciones básicas de tipo numéricas Ejemplos
###Code
# Suma
3+2
# Resta
a = 5.2
b = 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=27
b=5
a % b
'texto'+' '+'hola'
a = 'texto1'
b = "texto2"
c = a + ' ' + b
print(c)
# Este es un comentario
# En python es posible multiplicar un texto
a * 3
a = 'texto'
a * 3
###Output
_____no_output_____
###Markdown
3.2 Operadores Relacionales (De comparación) Sirven para comparar dos valores, dependiendo del resultado de la comparación se devolverá:- Verdadero (True), si es cierta- Falso (False), si no es cierta
###Code
# Comparando dos valores
a = 3
b ='s'
# Comparando a y b
a == b
a != b
c = 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Operadores Lógicos Encontramos 3 operadores especiales para realizar operaciones lógicas. Normalmente se utilizan para agrupar, excluir y negar expresiones. Puede ayudar echar un vistazo a esta explicación sobre las tablas de la verdad:- Not- And- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
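# Ejemplos añadidos como ilustración de los otros dos operadores lógicos mencionados arriba:
# 'or' devuelve True si al menos una de las condiciones es verdadera
(9 > 12) or (12 > 7)
# 'not' niega el valor de verdad de la expresión
not (9 < 12)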
###Output
_____no_output_____
###Markdown
3.4 Operadores de Asignación Permiten la asignación de nuevos valores a las variables de forma rápida Suma en asignaciónNos ayuda sumar un número dado a la variable de forma rápida
###Code
# Defino a como 5
a = 5
a = a + 2
print(a)
# Aumento dos a la variable a (a = a+2)
a += 2
a
# a = a * 10
a *= 10
a
###Output
_____no_output_____ |
demos/stocks/gen-demo-data.ipynb | ###Markdown
Generate Stocks Demo DataRun the code below to generate the key/value table, time-series table and stream used in the demo> Note, for the notebook to run you need to set the WorldTradingData token as described below
###Code
# import required libraries
import pandas as pd
import numpy as np
import os
from datetime import datetime
import v3io_frames as v3f
# initialize iguazio multi-model DB dataframe client library
client = v3f.Client('framesd:8081')
###Output
_____no_output_____
###Markdown
Delete KV, TSDB, and Stream tablesIn case we want to start things from scratch (delete the current tables), uncomment the following line(s) and run them
###Code
#client.delete('kv','stocks')
#client.delete('tsdb','stock_metrics')
#client.delete('stream','stock_stream')
###Output
_____no_output_____
###Markdown
Create TSDB, KV, and Stream tables
###Code
client.create(backend='tsdb', table='stock_metrics',attrs={'rate':'1/m'})
client.create(backend='stream', table='stock_stream',attrs={'retention_hours':48,'shards':1})
# fill the key/value table with some data (KV tables are automatically created on write and have a dynamic schema)
kvtbl = '{"price":{"GOOG":1039.55,"AMZN":1641.03,"AAPL":169.6,"MSFT":107.59,"INTC":47.21},"volume":{"GOOG":1807725,"AMZN":7494808,"AAPL":62025994,"MSFT":40801525,"INTC":23289000},"symbol":{"GOOG":"GOOG","AMZN":"AMZN","AAPL":"AAPL","MSFT":"MSFT","INTC":"INTC"},"exchange":{"GOOG":"NASDAQ","AMZN":"NASDAQ","AAPL":"NASDAQ","MSFT":"NASDAQ","INTC":"NASDAQ"},"last_trade":{"GOOG":"2018-12-10 16:00:01","AMZN":"2018-12-10 16:00:02","AAPL":"2018-12-10 16:00:02","MSFT":"2018-12-10 16:00:02","INTC":"2018-12-10 16:00:02"},"name":{"GOOG":"Alphabet Inc Class C","AMZN":"Amazon.com, Inc.","AAPL":"Apple Inc.","MSFT":"Microsoft Corporation","INTC":"Intel Corporation"},"currency":{"GOOG":"USD","AMZN":"USD","AAPL":"USD","MSFT":"USD","INTC":"USD"},"timezone":{"GOOG":"EST","AMZN":"EST","AAPL":"EST","MSFT":"EST","INTC":"EST"}}'
client.write(backend='kv', table='stocks',dfs=pd.read_json(kvtbl))
###Output
_____no_output_____
###Markdown
Fill the time-series table with a week's worth of historical data from the WorldTradingData API Requires obtaining a (free) API token from [World Trading Data](https://www.worldtradingdata.com) and setting the environment variable below
###Code
%env API_TOKEN = <Insert world trading data token>
# read the stocks kv table (to get the symbols)
sdf = client.read(backend='kv', table='stocks')
stocklist = sdf.index.tolist()
# create all stocks data based on stocks table & WTD history API
# need the symbol & exchange name from stocks table
urlt = 'https://www.worldtradingdata.com/api/v1/intraday?symbol={0}&range=7&sort=asc&interval=1&output=csv&api_token=' + os.getenv('API_TOKEN')
for sym in stocklist:
if not sym:
continue
url = urlt.format(sym)
df = pd.read_csv(url,skiprows=[0])
df.drop(['Open','High','Low'], axis=1, inplace=True)
df.rename(columns={'Close': 'price', 'Volume': 'volume'}, inplace=True)
# generate random sentiment series per stock
df['sentiment'] = np.random.uniform(low=0.0, high=2, size=(len(df),))-1
# set the index to date, symbol, exchange (will be marked as TSDB labels)
df.Date = pd.to_datetime(df.Date)
df['exchange']=sdf.loc[sym].exchange
df['symbol']=sym
newdf =df.set_index(['Date','symbol','exchange'])
# write to the TSDB
print(newdf.head())
client.write(backend='tsdb', table='stock_metrics',dfs=newdf)
###Output
price volume sentiment
Date symbol exchange
2019-02-22 09:30:00 AMZN NASDAQ 1623.41 95538 -0.340258
2019-02-22 09:31:00 AMZN NASDAQ 1623.14 21638 0.833875
2019-02-22 09:32:00 AMZN NASDAQ 1624.82 19221 0.372363
2019-02-22 09:33:00 AMZN NASDAQ 1626.69 39614 -0.115489
2019-02-22 09:34:00 AMZN NASDAQ 1627.88 17280 -0.598354
price volume sentiment
Date symbol exchange
2019-02-22 09:30:00 MSFT NASDAQ 109.92 965418 -0.900306
2019-02-22 09:31:00 MSFT NASDAQ 110.23 442196 0.662188
2019-02-22 09:32:00 MSFT NASDAQ 110.05 118172 0.781094
2019-02-22 09:33:00 MSFT NASDAQ 110.04 84788 -0.812989
2019-02-22 09:34:00 MSFT NASDAQ 110.12 98387 -0.084124
price volume sentiment
Date symbol exchange
2019-02-22 09:30:00 INTC NASDAQ 52.79 1628254 0.174202
2019-02-22 09:31:00 INTC NASDAQ 52.69 235698 0.645822
2019-02-22 09:32:00 INTC NASDAQ 52.70 205778 -0.445044
2019-02-22 09:33:00 INTC NASDAQ 52.78 195973 -0.775436
2019-02-22 09:34:00 INTC NASDAQ 52.89 293069 -0.597580
price volume sentiment
Date symbol exchange
2019-02-22 09:30:00 AAPL NASDAQ 172.06 539444 -0.427435
2019-02-22 09:31:00 AAPL NASDAQ 172.21 192174 -0.586166
2019-02-22 09:32:00 AAPL NASDAQ 172.21 101887 -0.888759
2019-02-22 09:33:00 AAPL NASDAQ 172.15 79448 -0.613805
2019-02-22 09:34:00 AAPL NASDAQ 172.11 101459 -0.526258
price volume sentiment
Date symbol exchange
2019-02-22 09:30:00 GOOG NASDAQ 1101.56 28896 0.205737
2019-02-22 09:31:00 GOOG NASDAQ 1102.59 3331 -0.453520
2019-02-22 09:32:00 GOOG NASDAQ 1102.08 4866 -0.666475
2019-02-22 09:33:00 GOOG NASDAQ 1099.80 3484 -0.629386
2019-02-22 09:34:00 GOOG NASDAQ 1099.98 1938 0.726823
###Markdown
Fill dummy tweet data in the stream
###Code
import json
record = {'text': 'bla bla bla',
'user': '@supermen',
'id': 1102722594429132545,
'created_at':'Tue Mar 02 00:08:48 +0000 2019',
'polarity':0.3,
'subjectivity':0.1,
}
client.execute('stream', 'stock_stream', 'put', args={'data': json.dumps(record)})
###Output
_____no_output_____ |
ml4trading-2ed/08_ml4t_workflow/04_ml4t_workflow_with_zipline/02_backtesting_with_zipline.ipynb | ###Markdown
Backtesting with zipline - Pipeline API with Custom Data The [Pipeline API](https://www.quantopian.com/docs/user-guide/tools/pipeline) facilitates the definition and computation of alpha factors for a cross-section of securities from historical data. The Pipeline significantly improves efficiency because it optimizes computations over the entire backtest period rather than tackling each event separately. In other words, it continues to follow an event-driven architecture but vectorizes the computation of factors where possible. A Pipeline uses Factors, Filters, and Classifiers classes to define computations that produce columns in a table with PIT values for a set of securities. Factors take one or more input arrays of historical bar data and produce one or more outputs for each security. There are numerous built-in factors, and you can also design your own `CustomFactor` computations.The following figure depicts how loading the data using the `DataFrameLoader`, computing the predictive `MLSignal` using the Pipeline API, and various scheduled activities integrate with the overall trading algorithm executed via the `run_algorithm()` function. We go over the details and the corresponding code in this section.![The Pipeline Workflow](../../assets/zip_pipe_flow.png)You need to register your Pipeline with the `initialize()` method and can then execute it at each time step or on a custom schedule. Zipline provides numerous built-in computations such as moving averages or Bollinger Bands that can be used to quickly compute standard factors, but it also allows for the creation of custom factors as we will illustrate next. Most importantly, the Pipeline API renders alpha factor research modular because it separates the alpha factor computation from the remainder of the algorithm, including the placement and execution of trade orders and the bookkeeping of portfolio holdings, values, and so on. The goal is to combine the daily return predictions with the OHCLV data from our Quandl bundle and then to go long on up to 10 equities with the highest predicted returns and short on those with the lowest predicted returns, requiring at least five stocks on either side similar to the backtrader example above. See comments in the notebook for implementation details. > This notebook requires the `conda` environment `backtest`. Please see the [installation instructions](../installation/README.md) for running the latest Docker image or alternative ways to set up your environment. Imports & Settings
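As a quick aside, a minimal sketch of a Pipeline built from one of the built-in factors mentioned above (a simple moving average) could look like the snippet below. It is illustrative only and is not used by the backtest in this notebook; it assumes zipline's standard `USEquityPricing` dataset and the built-in `SimpleMovingAverage` factor, neither of which appears elsewhere in this notebook.
```python
# Illustrative sketch: a Pipeline column computed from a built-in factor (not used below).
from zipline.pipeline import Pipeline
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.factors import SimpleMovingAverage

# 10-day simple moving average of the close price for every asset in the domain
sma_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
demo_pipeline = Pipeline(columns={'sma_10': sma_10})
```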
###Code
import warnings
warnings.filterwarnings('ignore')
from collections import defaultdict
from time import time
import numpy as np
import pandas as pd
import pandas_datareader.data as web
from logbook import Logger, StderrHandler, INFO
import matplotlib.pyplot as plt
import seaborn as sns
from zipline import run_algorithm
from zipline.api import (attach_pipeline,
pipeline_output,
date_rules,
time_rules,
record,
schedule_function,
commission,
slippage,
set_slippage,
set_commission,
order_target,
order_target_percent)
from zipline.data import bundles
from zipline.utils.run_algo import load_extensions
from zipline.pipeline import Pipeline, CustomFactor
from zipline.pipeline.data import Column, DataSet
from zipline.pipeline.domain import US_EQUITIES
from zipline.pipeline.filters import StaticAssets
from zipline.pipeline.loaders.frame import DataFrameLoader
import pyfolio as pf
from pyfolio.plotting import plot_rolling_returns, plot_rolling_sharpe
from pyfolio.timeseries import forecast_cone_bootstrap
sns.set_style('whitegrid')
pd.set_option('display.expand_frame_repr', False)
np.random.seed(42)
###Output
_____no_output_____
###Markdown
Load zipline extensions Only needed in the notebook to locate the bundle.
###Code
load_extensions(default=True,
extensions=[],
strict=True,
environ=None)
log_handler = StderrHandler(format_string='[{record.time:%Y-%m-%d %H:%M:%S.%f}]: ' +
'{record.level_name}: {record.func_name}: {record.message}',
level=INFO)
log_handler.push_application()
log = Logger('Algorithm')
###Output
_____no_output_____
###Markdown
Algo Params We plan to hold up to 20 long and 20 short positions whenever there are at least 10 on either side that meet the criteria (positive/negative prediction for long/short position).
###Code
N_LONGS = 20
N_SHORTS = 20
MIN_POSITIONS = 10
###Output
_____no_output_____
###Markdown
Load Data Quandl Wiki Bundle Load the Wiki Quandl `bundle` data that we ingested earlier using `zipline ingest`. This gives us access to the security SID values, among other things.
###Code
bundle_data = bundles.load('quandl')
###Output
_____no_output_____
###Markdown
ML Predictions We load our predictions for the 2015-17 period and extract the Zipline IDs for the ~250 stocks in our universe during this period using the `bundle.asset_finder.lookup_symbols()` method:
###Code
def load_predictions(bundle):
predictions = pd.read_hdf('../00_data/backtest.h5', 'data')[['predicted']].dropna()
tickers = predictions.index.get_level_values(0).unique().tolist()
assets = bundle.asset_finder.lookup_symbols(tickers, as_of_date=None)
predicted_sids = pd.Int64Index([asset.sid for asset in assets])
ticker_map = dict(zip(tickers, predicted_sids))
return (predictions
.unstack('ticker')
.rename(columns=ticker_map)
.predicted
.tz_localize('UTC')), assets
predictions, assets = load_predictions(bundle_data)
###Output
_____no_output_____
###Markdown
Define Custom Dataset To merge additional columns with our bundle, we define a custom `SignalData` class that inherits from `zipline.pipeline.DataSet` and contains a single `zipline.pipeline.Column` of type `float` and has the domain `US_EQUITIES`:
###Code
class SignalData(DataSet):
predictions = Column(dtype=float)
domain = US_EQUITIES
###Output
_____no_output_____
###Markdown
Define Pipeline Loaders While the bundle’s OHLCV data can rely on the built-in `USEquityPricingLoader`, we need to define our own `zipline.pipeline.loaders.frame.DataFrameLoader`:
###Code
signal_loader = {SignalData.predictions: DataFrameLoader(SignalData.predictions,
predictions)}
###Output
_____no_output_____
###Markdown
In fact, we need to slightly modify the Zipline library’s source code to bypass the assumption that we will only load price data. To this end, we will add a `custom_loader` parameter to the `run_algorithm` and ensure that this loader is used when the `Pipeline` needs one of `SignalData`’s `Column` instances. Pipeline Setup Our Pipeline is going to have two Boolean columns that identify the assets we would like to trade as long and short positions. To get there, we first define a `CustomFactor` called `MLSignal` that just receives the current `SignalData.predictions`. The motivation is to allow us to use some of the convenient `Factor` methods designed to rank and filter securities. Custom ML Factor
###Code
class MLSignal(CustomFactor):
"""Converting signals to Factor
so we can rank and filter in Pipeline"""
inputs = [SignalData.predictions]
window_length = 1
def compute(self, today, assets, out, preds):
out[:] = preds
###Output
_____no_output_____
###Markdown
Create Pipeline Now we create a `compute_signals()` function that returns a `zipline.pipeline.Pipeline` which filters the assets that meet our long/short criteria. We will call this function periodically while executing the backtest. More specifically, we set up our Pipeline by instantiating the `CustomFactor` that requires no arguments other than the defaults. We combine its `top()` and `bottom()` methods with a filter to select the highest positive and lowest negative predictions:
###Code
def compute_signals():
signals = MLSignal()
# predictions = SignalData.predictions.latest
return Pipeline(columns={
'longs' : signals.top(N_LONGS, mask=signals > 0),
'shorts': signals.bottom(N_SHORTS, mask=signals < 0)},
screen=StaticAssets(assets)
)
###Output
_____no_output_____
###Markdown
Initialize Algorithm The `initialize()` function is part of the Algorithm API. It permits us to add entries to the `context` dictionary available to all backtest components, set parameters like commission and slippage, and schedule functions. We also attach our Pipeline to the algorithm:
###Code
def initialize(context):
"""
Called once at the start of the algorithm.
"""
context.n_longs = N_LONGS
context.n_shorts = N_SHORTS
context.min_positions = MIN_POSITIONS
context.universe = assets
set_slippage(slippage.FixedSlippage(spread=0.00))
set_commission(commission.PerShare(cost=0, min_trade_cost=0))
schedule_function(rebalance,
date_rules.every_day(),
time_rules.market_open(hours=1, minutes=30))
schedule_function(record_vars,
date_rules.every_day(),
time_rules.market_close())
pipeline = compute_signals()
attach_pipeline(pipeline, 'signals')
###Output
_____no_output_____
###Markdown
Get daily Pipeline results The algorithm calls the `before_trading_start()` function every day before market opens and we use it to obtain the current pipeline values, i.e., the assets suggested for long and short positions based on the ML model predictions:
###Code
def before_trading_start(context, data):
"""
Called every day before market open.
"""
output = pipeline_output('signals')
context.trades = (output['longs'].astype(int)
.append(output['shorts'].astype(int).mul(-1))
.reset_index()
.drop_duplicates()
.set_index('index')
.squeeze())
###Output
_____no_output_____
###Markdown
Define Rebalancing Logic The `rebalance()` function takes care of adjusting the portfolio positions to reflect the target long and short positions implied by the model forecasts:
###Code
def rebalance(context, data):
"""
Execute orders according to schedule_function() date & time rules.
"""
trades = defaultdict(list)
for stock, trade in context.trades.items():
if not trade:
order_target(stock, 0)
else:
trades[trade].append(stock)
context.longs, context.shorts = len(trades[1]), len(trades[-1])
if context.longs > context.min_positions and context.shorts > context.min_positions:
for stock in trades[-1]:
order_target_percent(stock, -1 / context.shorts)
for stock in trades[1]:
order_target_percent(stock, 1 / context.longs)
###Output
_____no_output_____
###Markdown
Record Data Points The `record_vars()` logs information to the `pd.DataFrame` returned by `run_algorithm()` as scheduled.
###Code
def record_vars(context, data):
"""
Plot variables at the end of each day.
"""
record(leverage=context.account.leverage,
longs=context.longs,
shorts=context.shorts)
###Output
_____no_output_____
###Markdown
Run Algorithm At this point, we have defined all ingredients for the algorithm and are ready to call `run_algorithm()` with the desired `start` and `end` dates, references to the various functions we just created, and the `custom_loader` to ensure our model predictions are available to the backtest.
###Code
dates = predictions.index.get_level_values('date')
start_date = dates.min()
end_date = (dates.max() + pd.DateOffset(1))
start_date, end_date
start = time()
results = run_algorithm(start=start_date,
end=end_date,
initialize=initialize,
before_trading_start=before_trading_start,
capital_base=1e6,
data_frequency='daily',
bundle='quandl',
custom_loader=signal_loader) # need to modify zipline
print('Duration: {:.2f}s'.format(time() - start))
###Output
[2021-03-10 22:04:22.026133]: INFO: handle_split: after split: asset: Equity(2509 [SBUX]), amount: 1078, cost_basis: 47.62, last_sale_price: 95.23
[2021-03-10 22:04:22.026718]: INFO: handle_split: returning cash: 0.0
[2021-03-10 22:04:55.492499]: INFO: handle_simulation_end: Simulated 751 trading days
first open: 2014-12-09 14:31:00+00:00
last close: 2017-11-30 21:00:00+00:00
###Markdown
Performance Analysis with PyFolio Now we can evaluate the results using `pyfolio` tearsheets or its various `pyfolio.plotting` functions.
###Code
returns, positions, transactions = pf.utils.extract_rets_pos_txn_from_zipline(results)
benchmark = web.DataReader('SP500', 'fred', '2014', '2018').squeeze()
benchmark = benchmark.pct_change().tz_localize('UTC')
LIVE_DATE = '2017-01-01'
fig, axes = plt.subplots(ncols=2, figsize=(16, 5))
plot_rolling_returns(returns,
factor_returns=benchmark,
live_start_date=LIVE_DATE,
logy=False,
cone_std=2,
legend_loc='best',
volatility_match=False,
cone_function=forecast_cone_bootstrap,
ax=axes[0])
plot_rolling_sharpe(returns, ax=axes[1], rolling_window=63)
axes[0].set_title('Cumulative Returns - In and Out-of-Sample')
axes[1].set_title('Rolling Sharpe Ratio (3 Months)')
sns.despine()
fig.tight_layout();
pf.create_full_tear_sheet(returns,
positions=positions,
transactions=transactions,
benchmark_rets=benchmark,
live_start_date=LIVE_DATE,
round_trips=True)
###Output
_____no_output_____ |
day1/01-tf2-test-setup.ipynb | ###Markdown
Notebook for testing the TensorFlow 2.0 setupThis notebook is for testing the [TensorFlow](https://www.tensorflow.org/) setup using the [Keras API](https://keras.io/). Below is a set of required imports. Run the cell, and no error messages should appear. In particular, **TensorFlow 2 is required**.Some warnings may appear; this should be fine.
###Code
%matplotlib inline
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten, MaxPooling2D
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import SimpleRNN, LSTM, GRU
from tensorflow.keras.utils import plot_model, to_categorical
from tensorflow.keras.datasets import mnist, fashion_mnist, imdb
import os
if not os.path.isfile('pml_utils.py'):
!wget https://raw.githubusercontent.com/csc-training/intro-to-dl/master/day1/pml_utils.py
from pml_utils import show_failures
from sklearn.model_selection import train_test_split
from distutils.version import LooseVersion as LV
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
print('Using Tensorflow version: {}, and Keras version: {}.'.format(tf.__version__, tf.keras.__version__))
assert(LV(tf.__version__) >= LV("2.0.0"))
###Output
_____no_output_____
###Markdown
Let's check if we have a GPU available.
###Code
if len(tf.config.list_physical_devices('GPU')) > 0:
from tensorflow.python.client import device_lib
for d in device_lib.list_local_devices():
if d.device_type == 'GPU':
print('GPU', d.physical_device_desc)
else:
print('No GPU, using CPU instead.')
###Output
_____no_output_____
###Markdown
Getting started: 30 seconds to Keras(This section is adapted from https://keras.io/)The core data structure of Keras is a **model**, a way to organize layers. The main type of model is the `Sequential` model, a linear stack of layers.A model is initialized by calling `Sequential()`:
###Code
model = Sequential()
###Output
_____no_output_____
###Markdown
Stacking layers is as easy as `.add()`:
###Code
model.add(Dense(units=64, input_dim=100))
model.add(Activation("relu"))
model.add(Dense(units=10))
model.add(Activation("softmax"))
###Output
_____no_output_____
###Markdown
A summary of the model:
###Code
print(model.summary())
###Output
_____no_output_____
###Markdown
Let's draw a fancier graph of our model:*Note: This does not work in Google Colaboratory.*
###Code
plot_model(model, show_shapes=True)
###Output
_____no_output_____
###Markdown
Once your model looks good, configure its learning process with `.compile()`:
###Code
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
You can now begin training your model with `.fit()`. Let's generate some random data and use it to train the model:
###Code
X_train = np.random.rand(128, 100)
Y_train = to_categorical(np.random.randint(10, size=128))
model.fit(X_train, Y_train, epochs=5, batch_size=32, verbose=2);
###Output
_____no_output_____
###Markdown
Evaluate your performance on test data with `.evaluate():`
###Code
X_test = np.random.rand(64, 100)
Y_test = to_categorical(np.random.randint(10, size=64))
loss, acc = model.evaluate(X_test, Y_test, batch_size=32)
print()
print('loss:', loss, 'acc:', acc)
###Output
_____no_output_____
###Markdown
Notebook for testing the TensorFlow 2.0 setupThis notebook is for testing the [TensorFlow](https://www.tensorflow.org/) setup using the [Keras API](https://keras.io/). Below is a set of required imports. Run the cell, and no error messages should appear. In particular, **TensorFlow 2 is required**.Some warnings may appear; this should be fine.
###Code
%matplotlib inline
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model, to_categorical
from tensorflow.keras.datasets import mnist, fashion_mnist, imdb
import os
if not os.path.isfile('pml_utils.py'):
!wget https://raw.githubusercontent.com/csc-training/intro-to-dl/master/day1/pml_utils.py
from pml_utils import show_failures
from sklearn.model_selection import train_test_split
from distutils.version import LooseVersion as LV
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
print('Using Tensorflow version: {}, and Keras version: {}.'.format(tf.__version__, tf.keras.__version__))
assert(LV(tf.__version__) >= LV("2.0.0"))
###Output
_____no_output_____
###Markdown
Let's check if we have a GPU available.
###Code
gpus = tf.config.list_physical_devices('GPU')
if len(gpus) > 0:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
from tensorflow.python.client import device_lib
for d in device_lib.list_local_devices():
if d.device_type == 'GPU':
print('GPU', d.physical_device_desc)
else:
print('No GPU, using CPU instead.')
###Output
_____no_output_____
###Markdown
Getting started: 30 seconds to Keras(This section is adapted from https://keras.io/)The core data structure of Keras is a *Model*, a way to organize layers. While there are several ways to create Models in Keras, we will be using the [*functional* API](https://keras.io/guides/functional_api/).We start by creating an input layer:
###Code
inputs = keras.Input(shape=(100,))
###Output
_____no_output_____
###Markdown
We create further layers by calling a specific layer on its input object:
###Code
x = layers.Dense(units=64, activation="relu")(inputs)
outputs = layers.Dense(units=10, activation="softmax")(x)
###Output
_____no_output_____
###Markdown
Then we can create a Model by specifying its inputs and outputs:
###Code
model = keras.Model(inputs=inputs, outputs=outputs, name="test_model")
###Output
_____no_output_____
###Markdown
A summary of the model:
###Code
print(model.summary())
###Output
_____no_output_____
###Markdown
Let's draw a fancier graph of our model:
###Code
plot_model(model, show_shapes=True)
###Output
_____no_output_____
###Markdown
Once your model looks good, configure its learning process with `.compile()`:
###Code
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
You can now begin training your model with `.fit()`. Let's generate some random data and use it to train the model:
###Code
X_train = np.random.rand(128, 100)
Y_train = to_categorical(np.random.randint(10, size=128))
model.fit(X_train, Y_train, epochs=5, batch_size=32, verbose=2);
###Output
_____no_output_____
###Markdown
Evaluate your performance on test data with `.evaluate():`
###Code
X_test = np.random.rand(64, 100)
Y_test = to_categorical(np.random.randint(10, size=64))
loss, acc = model.evaluate(X_test, Y_test, batch_size=32)
print()
print('loss:', loss, 'acc:', acc)
###Output
_____no_output_____
docs/_static/notebooks/sigmas.ipynb | ###Markdown
A note about sigmasWe are regularly asked about the "sigma" levels in the 2D histograms. These are not the 68%, *etc.* values that we're used to for 1D distributions. In two dimensions, a Gaussian density is given by: pdf(r) = exp(-(r/s)^2/2) / (2*pi*s^2)The integral under this density (using polar coordinates and implicitly integrating out the angle) is: cdf(x) = Integral(r * exp(-(r/s)^2/2) / s^2, {r, 0, x}) = 1 - exp(-(x/s)^2/2)This means that within "1-sigma", the Gaussian contains `1-exp(-0.5) ~ 0.393` or 39.3% of the volume. Therefore the relevant 1-sigma level for a 2D histogram of samples is 39%, not 68%. If you must use 68% of the mass, use the `levels` keyword argument when you call `corner.corner`.We can visualize the difference between sigma definitions:
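A quick numerical check of the formula above (an added, illustrative snippet; `scipy` is assumed to be available for the 1D comparison and is not used elsewhere in this notebook):
```python
import numpy as np
from scipy.special import erf

sigmas = np.array([1.0, 2.0, 3.0])
# Mass of a 2D Gaussian enclosed within n sigma: 1 - exp(-n^2 / 2)
frac_2d = 1 - np.exp(-sigmas**2 / 2)   # approx. 0.393, 0.865, 0.989
# Familiar 1D fractions for comparison: erf(n / sqrt(2))
frac_1d = erf(sigmas / np.sqrt(2))     # approx. 0.683, 0.954, 0.997
print(frac_2d, frac_1d)
```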
###Code
import corner
import numpy as np
import matplotlib.pyplot as pl
# Generate some fake data from a Gaussian
np.random.seed(42)
x = np.random.randn(50000, 2)
###Output
_____no_output_____
###Markdown
First, plot this using the correct (default) 1-sigma level:
###Code
fig = corner.corner(x, quantiles=(0.16, 0.84), levels=(1-np.exp(-0.5),))
fig.suptitle("correct `one-sigma' level");
###Output
_____no_output_____
###Markdown
Compare this to the 68% mass level and, specifically, note how the contour compares to the marginalized 68% quantile:
###Code
fig = corner.corner(x, quantiles=(0.16, 0.84), levels=(0.68,))
fig.suptitle("incorrect `one-sigma' level");
###Output
_____no_output_____ |
05 Data Visualization/notebook.ipynb | ###Markdown
Data VisualizationOne of the best ways to communicate what is happening in your dataset is through visualizations. Visualizations make it easy for a human to understand millions of rows of data at a glance. Today we will go through basics of the [Seaborn library](https://seaborn.pydata.org/).> Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.To install seaborn we need to run either:```pip install seaborn```or:```conda install seaborn```Let's start by importing packages:
###Code
# Imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Let's read our PoKeMoN dataset. Below you can see the column descriptions:| Column | Description || :--- | :--- || `Name` | The name of the pokemon || `Type 1` | The type of the pokemon we will use || `Type 2` | Later generations were using dual types, we won't be using this here || `Total` | The sum of all stat columns || `HP`, `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed` | Pokemon stats || `Generation` | When was this pokemon introduced || `Legendary` | Is the pokemon a Legendary pokemon |
###Code
pokemon = pd.read_csv(
"datasets_121_280_Pokemon.csv",
index_col="#",
)
pokemon.sample(5)
###Output
_____no_output_____
###Markdown
We will set up the standard Seaborn theme by running the following command:
###Code
sns.set()
###Output
_____no_output_____
###Markdown
Line plotsOne of the basic types of plots is the line plot. Those are handled in seaborn by the `lineplot` function. By default, the plot aggregates over multiple y values at each value of x and shows an estimate of the central tendency and a confidence interval for that estimate.Full documentation of `lineplot` is available [here](https://seaborn.pydata.org/generated/seaborn.lineplot.html#seaborn.lineplot).Let's see how the Total stats changed over the generations.
###Code
sns.lineplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
We can have multiple values on a line plot as well. A useful technique to visualize multiple columns is melting the dataframe. Then we can just use `value` for the actual value of the column, and `variable` for the label as `hue`. The `ci` parameter hides the estimator shade you could see above.
###Code
# We melt the values, so that it is useful for data viz
generation_values = pd.melt(
pokemon,
id_vars=["Generation"],
value_vars=["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"],
)
fig, ax = plt.subplots()
sns.lineplot(
x="Generation",
y="value",
hue="variable",
ci=None,
data=generation_values,
)
fig.set_size_inches(12, 8)
###Output
_____no_output_____
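###Markdown
To make it clearer what melting actually did (an added illustration, not part of the original notebook), we can peek at the reshaped dataframe: each stat column has been stacked into a `variable`/`value` pair, keyed by `Generation`.
###Code
# Each original stat column is now a label in `variable`, with its number in `value`
generation_values.head()
###Output
_____no_output_____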
###Markdown
Bar plotsAnother often used type of a plot is the Bar plot. You might remember the use of it in pandas:
###Code
# Bar plot using pandas
pokemon["Generation"].value_counts().plot.bar()
###Output
_____no_output_____
###Markdown
We will use the Seaborn `barplot` function to create bar plots. Here you can see an example of a `Total` bar plot depending on the `Generation`. The whiskers (vertical black lines) show the confidence interval of the estimate. Ideally the interval would be negligible, but we work with what we have.Full documentation of `barplot` is available [here](https://seaborn.pydata.org/generated/seaborn.barplot.html#seaborn.barplot).
###Code
sns.barplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
A nice alternative to this is the `countplot`, which automatically counts the observations in each category. Countplot documentation is available [here](https://seaborn.pydata.org/generated/seaborn.countplot.html#seaborn.countplot).
###Code
sns.countplot(
"Generation",
data=pokemon,
)
sns.countplot(
y="Type 1",
order=pokemon["Type 1"].value_counts().index,
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Categorical plotA nice way to create a plot is to use the `catplot` function. This function provides access to several axes-level functions that show the relationship between a numerical and one or more categorical variables using one of several visual representations. The kind parameter selects the underlying axes-level function to use, such as stripplots, swarms, boxplots, barplots and more. We will use a barplot here to show the `Total` means over `Generation`, depending on whether the pokemon was `Legendary` or not.Full documentation on `catplot` is available [here](https://seaborn.pydata.org/generated/seaborn.catplot.html#seaborn.catplot).
###Code
sns.catplot(
x="Generation",
y="Total",
hue="Legendary",
kind="bar",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Scatter plotA scatter plot is a relational plot that shows the relationship between the `x` and `y` values. The relationship between x and y can be shown for different subsets of the data using the hue, size, and style parameters. These parameters control what visual semantics are used to identify the different subsets. It is possible to show up to three dimensions independently by using all three semantic types, but this style of plot can be hard to interpret and is often ineffective. Using redundant semantics (i.e. both hue and style for the same variable) can be helpful for making graphics more accessible.Full documentation on `scatterplot` is available [here](https://seaborn.pydata.org/generated/seaborn.scatterplot.html#seaborn.scatterplot).
###Code
sns.scatterplot(
x="Attack",
y="Defense",
hue="Legendary",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Swarm plotA similar plot to scatter is the swarm plot. It shows the distribution of a numerical variable across categories, drawing every observation as a non-overlapping point. Here we categorize the swarmplot by `Generation` to see what the distribution of starter-type (Water, Fire, Grass) pokemon was over the generations. We are also adding a `palette` to colour them accordingly.The full documentation for `swarmplot` is available [here](https://seaborn.pydata.org/generated/seaborn.swarmplot.html#seaborn.swarmplot).
###Code
starter_types = ["Water", "Fire", "Grass"]
palette = {
"Water": "#6890F0",
"Fire": "#F08030",
"Grass": "#78C850",
}
g = sns.swarmplot(
x="Generation",
y="Attack",
hue="Type 1",
palette=palette,
data=pokemon[pokemon["Type 1"].isin(starter_types)],
)
g.legend_.remove()
###Output
_____no_output_____
###Markdown
Distribution plotsA good way to show what is happening with our data is to show the distribution. The basic method to do that is the `distplot`. Below we can see the distribution of values for the `Total` column. We can see that most pokemon have a `Total` power of around 300 or 500. The line you can see is the KDE = Kernel Density Estimation, a non-parametric way to estimate the probability density function of a random variable. Kernel density estimation is a fundamental data smoothing problem where inferences about the population are made based on a finite data sample.Full documentation for the `distplot` is available [here](https://seaborn.pydata.org/generated/seaborn.distplot.html).
###Code
sns.distplot(pokemon["Total"])
###Output
_____no_output_____
###Markdown
To see how our data is distributed throughout the dataset, we can also use the `boxplot`. This is a fun little graph that shows us many parameters of the selected column. Here we selected the `Total` values and grouped them by `Generation`. What elements we have here:- The box is where the middle half of the values land.- The line in the middle of the box is the median value.- The lower end of the box is the 1st quartile.- The upper end of the box is the 3rd quartile.- Whiskers extend to the most extreme values within 1.5 times the interquartile range beyond the box.- The little diamond is an outlier value.The full documentation on the `boxplot` is available [here](https://seaborn.pydata.org/generated/seaborn.boxplot.html).
###Code
sns.boxplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
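###Markdown
As an added cross-check (not part of the original notebook), the quartiles drawn by the boxplot can be computed directly with pandas; the 25%, 50% (median) and 75% columns below correspond to the box edges and the centre line.
###Code
# Summary statistics per Generation; compare with the boxes above
pokemon.groupby("Generation")["Total"].describe()
###Output
_____no_output_____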
###Markdown
Cluster mapsCluster maps are a good way to show correlation between variables in the dataset. Here I selected all of the Pokemon stats and created a correlation matrix of them. You can easily see which ones correlate with each other and in what way, thanks to the clustering (shown above and to the left). Clustermaps are there to show you a hierarchical overview of the data.The full documentation for `clustermap` is available [here](https://seaborn.pydata.org/generated/seaborn.clustermap.html).
###Code
stats = ["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"]
sns.clustermap(
pokemon[stats].corr(),
cmap="mako",
linewidths=.75,
)
###Output
_____no_output_____
###Markdown
Exercises1. Read in the `avocado.csv` dataset - Set the index properly2. Create a line plot showing the average price of avocado over months3. Create a horizontal bar plot showing 10 highest mean prices depending on region4. Create a count plot for the year of the avocado5. Create a scatter plot of average price vs `Total Volume` for year 2018, when the `Total Volume` is lower than `1e6`6. Show the `AveragePrice` distribution.7. Create a clustermap of avocado correlations.8. Show a boxplot of average price per year.
###Code
# Imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
#1
avocado = pd.read_csv(
"avocado.csv",
index_col="id",
)
avocado.sample(5)
sns.set()
#2
price_date = avocado[["Date", "AveragePrice"]]
price_months = price_date.copy();
price_months.Date = price_date.Date.map(lambda d: d[:7])
price_date_grouped = price_months.groupby('Date').mean()
price_date_grouped['Date'] = price_date_grouped.index
sns.lineplot(
x="Date",
y="AveragePrice",
data=price_date_grouped,
)
#3
avocado[['AveragePrice', 'region']].groupby('region').max().plot.bar()
#4
sns.countplot(
y="year",
order=avocado["year"].value_counts().index,
data=avocado,
)
#5
data5 = avocado[(avocado.year == 2018) & (avocado['Total Volume'] < 1000000)]
sns.scatterplot(
x="AveragePrice",
y="Total Volume",
hue="type",
data=data5,
)
#6
sns.distplot(avocado["AveragePrice"])
#7
stats = ["Small Bags", "Large Bags", "Total Bags"]
sns.clustermap(
avocado[stats].corr(),
cmap="mako",
linewidths=.75,
)
#8
sns.boxplot(
x="year",
y="AveragePrice",
data=avocado,
)
###Output
_____no_output_____
###Markdown
Data VisualizationOne of the best ways to communicate what is happening in your dataset is through visualizations. Visualizations make it easy for a human to understand millions of rows of data at a glance. Today we will go through basics of the [Seaborn library](https://seaborn.pydata.org/).> Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.To install seaborn we need to run either:```pip install seaborn```or:```conda install seaborn```Let's start by importing packages:
###Code
# Imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Let's read our PoKeMoN dataset. Below you can see the column descriptions:| Column | Description || :--- | :--- || `Name` | The name of the pokemon || `Type 1` | The type of the pokemon we will use || `Type 2` | Later generations were using dual types, we won't be using this here || `Total` | The sum of all stat columns || `HP`, `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed` | Pokemon stats || `Generation` | When was this pokemon introduced || `Legendary` | Is the pokemon a Legendary pokemon |
###Code
pokemon = pd.read_csv(
"datasets_121_280_Pokemon.csv",
index_col="#",
)
pokemon.sample(5)
###Output
_____no_output_____
###Markdown
We will set up the standard Seaborn theme by running the following command:
###Code
sns.set()
###Output
_____no_output_____
###Markdown
Line plotsOne of the basic types of plots is the line plot. Those are handled in seaborn by the `lineplot` function. By default, the plot aggregates over multiple y values at each value of x and shows an estimate of the central tendency and a confidence interval for that estimate.Full documentation of `lineplot` is available [here](https://seaborn.pydata.org/generated/seaborn.lineplot.html#seaborn.lineplot).Let's see how the Total stats changed over the generations.
###Code
sns.lineplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
We can have multiple values on a line plot as well. A useful technique to visualize multiple columns is melting the dataframe. Then we can just use `value` for the actual value of the column, and `variable` for the label as `hue`. The `ci` parameter hides the estimator shade you could see above.
###Code
# We melt the values, so that it is useful for data viz
generation_values = pd.melt(
pokemon,
id_vars=["Generation"],
value_vars=["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"],
)
fig, ax = plt.subplots()
sns.lineplot(
x="Generation",
y="value",
hue="variable",
ci=None,
data=generation_values,
)
fig.set_size_inches(12, 8)
###Output
_____no_output_____
###Markdown
Bar plotsAnother often used type of a plot is the Bar plot. You might remember the use of it in pandas:
###Code
# Bar plot using pandas
pokemon["Generation"].value_counts().plot.bar()
###Output
_____no_output_____
###Markdown
We will use the Seaborn `barplot` function to create bar plots. Here you can see an example of a `Total` bar plot depending on the `Generation`. The whiskers (vertical black lines) show the confidence interval of the estimate. Ideally the interval would be negligible, but we work with what we have.Full documentation of `barplot` is available [here](https://seaborn.pydata.org/generated/seaborn.barplot.html#seaborn.barplot).
###Code
sns.barplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
A nice alternative to this is the `countplot`, which automatically counts the observations in each category. Countplot documentation is available [here](https://seaborn.pydata.org/generated/seaborn.countplot.html#seaborn.countplot).
###Code
sns.countplot(
"Generation",
data=pokemon,
)
sns.countplot(
y="Type 1",
order=pokemon["Type 1"].value_counts().index,
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Categorical plotA nice way to create a plot is to use the `catplot` function. This function provides access to several axes-level functions that show the relationship between a numerical and one or more categorical variables using one of several visual representations. The kind parameter selects the underlying axes-level function to use, such as stripplots, swarms, boxplots, barplots and more. We will use a barplot here to show the `Total` means over `Generation`, depending on whether the pokemon was `Legendary` or not.Full documentation on `catplot` is available [here](https://seaborn.pydata.org/generated/seaborn.catplot.html#seaborn.catplot).
###Code
sns.catplot(
x="Generation",
y="Total",
hue="Legendary",
kind="bar",
data=pokemon,
)
###Output
_____no_output_____
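###Markdown
Since the `kind` parameter is what selects the underlying plot, here is the same call with `kind="box"` as an added illustration (not part of the original notebook); only the keyword changes, the rest of the interface stays identical.
###Code
# Same grouping as above, drawn as boxplots instead of bars
sns.catplot(
x="Generation",
y="Total",
hue="Legendary",
kind="box",
data=pokemon,
)
###Output
_____no_output_____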
###Markdown
Scatter plotA scatter plot is a relational plot that shows the relationship between the `x` and `y` values. The relationship between x and y can be shown for different subsets of the data using the hue, size, and style parameters. These parameters control what visual semantics are used to identify the different subsets. It is possible to show up to three dimensions independently by using all three semantic types, but this style of plot can be hard to interpret and is often ineffective. Using redundant semantics (i.e. both hue and style for the same variable) can be helpful for making graphics more accessible.Full documentation on `scatterplot` is available [here](https://seaborn.pydata.org/generated/seaborn.scatterplot.html#seaborn.scatterplot).
###Code
sns.scatterplot(
x="Attack",
y="Defense",
hue="Legendary",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Swarm plotA similar plot to scatter is the swarm plot. It shows the distribution of a numerical variable across categories, drawing every observation as a non-overlapping point. Here we categorize the swarmplot by `Generation` to see what the distribution of starter-type (Water, Fire, Grass) pokemon was over the generations. We are also adding a `palette` to colour them accordingly.The full documentation for `swarmplot` is available [here](https://seaborn.pydata.org/generated/seaborn.swarmplot.html#seaborn.swarmplot).
###Code
starter_types = ["Water", "Fire", "Grass"]
palette = {
"Water": "#6890F0",
"Fire": "#F08030",
"Grass": "#78C850",
}
g = sns.swarmplot(
x="Generation",
y="Attack",
hue="Type 1",
palette=palette,
data=pokemon[pokemon["Type 1"].isin(starter_types)],
)
g.legend_.remove()
###Output
_____no_output_____
###Markdown
Distribution plotsA good way to show what is happening with our data is to show the distribution. The basic method to do that is the `distplot`. Below we can see the distribution of values for the `Total` column. We can see that most pokemon have a `Total` power of around 300 or 500. The line you can see is the KDE = Kernel Density Estimation, a non-parametric way to estimate the probability density function of a random variable. Kernel density estimation is a fundamental data smoothing problem where inferences about the population are made based on a finite data sample.Full documentation for the `distplot` is available [here](https://seaborn.pydata.org/generated/seaborn.distplot.html).
###Code
sns.distplot(pokemon["Total"])
###Output
_____no_output_____
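###Markdown
To see what the KDE line adds, here is the same distribution drawn without it (an added illustration, not part of the original notebook); only the raw histogram counts remain.
###Code
# Histogram only, no kernel density estimate
sns.distplot(pokemon["Total"], kde=False)
###Output
_____no_output_____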
###Markdown
To see how our data is distributed throughout the dataset, we can also use the `boxplot`. This is a fun little graph that shows us many parameters of the selected column. Here we selected the `Total` values and grouped them by `Generation`. What elements we have here:- The box is where the middle half of the values land.- The line in the middle of the box is the median value.- The lower end of the box is the 1st quartile.- The upper end of the box is the 3rd quartile.- Whiskers extend to the most extreme values within 1.5 times the interquartile range beyond the box.- The little diamond is an outlier value.The full documentation on the `boxplot` is available [here](https://seaborn.pydata.org/generated/seaborn.boxplot.html).
###Code
sns.boxplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Cluster mapsCluster maps are a good way to show correlation between variables in the dataset. Here I selected all of the Pokemon stats and created a correlation matrix of them. You can easily see which ones correlate with each other and in what way, thanks to the clustering (shown above and to the left). Clustermaps are there to show you a hierarchical overview of the data.The full documentation for `clustermap` is available [here](https://seaborn.pydata.org/generated/seaborn.clustermap.html).
###Code
stats = ["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"]
sns.clustermap(
pokemon[stats].corr(),
cmap="mako",
linewidths=.75,
)
###Output
_____no_output_____
###Markdown
Exercises1. Read in the `avocado.csv` dataset - Set the index properly2. Create a line plot showing the average price of avocado over months3. Create a horizontal bar plot showing 10 highest mean prices depending on region4. Create a count plot for the year of the avocado5. Create a scatter plot of average price vs `Total Volume` for year 2018, when the `Total Volume` is lower than `1e6`6. Show the `AveragePrice` distribution.7. Create a clustermap of avocado correlations.8. Show a boxplot of average price per year.
###Code
#1
avocados = pd.read_csv(
"avocado.csv",
index_col=0,
)
avocados.sample(5)
#2
avocados["Month"] = pd.to_datetime(avocados['Date']).dt.strftime('%B')  # Date is read as a string, so parse it before using .dt
months = ['January', 'February', 'March', 'April', 'May', 'June',
'July', 'August', 'September', 'October', 'November', 'December']
avocados['Month'] = pd.Categorical(avocados['Month'], categories=months, ordered=True)
avocados.sort_values(by='Month')
sns.lineplot(
x="Month",
y="AveragePrice",
data=avocados
)
#4
sns.countplot(
"year",
data=avocados,
)
#4
sns.barplot(x="year", y="Total Volume", data=avocados, estimator=sum)
#5
yearCondition = avocados['year'] == 2018
volumeCondition = avocados['Total Volume'] < 1e6
sns.scatterplot(
x="AveragePrice",
y="Total Volume",
data=avocados[yearCondition & volumeCondition],
)
#6
sns.distplot(avocados["AveragePrice"])
#7
stats = ["AveragePrice", "Total Volume", "Total Bags", "Small Bags", "XLarge Bags"]
sns.clustermap(
avocados[stats].corr(),
linewidths=.75,
)
#7
stats = ["Total Volume", "Total Bags", "Small Bags", "XLarge Bags"]
sns.clustermap(
avocados[stats].corr(),
linewidths=.75,
linecolor="k",
annot = True,
)
#8
sns.boxplot(
x="year",
y="AveragePrice",
data=avocados,
)
###Output
_____no_output_____
###Markdown
Data VisualizationOne of the best ways to communicate what is happening in your dataset is through visualizations. Visualizations make it easy for a human to understand millions of rows of data at a glance. Today we will go through basics of the [Seaborn library](https://seaborn.pydata.org/).> Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.To install seaborn we need to run either:```pip install seaborn```or:```conda install seaborn```Let's start by importing packages:
###Code
# Imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Let's read our PoKeMoN dataset. Below you can see the column descriptions:| Column | Description || :--- | :--- || `Name` | The name of the pokemon || `Type 1` | The type of the pokemon we will use || `Type 2` | Later generations were using dual types, we won't be using this here || `Total` | The sum of all stat columns || `HP`, `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed` | Pokemon stats || `Generation` | When was this pokemon introduced || `Legendary` | Is the pokemon a Legendary pokemon |
###Code
pokemon = pd.read_csv(
"datasets_121_280_Pokemon.csv",
index_col="#",
)
pokemon.sample(5)
###Output
_____no_output_____
###Markdown
We will set up the standard Seaborn theme by running the following command:
###Code
sns.set()
###Output
_____no_output_____
###Markdown
Line plotsOne of the basic types of plots is the line plot. Those are handled in seaborn by the `lineplot` function. By default, the plot aggregates over multiple y values at each value of x and shows an estimate of the central tendency and a confidence interval for that estimate.Full documentation of `lineplot` is available [here](https://seaborn.pydata.org/generated/seaborn.lineplot.html#seaborn.lineplot).Let's see how the Total stats changed over the generations.
###Code
sns.lineplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
We can have multiple values on a line plot as well. A useful technique to visualize multiple columns is melting the dataframe. Then we can just use `value` for the actual value of the column, and `variable` for the label as `hue`. The `ci` parameter hides the estimator shade you could see above.
###Code
# We melt the values, so that it is useful for data viz
generation_values = pd.melt(
pokemon,
id_vars=["Generation"],
value_vars=["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"],
)
fig, ax = plt.subplots()
sns.lineplot(
x="Generation",
y="value",
hue="variable",
ci=None,
data=generation_values,
)
fig.set_size_inches(12, 8)
###Output
_____no_output_____
###Markdown
Bar plotsAnother often used type of a plot is the Bar plot. You might remember the use of it in pandas:
###Code
# Bar plot using pandas
pokemon["Generation"].value_counts().plot.bar()
###Output
_____no_output_____
###Markdown
We will use the Seaborn `barplot` function to create bar plots. Here you can see an example of a `Total` bar plot depending on the `Generation`. The whiskers (vertical black lines) show the confidence interval of the estimate. Ideally the interval would be negligible, but we work with what we have.Full documentation of `barplot` is available [here](https://seaborn.pydata.org/generated/seaborn.barplot.html#seaborn.barplot).
###Code
sns.barplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
A nice alternative to this is the `countplot`, which automatically counts the observations in each category. Countplot documentation is available [here](https://seaborn.pydata.org/generated/seaborn.countplot.html#seaborn.countplot).
###Code
sns.countplot(
"Generation",
data=pokemon,
)
sns.countplot(
y="Type 1",
order=pokemon["Type 1"].value_counts().index,
data=pokemon,
)
###Output
_____no_output_____
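###Markdown
As an added cross-check (not part of the original notebook), the bar lengths drawn by `countplot` are exactly the value counts pandas reports:
###Code
# The same counts that countplot draws for Type 1
pokemon["Type 1"].value_counts()
###Output
_____no_output_____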
###Markdown
Categorical plotA nice way to create a plot is to use the `catplot` function. This function provides access to several axes-level functions that show the relationship between a numerical and one or more categorical variables using one of several visual representations. The kind parameter selects the underlying axes-level function to use, such as stripplots, swarms, boxplots, barplots and more. We will use a barplot here to show the `Total` means over `Generation`, depending on whether the pokemon was `Legendary` or not.Full documentation on `catplot` is available [here](https://seaborn.pydata.org/generated/seaborn.catplot.html#seaborn.catplot).
###Code
sns.catplot(
x="Generation",
y="Total",
hue="Legendary",
kind="bar",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Scatter plotA scatter plot is a relational plot that shows the relationship between the `x` and `y` values. The relationship between x and y can be shown for different subsets of the data using the hue, size, and style parameters. These parameters control what visual semantics are used to identify the different subsets. It is possible to show up to three dimensions independently by using all three semantic types, but this style of plot can be hard to interpret and is often ineffective. Using redundant semantics (i.e. both hue and style for the same variable) can be helpful for making graphics more accessible.Full documentation on `scatterplot` is available [here](https://seaborn.pydata.org/generated/seaborn.scatterplot.html#seaborn.scatterplot).
###Code
sns.scatterplot(
x="Attack",
y="Defense",
hue="Legendary",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Swarm plotA similar plot to scatter is the swarm plot. It shows the distribution of a numerical variable across categories, drawing every observation as a non-overlapping point. Here we categorize the swarmplot by `Generation` to see what the distribution of starter-type (Water, Fire, Grass) pokemon was over the generations. We are also adding a `palette` to colour them accordingly.The full documentation for `swarmplot` is available [here](https://seaborn.pydata.org/generated/seaborn.swarmplot.html#seaborn.swarmplot).
###Code
starter_types = ["Water", "Fire", "Grass"]
palette = {
"Water": "#6890F0",
"Fire": "#F08030",
"Grass": "#78C850",
}
g = sns.swarmplot(
x="Generation",
y="Attack",
hue="Type 1",
palette=palette,
data=pokemon[pokemon["Type 1"].isin(starter_types)],
)
g.legend_.remove()
###Output
_____no_output_____
###Markdown
Distribution plotsA good way to show what is happening with our data is to show the distribution. The basic method to do that is the `distplot`. Below we can see the distribution of values for the `Total` column. We can see that most pokemon have a `Total` power of around 300 or 500. The line you can see is the KDE = Kernel Density Estimation, a non-parametric way to estimate the probability density function of a random variable. Kernel density estimation is a fundamental data smoothing problem where inferences about the population are made based on a finite data sample.Full documentation for the `distplot` is available [here](https://seaborn.pydata.org/generated/seaborn.distplot.html).
###Code
sns.distplot(pokemon["Total"])
###Output
_____no_output_____
###Markdown
To see how our data is distributed throughout the dataset, we can also use the `boxplot`. This is a fun little graph that shows us many parameters of the selected column. Here we selected the `Total` values and grouped them by `Generation`. What elements we have here:- The box is where the middle half of the values land.- The line in the middle of the box is the median value.- The lower end of the box is the 1st quartile.- The upper end of the box is the 3rd quartile.- Whiskers extend to the most extreme values within 1.5 times the interquartile range beyond the box.- The little diamond is an outlier value.The full documentation on the `boxplot` is available [here](https://seaborn.pydata.org/generated/seaborn.boxplot.html).
###Code
sns.boxplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Cluster mapsCluster maps are a good way to show correlation between variables in the dataset. Here I selected all of the Pokemon stats and created a correlation matrix of them. You can easily see which ones correlate with each other and in what way, thanks to the clustering (shown above and to the left). Clustermaps are there to show you a hierarchical overview of the data.The full documentation for `clustermap` is available [here](https://seaborn.pydata.org/generated/seaborn.clustermap.html).
###Code
stats = ["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"]
sns.clustermap(
pokemon[stats].corr(),
cmap="mako",
linewidths=.75,
)
###Output
_____no_output_____
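###Markdown
To see the numbers behind the clustermap (an added illustration, not part of the original notebook), we can print the underlying correlation matrix directly; the clustering above simply reorders these rows and columns so that similar stats sit next to each other.
###Code
# Raw correlation matrix that feeds the clustermap, rounded for readability
pokemon[stats].corr().round(2)
###Output
_____no_output_____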
###Markdown
Exercises1. Read in the `avocado.csv` dataset - Set the index properly2. Create a line plot showing the average price of avocado over months3. Create a horizontal bar plot showing 10 highest mean prices depending on region4. Create a count plot for the year of the avocado5. Create a scatter plot of average price vs `Total Volume` for year 2018, when the `Total Volume` is lower than `1e6`6. Show the `AveragePrice` distribution.7. Create a clustermap of avocado correlations.8. Show a boxplot of average price per year.
###Code
# 1. Read in the avocado.csv dataset
# Set the index properly
avocado = pd.read_csv("avocado.csv", index_col=0)
avocado
# 2. Create a line plot showing the average price of avocado over months
s = pd.to_datetime(avocado["Date"])
s = avocado.groupby(s.dt.strftime('%B'))['AveragePrice'].mean()
s.plot.line()
# 3. Create a horizontal bar plot showing 10 highest mean prices depending on region
s = avocado.groupby(avocado["region"])['AveragePrice'].max().sort_values(ascending=False)[:10]
s.plot.barh()
# 4. Create a count plot for the year of the avocado
sns.countplot(
x="year",
data=avocado,
)
# 5. Create a scatter plot of average price vs `Total Volume` for year 2018, when the `Total Volume` is lower than `1e6`
sns.scatterplot(
x="AveragePrice",
y="Total Volume",
data=avocado.loc[(avocado['Total Volume'] < 10**6) & (avocado.year == 2018)],
)
# 6. Show the `AveragePrice` distribution.
sns.distplot(avocado.AveragePrice);
# 7. Create a clustermap of avocado correlations.
stats = ["Date", "AveragePrice", "Total Volume", "4046", "4225", "4770", "Total Bags", "Small Bags", "Large Bags", "XLarge Bags", "type", "year", "region"]
sns.clustermap(
avocado[stats].corr(),
cmap="mako",
linewidths=.75,
)
# 8. Show a boxplot of average price per year.
sns.boxplot(
x="year",
y="AveragePrice",
data=avocado,
)
###Output
_____no_output_____
###Markdown
Data VisualizationOne of the best ways to communicate what is happening in your dataset is through visualizations. Visualizations make it easy for a human to understand millions of rows of data at a glance. Today we will go through basics of the [Seaborn library](https://seaborn.pydata.org/).> Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.To install seaborn we need to run either:```pip install seaborn```or:```conda install seaborn```Let's start by importing packages:
###Code
# Imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Let's read our PoKeMoN dataset. Below you can see the column descriptions:| Column | Description || :--- | :--- || `Name` | The name of the pokemon || `Type 1` | The type of the pokemon we will use || `Type 2` | Later generations were using dual types, we won't be using this here || `Total` | The sum of all stat columns || `HP`, `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed` | Pokemon stats || `Generation` | When was this pokemon introduced || `Legendary` | Is the pokemon a Legendary pokemon |
###Code
pokemon = pd.read_csv(
"datasets_121_280_Pokemon.csv",
index_col="#",
)
pokemon.sample(5)
###Output
_____no_output_____
###Markdown
We will set up the standard Seaborn theme by running the following command:
###Code
sns.set()
###Output
_____no_output_____
###Markdown
Line plotsOne of the basic types of plots is the line plot. Those are handled in seaborn by the `lineplot` function. By default, the plot aggregates over multiple y values at each value of x and shows an estimate of the central tendency and a confidence interval for that estimate.Full documentation of `lineplot` is available [here](https://seaborn.pydata.org/generated/seaborn.lineplot.html#seaborn.lineplot).Let's see how the Total stats changed over the generations.
###Code
sns.lineplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
We can have multiple values on a line plot as well. A useful technique to visualize multiple columns is melting the dataframe. Then we can just use `value` for the actual value of the column, and `variable` for the label as `hue`. The `ci` parameter hides the estimator shade you could see above.
###Code
# We melt the values, so that it is useful for data viz
generation_values = pd.melt(
pokemon,
id_vars=["Generation"],
value_vars=["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"],
)
fig, ax = plt.subplots()
sns.lineplot(
x="Generation",
y="value",
hue="variable",
ci=None,
data=generation_values,
)
fig.set_size_inches(12, 8)
###Output
_____no_output_____
###Markdown
Bar plotsAnother often used type of a plot is the Bar plot. You might remember the use of it in pandas:
###Code
# Bar plot using pandas
pokemon["Generation"].value_counts().plot.bar()
###Output
_____no_output_____
###Markdown
We will use the Seaborn `barplot` function to create bar plots. Here you can see an example of a `Total` bar plot depending on the `Generation`. The whiskers (vertical black lines) show the confidence interval of the estimate. Ideally the interval would be negligible, but we work with what we have.Full documentation of `barplot` is available [here](https://seaborn.pydata.org/generated/seaborn.barplot.html#seaborn.barplot).
###Code
sns.barplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
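###Markdown
As an added cross-check (not part of the original notebook), the bar heights are just the per-`Generation` means of `Total`, which we can compute directly:
###Code
# Mean Total per Generation; these match the bar heights above
pokemon.groupby("Generation")["Total"].mean()
###Output
_____no_output_____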
###Markdown
A nice alternative to this is the `countplot`, which automatically counts the observations in each category. Countplot documentation is available [here](https://seaborn.pydata.org/generated/seaborn.countplot.html#seaborn.countplot).
###Code
sns.countplot(
"Generation",
data=pokemon,
)
sns.countplot(
y="Type 1",
order=pokemon["Type 1"].value_counts().index,
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Categorical plotA nice way to create a plot is to use the `catplot` function. This function provides access to several axes-level functions that show the relationship between a numerical and one or more categorical variables using one of several visual representations. The kind parameter selects the underlying axes-level function to use, such as stripplots, swarms, boxplots, barplots and more. We will use a barplot here to show the `Total` means over `Generation`, depending on whether the pokemon was `Legendary` or not.Full documentation on `catplot` is available [here](https://seaborn.pydata.org/generated/seaborn.catplot.html#seaborn.catplot).
###Code
sns.catplot(
x="Generation",
y="Total",
hue="Legendary",
kind="bar",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Scatter plotA scatter plot is a relational plot that shows the relationship between the `x` and `y` values. The relationship between x and y can be shown for different subsets of the data using the hue, size, and style parameters. These parameters control what visual semantics are used to identify the different subsets. It is possible to show up to three dimensions independently by using all three semantic types, but this style of plot can be hard to interpret and is often ineffective. Using redundant semantics (i.e. both hue and style for the same variable) can be helpful for making graphics more accessible.Full documentation on `scatterplot` is available [here](https://seaborn.pydata.org/generated/seaborn.scatterplot.html#seaborn.scatterplot).
###Code
sns.scatterplot(
x="Attack",
y="Defense",
hue="Legendary",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Swarm plotA similar plot to scatter is the swarm plot. It shows the distribution of a numerical variable across categories, drawing every observation as a non-overlapping point. Here we categorize the swarmplot by `Generation` to see what the distribution of starter-type (Water, Fire, Grass) pokemon was over the generations. We are also adding a `palette` to colour them accordingly.The full documentation for `swarmplot` is available [here](https://seaborn.pydata.org/generated/seaborn.swarmplot.html#seaborn.swarmplot).
###Code
starter_types = ["Water", "Fire", "Grass"]
palette = {
"Water": "#6890F0",
"Fire": "#F08030",
"Grass": "#78C850",
}
g = sns.swarmplot(
x="Generation",
y="Attack",
hue="Type 1",
palette=palette,
data=pokemon[pokemon["Type 1"].isin(starter_types)],
)
g.legend_.remove()
###Output
_____no_output_____
###Markdown
Distribution plotsA good way to show what is happening with our data is to show the distribution. The basic method to do that is the `distplot`. Below we can see the distribution of values for the `Total` column. We can see that most pokemon have a `Total` power of around 300 or 500. The line you can see is the KDE = Kernel Density Estimation, a non-parametric way to estimate the probability density function of a random variable. Kernel density estimation is a fundamental data smoothing problem where inferences about the population are made based on a finite data sample.Full documentation for the `distplot` is available [here](https://seaborn.pydata.org/generated/seaborn.distplot.html).
###Code
sns.distplot(pokemon["Total"])
###Output
_____no_output_____
###Markdown
To see how our data is distributed throughout the dataset, we can also use the `boxplot`. This is a fun little graph that shows us many parameters of the selected column. Here we selected the `Total` values and grouped them by `Generation`. What elements we have here:- The box is where the middle half of the values land.- The line in the middle of the box is the median value.- The lower end of the box is the 1st quartile.- The upper end of the box is the 3rd quartile.- Whiskers extend to the most extreme values within 1.5 times the interquartile range beyond the box.- The little diamond is an outlier value.The full documentation on the `boxplot` is available [here](https://seaborn.pydata.org/generated/seaborn.boxplot.html).
###Code
sns.boxplot(
x="Generation",
y="Total",
data=pokemon,
)
###Output
_____no_output_____
###Markdown
Cluster mapsCluster maps are a good way to show correlation between variables in the dataset. Here I selected all of the Pokemon stats and created a correlation matrix of them. You can easily see which ones correlate with each other and in what way, thanks to the clustering (shown above and to the left). Clustermaps are there to show you a hierarchical overview of the data.The full documentation for `clustermap` is available [here](https://seaborn.pydata.org/generated/seaborn.clustermap.html).
###Code
stats = ["Speed", "Defense", "Sp. Def", "Sp. Atk", "HP", "Attack"]
sns.clustermap(
pokemon[stats].corr(),
cmap="mako",
linewidths=.75,
)
###Output
_____no_output_____ |
notebook/Tutorial-Psi4.ipynb | ###Markdown
The Outgoing Gravitational Wave Weyl scalar $\psi_4$ Author: Zach Etienne[comment]: (Abstract: TODO)**Module Status:** Validated **Validation Notes:** This module has been validated to agree at roundoff error with the WeylScal4 ETK thorn in Cartesian coordinates (as it agrees to roundoff error with Patrick Nelson's [Cartesian Weyl Scalars & Invariants NRPy+ tutorial notebook](Tutorial-WeylScalarsInvariants-Cartesian.ipynb), which itself was validated against WeylScal4). In addition, in SinhSpherical coordinates it yields results for a ringing Brill-Lindquist black hole remnant that agree with black hole perturbation theory to more than 7 decades in amplitude, surpassing the agreement seen in Fig. 6 of [Ruchlin, Etienne, & Baumgarte](https://arxiv.org/pdf/1712.07658.pdf). NRPy+ Source Code for this module: [BSSN/Psi4.py](../edit/BSSN/Psi4.py) Introduction:This module constructs $\psi_4$, a quantity that is immensely useful when extracting gravitational wave content from a numerical relativity simulation. $\psi_4$ is related to the gravitational wave strain via$$\psi_4 = \ddot{h}_+ - i \ddot{h}_\times.$$We construct $\psi_4$ from the standard ADM spatial metric $\gamma_{ij}$ and extrinsic curvature $K_{ij}$, and their derivatives. The full expression is given by Eq. 5.1 in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf):\begin{align}\psi_4 &= \left[ {R}_{ijkl}+2K_{i[k}K_{l]j}\right]{n}^i\bar{m}^j{n}^k\bar{m}^l \\& -8\left[ K_{j[k,l]}+{\Gamma }_{j[k}^pK_{l]p}\right]{n}^{[0}\bar{m}^{j]}{n}^k\bar{m}^l \\& +4\left[ {R}_{jl}-K_{jp}K_l^p+KK_{jl}\right]{n}^{[0}\bar{m}^{j]}{n}^{[0}\bar{m}^{l]},\end{align}Note that $\psi_4$ is complex, with the imaginary components originating from the tetrad vector $m^\mu$. This module does not specify a tetrad; instead it only constructs the above expression leaving $m^\mu$ and $n^\mu$ unspecified. The [next module on tetrads defines these tetrad quantities](Tutorial-Psi4_tetrads.ipynb) (currently only a quasi-Kinnersley tetrad is supported). A Note on Notation:As is standard in NRPy+, * Greek indices range from 0 to 3, inclusive, with the zeroth component denoting the temporal (time) component.* Latin indices range from 0 to 2, inclusive, with the zeroth component denoting the first spatial component.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$This tutorial notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed NRPy+ modules1. [Step 2](riemann): Constructing the 3-Riemann tensor $R_{ik\ell m}$1. [Step 3](rank4termone): Constructing the rank-4 tensor in Term 1 of $\psi_4$: $R_{ijkl} + 2 K_{i[k} K_{l]j}$1. [Step 4](rank3termtwo): Constructing the rank-3 tensor in Term 2 of $\psi_4$: $-8 \left(K_{j[k,l]} + \Gamma^{p}_{j[k} K_{l]p} \right)$1. [Step 5](rank2termthree): Constructing the rank-2 tensor in term 3 of $\psi_4$: $+4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right)$1. [Step 6](psifour): Constructing $\psi_4$ through contractions of the above terms with arbitrary tetrad vectors $n^\mu$ and $m^\mu$1. [Step 7](code_validation): Code Validation against `BSSN.Psi4` NRPy+ module1. 
[Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize core NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$Let's start by importing all the needed modules from NRPy+:
###Code
# Step 1.a: import all needed modules from NRPy+:
import sympy as sp
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
import reference_metric as rfm
# Step 1.b: Set the coordinate system for the numerical grid
# Note that this parameter is assumed to be set
# prior to calling the Python Psi4.py module,
# so this Step will not appear there.
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.c: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.d: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
# Step 1.e: Import all ADM quantities as written in terms of BSSN quantities
import BSSN.ADM_in_terms_of_BSSN as AB
AB.ADM_in_terms_of_BSSN()
# Step 1.f: Initialize tetrad vectors.
# mre4U = $\text{Re}(m^\mu)$
# mim4U = $\text{Im}(m^\mu)$, and
# n4U = $n^\mu$
# Note that in the separate Python Psi4.py
# module, these will be set to the tetrad
# chosen within the Psi4_tetrads.py module.
# We choose the most general form for the
# tetrad vectors here instead, to ensure complete
# code validation.
mre4U = ixp.declarerank1("mre4U",DIM=4)
mim4U = ixp.declarerank1("mim4U",DIM=4)
n4U = ixp.declarerank1("n4U" ,DIM=4)
###Output
_____no_output_____
###Markdown
Step 2: Constructing the 3-Riemann tensor $R_{ik\ell m}$ \[Back to [top](toc)\]$$\label{riemann}$$Analogously to Christoffel symbols, the Riemann tensor is a measure of the curvature of an $N$-dimensional manifold. Thus the 3-Riemann tensor is not simply a projection of the 4-Riemann tensor (see e.g., Eq. 2.7 of [Campanelli *et al* (1998)](https://arxiv.org/pdf/gr-qc/9803058.pdf) for the relation between 4-Riemann and 3-Riemann), as $N$-dimensional Riemann tensors are meant to define a notion of curvature given only the associated $N$-dimensional metric. So, given the ADM 3-metric, the Riemann tensor in arbitrary dimension is given by the 3-dimensional version of Eq. 1.19 in Baumgarte & Shapiro's *Numerical Relativity*. I.e.,$$R^i_{jkl} = \partial_k \Gamma^{i}_{jl} - \partial_l \Gamma^{i}_{jk} + \Gamma^i_{mk} \Gamma^m_{jl} - \Gamma^{i}_{ml} \Gamma^{m}_{jk},$$where $\Gamma^i_{jk}$ is the Christoffel symbol associated with the 3-metric $\gamma_{ij}$:$$\Gamma^l_{ij} = \frac{1}{2} \gamma^{lk} \left(\gamma_{ki,j} + \gamma_{kj,i} - \gamma_{ij,k} \right) $$Notice that this equation for the Riemann tensor is equivalent to the equation given in the Wikipedia article on [Formulas in Riemannian geometry](https://en.wikipedia.org/w/index.php?title=List_of_formulas_in_Riemannian_geometry&oldid=882667524):$$R^\ell{}_{ijk}=\partial_j \Gamma^\ell{}_{ik}-\partial_k\Gamma^\ell{}_{ij}+\Gamma^\ell{}_{js}\Gamma_{ik}^s-\Gamma^\ell{}_{ks}\Gamma^s{}_{ij},$$with the replacements $i\to \ell$, $j\to i$, $k\to j$, $l\to k$, and $s\to m$. Wikipedia also provides a simpler form in terms of second-derivatives of three-metric itself (using the definition of Christoffel symbol), so that we need not define derivatives of the Christoffel symbol:$$R_{ik\ell m}=\frac{1}{2}\left(\gamma_{im,k\ell} + \gamma_{k\ell,im}- \gamma_{i\ell,km}- \gamma_{km,i\ell} \right)+\gamma_{np} \left(\Gamma^n{}_{k\ell} \Gamma^p{}_{im} - \Gamma^n{}_{km} \Gamma^p{}_{i\ell} \right).$$First we construct the term on the left:
###Code
# Step 2: Construct the (rank-4) Riemann curvature tensor associated with the ADM 3-metric:
RDDDD = ixp.zerorank4()
gammaDDdDD = AB.gammaDDdDD
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RDDDD[i][k][l][m] = sp.Rational(1,2) * \
(gammaDDdDD[i][m][k][l] + gammaDDdDD[k][l][i][m] - gammaDDdDD[i][l][k][m] - gammaDDdDD[k][m][i][l])
###Output
_____no_output_____
###Markdown
... then we add the term on the right:
###Code
# ... then we add the term on the right:
gammaDD = AB.gammaDD
GammaUDD = AB.GammaUDD
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
for n in range(DIM):
for p in range(DIM):
RDDDD[i][k][l][m] += gammaDD[n][p] * \
(GammaUDD[n][k][l]*GammaUDD[p][i][m] - GammaUDD[n][k][m]*GammaUDD[p][i][l])
###Output
_____no_output_____
###Markdown
Step 3: Constructing the rank-4 tensor in Term 1 of $\psi_4$: $R_{ijkl} + 2 K_{i[k} K_{l]j}$ \[Back to [top](toc)\]$$\label{rank4termone}$$Following Eq. 5.1 in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf), the rank-4 tensor in the first term of $\psi_4$ is given by$$R_{ijkl} + 2 K_{i[k} K_{l]j} = R_{ijkl} + K_{ik} K_{lj} - K_{il} K_{kj}$$
###Code
# Step 3: Construct the (rank-4) tensor in term 1 of psi_4 (referring to Eq 5.1 in
# Baker, Campanelli, Lousto (2001); https://arxiv.org/pdf/gr-qc/0104063.pdf
rank4term1DDDD = ixp.zerorank4()
KDD = AB.KDD
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
rank4term1DDDD[i][j][k][l] = RDDDD[i][j][k][l] + KDD[i][k]*KDD[l][j] - KDD[i][l]*KDD[k][j]
###Output
_____no_output_____
###Markdown
Step 4: Constructing the rank-3 tensor in Term 2 of $\psi_4$: $-8 \left(K_{j[k,l]} + \Gamma^{p}_{j[k} K_{l]p} \right)$ \[Back to [top](toc)\]$$\label{rank3termtwo}$$Following Eq. 5.1 in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf), the rank-3 tensor in the second term of $\psi_4$ is given by$$-8 \left(K_{j[k,l]} + \Gamma^{p}_{j[k} K_{l]p} \right)$$First let's construct the first term in this sum: $K_{j[k,l]} = \frac{1}{2} (K_{jk,l} - K_{jl,k})$:
###Code
# Step 4: Construct the (rank-3) tensor in term 2 of psi_4 (referring to Eq 5.1 in
# Baker, Campanelli, Lousto (2001); https://arxiv.org/pdf/gr-qc/0104063.pdf
rank3term2DDD = ixp.zerorank3()
KDDdD = AB.KDDdD
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
rank3term2DDD[j][k][l] = sp.Rational(1,2)*(KDDdD[j][k][l] - KDDdD[j][l][k])
###Output
_____no_output_____
###Markdown
... then we construct the second term in this sum: $\Gamma^{p}_{j[k} K_{l]p} = \frac{1}{2} (\Gamma^{p}_{jk} K_{lp}-\Gamma^{p}_{jl} K_{kp})$:
###Code
# ... then we construct the second term in this sum:
# \Gamma^{p}_{j[k} K_{l]p} = \frac{1}{2} (\Gamma^{p}_{jk} K_{lp}-\Gamma^{p}_{jl} K_{kp}):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for p in range(DIM):
rank3term2DDD[j][k][l] += sp.Rational(1,2)*(GammaUDD[p][j][k]*KDD[l][p] - GammaUDD[p][j][l]*KDD[k][p])
###Output
_____no_output_____
###Markdown
Finally, we multiply the term by $-8$:
###Code
# Finally, we multiply the term by $-8$:
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
rank3term2DDD[j][k][l] *= sp.sympify(-8)
###Output
_____no_output_____
###Markdown
Step 5: Constructing the rank-2 tensor in term 3 of $\psi_4$: $+4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right)$ \[Back to [top](toc)\]$$\label{rank2termthree}$$Following Eq. 5.1 in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf), the rank-2 tensor in the third term of $\psi_4$ is given by$$+4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right),$$where\begin{align}R_{jl} &= R^i_{jil} \\&= \gamma^{im} R_{ijml} \\K &= K^i_i \\&= \gamma^{im} K_{im}\end{align}Let's build the components of this term: $R_{jl}$, $K^p_l$, and $K$, as defined above:
###Code
# Step 5: Construct the (rank-2) tensor in term 3 of psi_4 (referring to Eq 5.1 in
# Baker, Campanelli, Lousto (2001); https://arxiv.org/pdf/gr-qc/0104063.pdf
# Step 5.1: Construct 3-Ricci tensor R_{ij} = gamma^{im} R_{ijml}
RDD = ixp.zerorank2()
gammaUU = AB.gammaUU
for j in range(DIM):
for l in range(DIM):
for i in range(DIM):
for m in range(DIM):
RDD[j][l] += gammaUU[i][m]*RDDDD[i][j][m][l]
# Step 5.2: Construct K^p_l = gamma^{pi} K_{il}
KUD = ixp.zerorank2()
for p in range(DIM):
for l in range(DIM):
for i in range(DIM):
KUD[p][l] += gammaUU[p][i]*KDD[i][l]
# Step 5.3: Construct trK = gamma^{ij} K_{ij}
trK = sp.sympify(0)
for i in range(DIM):
for j in range(DIM):
trK += gammaUU[i][j]*KDD[i][j]
###Output
_____no_output_____
###Markdown
Next we put these terms together to construct the entire term:$$+4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right),$$
###Code
# Next we put these terms together to construct the entire term in parentheses:
# +4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right),
rank2term3DD = ixp.zerorank2()
for j in range(DIM):
for l in range(DIM):
rank2term3DD[j][l] = RDD[j][l] + trK*KDD[j][l]
for p in range(DIM):
rank2term3DD[j][l] += - KDD[j][p]*KUD[p][l]
# Finally we multiply by +4:
for j in range(DIM):
for l in range(DIM):
rank2term3DD[j][l] *= sp.sympify(4)
###Output
_____no_output_____
###Markdown
Step 6: Constructing $\psi_4$ through contractions of the above terms with an arbitrary tetrad vectors $m^\mu$ and $n^\mu$ \[Back to [top](toc)\]$$\label{psifour}$$Eq. 5.1 in [Baker, Campanelli, Lousto (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf) writes $\psi_4$ (which is complex) as the contraction of each of the above terms with products of tetrad vectors:\begin{align}\psi_4 &= \left[ {R}_{ijkl}+2K_{i[k}K_{l]j}\right]{n}^i\bar{m}^j{n}^k\bar{m}^l \\& -8\left[ K_{j[k,l]}+{\Gamma }_{j[k}^pK_{l]p}\right]{n}^{[0}\bar{m}^{j]}{n}^k\bar{m}^l \\& +4\left[ {R}_{jl}-K_{jp}K_l^p+KK_{jl}\right]{n}^{[0}\bar{m}^{j]}{n}^{[0}\bar{m}^{l]},\end{align}where $\bar{m}^\mu$ is the complex conjugate of $m^\mu$, and $n^\mu$ is real. The third term is given by\begin{align}{n}^{[0}\bar{m}^{j]}{n}^{[0}\bar{m}^{l]}&= \frac{1}{2}({n}^{0}\bar{m}^{j} - {n}^{j}\bar{m}^{0} )\frac{1}{2}({n}^{0}\bar{m}^{l} - {n}^{l}\bar{m}^{0} )\\&= \frac{1}{4}({n}^{0}\bar{m}^{j} - {n}^{j}\bar{m}^{0} )({n}^{0}\bar{m}^{l} - {n}^{l}\bar{m}^{0} )\\&= \frac{1}{4}({n}^{0}\bar{m}^{j}{n}^{0}\bar{m}^{l} - {n}^{j}\bar{m}^{0}{n}^{0}\bar{m}^{l} - {n}^{0}\bar{m}^{j}{n}^{l}\bar{m}^{0} + {n}^{j}\bar{m}^{0}{n}^{l}\bar{m}^{0})\end{align}Only $m^\mu$ is complex, so we can separate the real and imaginary parts of $\psi_4$ by hand, defining $M^\mu$ to now be the real part of $\bar{m}^\mu$ and $\mathcal{M}^\mu$ to be the imaginary part. All of the above products are of the form ${n}^\mu\bar{m}^\nu{n}^\eta\bar{m}^\delta$, so let's evalute the real and imaginary parts of this product once, for all such terms:\begin{align}{n}^\mu\bar{m}^\nu{n}^\eta\bar{m}^\delta&= {n}^\mu(M^\nu - i \mathcal{M}^\nu){n}^\eta(M^\delta - i \mathcal{M}^\delta) \\&= \left({n}^\mu M^\nu {n}^\eta M^\delta -{n}^\mu \mathcal{M}^\nu {n}^\eta \mathcal{M}^\delta \right)+i \left(-{n}^\mu M^\nu {n}^\eta \mathcal{M}^\delta-{n}^\mu \mathcal{M}^\nu {n}^\eta M^\delta\right)\end{align}
###Code
# Step 6: Construct real & imaginary parts of psi_4
# by contracting constituent rank 2, 3, and 4
# tensors with input tetrads mre4U, mim4U, & n4U.
def tetrad_product__Real_psi4(n,Mre,Mim, mu,nu,eta,delta):
return +n[mu]*Mre[nu]*n[eta]*Mre[delta] - n[mu]*Mim[nu]*n[eta]*Mim[delta]
def tetrad_product__Imag_psi4(n,Mre,Mim, mu,nu,eta,delta):
return -n[mu]*Mre[nu]*n[eta]*Mim[delta] - n[mu]*Mim[nu]*n[eta]*Mre[delta]
# We split psi_4 into three pieces, to expedite & possibly parallelize C code generation.
psi4_re_pt = [sp.sympify(0),sp.sympify(0),sp.sympify(0)]
psi4_im_pt = [sp.sympify(0),sp.sympify(0),sp.sympify(0)]
# First term:
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
psi4_re_pt[0] += rank4term1DDDD[i][j][k][l] * \
tetrad_product__Real_psi4(n4U,mre4U,mim4U, i+1,j+1,k+1,l+1)
psi4_im_pt[0] += rank4term1DDDD[i][j][k][l] * \
tetrad_product__Imag_psi4(n4U,mre4U,mim4U, i+1,j+1,k+1,l+1)
# Second term:
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
psi4_re_pt[1] += rank3term2DDD[j][k][l] * \
sp.Rational(1,2)*(+tetrad_product__Real_psi4(n4U,mre4U,mim4U, 0,j+1,k+1,l+1)
-tetrad_product__Real_psi4(n4U,mre4U,mim4U, j+1,0,k+1,l+1) )
psi4_im_pt[1] += rank3term2DDD[j][k][l] * \
sp.Rational(1,2)*(+tetrad_product__Imag_psi4(n4U,mre4U,mim4U, 0,j+1,k+1,l+1)
-tetrad_product__Imag_psi4(n4U,mre4U,mim4U, j+1,0,k+1,l+1) )
# Third term:
for j in range(DIM):
for l in range(DIM):
psi4_re_pt[2] += rank2term3DD[j][l] * \
(sp.Rational(1,4)*(+tetrad_product__Real_psi4(n4U,mre4U,mim4U, 0,j+1,0,l+1)
-tetrad_product__Real_psi4(n4U,mre4U,mim4U, j+1,0,0,l+1)
-tetrad_product__Real_psi4(n4U,mre4U,mim4U, 0,j+1,l+1,0)
+tetrad_product__Real_psi4(n4U,mre4U,mim4U, j+1,0,l+1,0)))
psi4_im_pt[2] += rank2term3DD[j][l] * \
(sp.Rational(1,4)*(+tetrad_product__Imag_psi4(n4U,mre4U,mim4U, 0,j+1,0,l+1)
-tetrad_product__Imag_psi4(n4U,mre4U,mim4U, j+1,0,0,l+1)
-tetrad_product__Imag_psi4(n4U,mre4U,mim4U, 0,j+1,l+1,0)
+tetrad_product__Imag_psi4(n4U,mre4U,mim4U, j+1,0,l+1,0)))
###Output
_____no_output_____
###Markdown
Step 7: Code validation against `BSSN.Psi4` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.Psi4](../edit/BSSN/Psi4.py) module.By default, we compare all quantities in Spherical coordinates, though other coordinate systems may be chosen.
###Code
# Call the BSSN_RHSs() function from within the
# BSSN/BSSN_RHSs.py module,
# which should do exactly the same as in Steps 1-16 above.
import BSSN.Psi4 as BP4
BP4.Psi4(specify_tetrad=False)
print("Consistency check between this tutorial and BSSN.Psi4 NRPy+ module: ALL SHOULD BE ZERO.")
for part in range(3):
print("psi4_im_pt["+str(part)+"] - BP4.psi4_im_pt["+str(part)+"] = " + str(psi4_im_pt[part] - BP4.psi4_im_pt[part]))
print("psi4_re_pt["+str(part)+"] - BP4.psi4_re_pt["+str(part)+"] = " + str(psi4_re_pt[part] - BP4.psi4_re_pt[part]))
###Output
Consistency check between this tutorial and BSSN.Psi4 NRPy+ module: ALL SHOULD BE ZERO.
psi4_im_pt[0] - BP4.psi4_im_pt[0] = 0
psi4_re_pt[0] - BP4.psi4_re_pt[0] = 0
psi4_im_pt[1] - BP4.psi4_im_pt[1] = 0
psi4_re_pt[1] - BP4.psi4_re_pt[1] = 0
psi4_im_pt[2] - BP4.psi4_im_pt[2] = 0
psi4_re_pt[2] - BP4.psi4_re_pt[2] = 0
###Markdown
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Psi4.pdf](Tutorial-Psi4.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Psi4.ipynb
!pdflatex -interaction=batchmode Tutorial-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Psi4.tex
!pdflatex -interaction=batchmode Tutorial-Psi4.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-Psi4.ipynb to latex
[NbConvertApp] Writing 65590 bytes to Tutorial-Psi4.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
deepdrugdataset/plotAverageAtomicSimilarities.ipynb | ###Markdown
Pymol Viz This all needs pipelining properly, but now we have the dzs giving the contribution of each residue to the SVM. Carl makes a density field $ \rho_B(r) = \sum_{j \in B} \delta_{z_j, B} N \left( r_j, \sigma_j \right)$, i.e. a bunch of atom-centred Gaussians of width $\sigma = 0.5 A$. I think I'll just colour residues using the b-factor and then spectrum it.
###Code
# imports used below
import glob
import os
import subprocess

import numpy as np
from IPython.display import Image, display

files = glob.glob("*_heme*.npy")
labels = set(x[:10] for x in files)
print(labels)
for label in labels:
heme_means = np.load(f"{label}_vshemes.npy")*1000 # scale up
nucleo_means = np.load(f"{label}_vsnucleos.npy")*1000
# I want to highlight the most similar atoms. So zero out the atoms which have similarity less than the top-20 value.
top20_threshold = sorted(heme_means)[-20]
for i, x in enumerate(heme_means):
if x < top20_threshold:
heme_means[i] = 0
top20_threshold = sorted(nucleo_means)[-20]
for i, x in enumerate(nucleo_means):
if x < top20_threshold:
nucleo_means[i] = 0
with open(f"../../inputs/protein-heme/{label[:5]}_converted.pdb") as flines:
data = flines.readlines()
residueSequenceNumbers = []
for line in data:
if line.startswith("ATOM"):
residueSequenceNumbers.append(int(line[22:26].strip()))
assert len(residueSequenceNumbers) == len(heme_means)
pymolScript = f"load ../../inputs/protein-heme/{label[:5]}.pdb, {label[:5]}\n"
pymolScript += f"alter {label[:5]}, b=-1\n"
for resi,b in zip(residueSequenceNumbers, heme_means): # might not work if the residue ids are off
pymolScript += f"alter resi {resi}, b={b}\n"
pymolScript += f"""
#formatting
bg_color white
hide all
#show sticks
show cartoon
spectrum b, blue_red
set opaque_background=0
set antialias = on
set line_smooth = 1
set depth_cue = 1
set specular = 1
set surface_quality = 1
set stick_quality = 15
set sphere_quality = 2
set ray_trace_fog = 0.8
set light = (-0.2,0,-1)
set ray_shadows, 0
set surface_mode, 1
set cartoon_side_chain_helper,on
zoom
rebuild
"""
pymolScript += f"save {label[:5]}.pse \n"
pymolScript += f"""
set ray_trace_mode = 1
png {label[:5]}.png, width=10cm, dpi=300, ray=1
"""
with open("temp.pml", mode='w') as flines:
flines.write(pymolScript)
# Run quietly
subprocess.run(["pymol", "-c", "temp.pml"])
os.remove("temp.pml")
display(Image(f"{label[:5]}.png"))
# os.remove(f"{pdbRef}.png")
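# --- Hedged sketch, not part of the original pipeline: the density-field route ---
# Instead of colouring by b-factor, the density rho_B(r) = sum_j delta_{z_j,B} N(r_j, sigma)
# described above could be evaluated directly. `atom_coords` (M x 3) and `atom_weights`
# (M,) are hypothetical inputs here; sigma = 0.5 Angstrom as quoted in the text.
def gaussian_density(points, atom_coords, atom_weights, sigma=0.5):
    """Sum of atom-centred isotropic 3D Gaussians, evaluated at `points` (N x 3)."""
    norm = (2.0 * np.pi * sigma**2) ** -1.5
    diffs = points[:, None, :] - atom_coords[None, :, :]   # shape (N, M, 3)
    sq_dist = np.sum(diffs**2, axis=-1)                    # shape (N, M)
    return norm * (atom_weights[None, :] * np.exp(-sq_dist / (2.0 * sigma**2))).sum(axis=1)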
###Output
_____no_output_____ |
Credit Risk Evaluator.ipynb | ###Markdown
Prework: clean data, data in 2 seperate files for train and test Check Columns are the SAME
###Code
# Display all the columns for the dataframes (not-truncated)
pd.set_option('display.max_columns', None)
def check_test_and_train_matching_columns(train_df, test_df):
# Display warning if columns do not match
inner_join = set(train_df.columns) & set(test_df.columns)
full_join = set(train_df.columns) | set(test_df.columns)
unmatching_columns = list(full_join - inner_join)
if (len(unmatching_columns) != 0):
print("Columns count does not match at...")
return unmatching_columns
else:
print("Columns match!")
# the CSVs were written with their index, which shows up as an "Unnamed: 0" column -- drop it on load
# (the separate "index" column is still present and is dropped further down)
train_df = pd.read_csv(Path('Resources/2019loans.csv')).drop(['Unnamed: 0'], axis=1)
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv')).drop(['Unnamed: 0'], axis=1)
check_test_and_train_matching_columns(train_df,test_df)
train_df.head()
train_df.info()
# can drop the first column: is a duplicate of index
test_df.tail()
test_df.info()
# can drop the first column: it is a duplicate of the index (happened when grabbing the files)
# index is a column, don't want as a feature so remove from test and train
# delete column without having to reassign dataFrame
# df.drop('column_name', axis=1, inplace=True)
train_df.drop('index', axis=1, inplace=True)
test_df.drop('index', axis=1, inplace=True)
train_df.head()
train_df.head()
###Output
_____no_output_____
###Markdown
Review some of the data we have:
###Code
# some of our key X values: loan amount, home_ownership, annual_inc -- see if they are balanced
# our y value: loan_status, particularly high_risk (the known label)
# not sure of any exact specifications, so keep all as is (review notes in readme.md)
train_df['loan_status'].value_counts()
'''
loan_amt value counts
10000.0 1042
20000.0 695
15000.0 588
40000.0 580
12000.0 466
...
23050.0 1
17050.0 1
30050.0 1
6650.0 1
35325.0 1
home_ownership value counts:
MORTGAGE 5800
RENT 4944
OWN 1371
ANY 65
annual_inc value counts:
60000.0 423
75000.0 390
65000.0 388
80000.0 368
50000.0 366
...
36465.0 1
111843.0 1
63092.0 1
76980.0 1
46080.0 1
loan_status:
low_risk 6090
high_risk 6090
'''
###Output
_____no_output_____
###Markdown
Preprocessing Data
###Code
# Convert categorical data to numeric (object columns -> get_dummies) and separate the target feature from X
# Want dummies for all categorical columns (one-hot encoding; columns could also be encoded individually)
#For training data
y_train = LabelEncoder().fit_transform(train_df['loan_status'])
#train_df['Label'] = LabelEncoder().fit_transform(train_df['loan_status'])
'''
i Label loan_status
0 1 low_risk
1 1 low_risk
2 1 low_risk
3 1 low_risk
12175 0 high_risk
12176 0 high_risk
12177 0 high_risk
12178 0 high_risk
12179 0 high_risk
'''
# want to set the high and low risk columns for y, labels as dict
X_train = train_df.drop(columns = ["loan_status"]) # remove the y label
X_train = pd.get_dummies(X_train) # converts all categorical data to numeric
X_train.head()
print(y_train)
# low risk = 1
# high risk = 0
print(len(y_train))
print(len(X_train))
print(X_train.shape, y_train.shape)
# Convert categorical data to numeric and separate target feature
# For testing data
y_test = LabelEncoder().fit_transform(test_df['loan_status'])
X_test = test_df.drop(columns = ["loan_status"])
X_test = pd.get_dummies(X_test)
X_test.head()
print(y_test)
# low risk = 1
# high risk = 0
print(len(y_test))
print(len(X_test))
print(X_test.shape, y_test.shape)
# can see that X_test is missing one of the dummy columns because its shape has 91 columns rather than 92
# add missing dummy variables to testing set
# find any missing column
check_test_and_train_matching_columns(X_train, X_test)
# set the missing value for the column debt_settlement_flag_N=1 , debt_settlement_flag_Y=0
# add the column with the value of 0, then check the head and/or shape
X_test['debt_settlement_flag_Y'] = 0
X_test.shape
X_test.head()
###Output
_____no_output_____
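###Markdown
Setting the missing `debt_settlement_flag_Y` column by hand (as above) works. A more general idiom, sketched here on toy frames rather than the loan data, is to reindex the test dummies against the training columns: any dummy missing from the test set becomes an all-zero column and the column order stays identical.
###Code
# Sketch on toy data: aligning dummy columns between train and test frames.
import pandas as pd

train_dummies = pd.get_dummies(pd.DataFrame({"flag": ["N", "Y", "N"]}))
test_dummies = pd.get_dummies(pd.DataFrame({"flag": ["N", "N", "N"]}))  # no "Y" value present

# Missing dummies are filled with 0; extra columns (if any) would be dropped.
test_aligned = test_dummies.reindex(columns=train_dummies.columns, fill_value=0)
print(test_aligned.columns.tolist())  # ['flag_N', 'flag_Y']
###Output
_____no_output_____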
###Markdown
Prediction - Unscaled Data Prediction UNSCALED Data: Random Forest Classifier should perform better than Logistic Regression because the dataset consists of categorical data, which tends to work best with Random Forest models. Logistic Regression, on the other hand, performs best with linearly separable datasets. https://scholar.smu.edu/cgi/viewcontent.cgi?article=1041&context=datasciencereview:~:text=In%20general%2C%20logistic%20regression%20performs,variables%20increases%20in%20a%20dataset. About the models: - Logistic Regression is a classification algorithm used to predict a discrete set of classes or categories (i.e., yes/no or true/false). It uses the sigmoid function to return a probability 'value'. - Random Forest Classifier is an ensemble learning method that constructs a set of decision trees from randomly selected subsets of the training set and combines them to return a 'prediction'. ------------ Logistic regression model - unscaled
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
reg_classifier = LogisticRegression()
reg_classifier
# the plain LogisticRegression() above gives convergence warnings,
# likely because the features are not scaled; solver settings are adjusted here so it converges
reg_classifier = LogisticRegression(
    #solver='lbfgs',    # struggled to converge on this unscaled data
    solver='liblinear', # generally better on small datasets with linear relationships
    #max_iter=1000,     # maximum number of optimizer iterations
    random_state=0
    #tol = .01          # stopping tolerance (tried various values)
)
reg_classifier.fit(X_train, y_train)
print(f"Training Data Score: {reg_classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {reg_classifier.score(X_test, y_test)}")
# logistic
###Output
Training Data Score: 0.707471264367816
Testing Data Score: 0.5746490854955338
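###Markdown
As a small illustration of the sigmoid point made in the model notes above (a sketch on synthetic data, not the loan set): for a binary LogisticRegression, applying the sigmoid to the linear `decision_function` score reproduces the predicted probability of the positive class.
###Code
# Sketch: sigmoid(decision_function) matches predict_proba for binary logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_demo, y_demo = make_classification(n_samples=200, n_features=5, random_state=0)
demo_clf = LogisticRegression().fit(X_demo, y_demo)

scores = demo_clf.decision_function(X_demo)  # linear score w.x + b
probs = 1.0 / (1.0 + np.exp(-scores))        # sigmoid of the linear score
print(np.allclose(probs, demo_clf.predict_proba(X_demo)[:, 1]))  # expected: True
###Output
_____no_output_____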
###Markdown
Random Forest - Unscaled
###Code
# Train a Random Forest Classifier model and print the model score
# n_estimators chosen by trial and error; a smaller dataset like this one can afford more trees
# (on large datasets, a high n_estimators can make training very slow)
rf_classifier = RandomForestClassifier(random_state=42, n_estimators=200).fit(X_train, y_train)
print(f'Training Score RF: {rf_classifier.score(X_train, y_train)}')
print(f'Testing Score RF: {rf_classifier.score(X_test, y_test)}')
# Looks to be overfitted because the training score is 1.0
# accuracy formula (tp + tn)/(tp+tn+fn+fp)
###Output
Training Score RF: 1.0
Testing Score RF: 0.638664398128456
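###Markdown
The 1.0 training score above is a classic sign of the forest memorising the training data. One way to rein it in, sketched here on synthetic data rather than the loan set, is to cap `max_depth` and require a minimum number of samples per leaf; the constrained forest usually trades a little training accuracy for a smaller train/test gap.
###Code
# Sketch: constraining a random forest to reduce overfitting (synthetic data only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

Xd, yd = make_classification(n_samples=2000, n_features=20, random_state=42)
Xd_tr, Xd_te, yd_tr, yd_te = train_test_split(Xd, yd, random_state=42)

deep = RandomForestClassifier(n_estimators=200, random_state=42).fit(Xd_tr, yd_tr)
shallow = RandomForestClassifier(n_estimators=200, max_depth=5,
                                 min_samples_leaf=10, random_state=42).fit(Xd_tr, yd_tr)

print("unconstrained:", deep.score(Xd_tr, yd_tr), deep.score(Xd_te, yd_te))
print("constrained:  ", shallow.score(Xd_tr, yd_tr), shallow.score(Xd_te, yd_te))
###Output
_____no_output_____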
###Markdown
Results - Unscaled (compare to prediction) Logistic Regression (LR): - Training Data Score: 0.707471264367816 - Testing Data Score: 0.5746490854955338 Random Forest Classifier (RF): - Training Score RF: 1.0 - Testing Score RF: 0.638664398128456 The Random Forest Classifier model did better when both models were not scaled, although it looks to be overfitted in training; the complexity may need to be reduced (for example by capping tree depth, as sketched above). This was the predicted case because we are using categorical data, which works well with the Random Forest Classifier. Logistic Regression, on the other hand, performs best with linearly separable datasets. Prediction - Scaled Data Random Forest Classifier: SCALING should not make a difference to the Random Forest. Scaling is only needed for distance-based algorithms; for tree-based algorithms (decision trees), scaling is not required.- https://www.kaggle.com/questions-and-answers/86923- https://www.sciencedirect.com/topics/engineering/random-forest Thus Logistic Regression, once scaled, may do better in this case than the Random Forest. - The Testing score for LR: 0.5746490854955338 (will change) - The Testing score for RF: 0.638664398128456 (will not change much) Because the values are somewhat close, and the RF values should not change much, I am estimating that Logistic Regression may do better. (Feature scaling can matter because coefficients of features with large variance are small and thus less penalized.)- https://scholar.smu.edu/cgi/viewcontent.cgi?article=1041&context=datasciencereview:~:text=In%20general%2C%20logistic%20regression%20performs,variables%20increases%20in%20a%20dataset.- https://www.quora.com/How-does-feature-scaling-affect-logistic-regression-model Scale DATA
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
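###Markdown
For reference, `StandardScaler` simply subtracts each column's training mean and divides by its training standard deviation; fitting only on the training split (as above) keeps test-set information out of those statistics. A minimal self-contained sketch:
###Code
# Sketch: what StandardScaler computes, applying the training statistics to a test row.
import numpy as np
from sklearn.preprocessing import StandardScaler

train_demo = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
test_demo = np.array([[2.0, 250.0]])

sc = StandardScaler().fit(train_demo)  # statistics come from the training data only
manual = (test_demo - train_demo.mean(axis=0)) / train_demo.std(axis=0)
print(np.allclose(sc.transform(test_demo), manual))  # expected: True
###Output
_____no_output_____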
###Markdown
Scaled Logistic Regression
###Code
# Train the Logistic Regression model on the scaled data and print the model score
scaled_LR_clf = LogisticRegression(solver='liblinear',random_state=0 ).fit(X_train_scaled, y_train)
print(f"Training Data Score: {scaled_LR_clf.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {scaled_LR_clf.score(X_test_scaled, y_test)}")
# confusion matrix scaled LR
y_true = y_test
y_pred = scaled_LR_clf.predict(X_test_scaled)
confusion_matrix(y_true, y_pred)
#confusion_matrix
#tn, fp,
#fn, tp
import seaborn as sns
import matplotlib.pyplot as plt
# show on a plot
matrix = confusion_matrix(y_true, y_pred)
matrix = matrix.astype('float') / matrix.sum(axis=1)[:, np.newaxis]
# Build the plot
plt.figure(figsize=(7,3.5))
sns.set(font_scale=1.4)
sns.heatmap(matrix, annot=True, annot_kws={'size':10},
cmap=plt.cm.Greens, linewidths=0.2)
# Add labels to the plot
#class_names = ['Low Risk', 'High Risk']
class_names = ['High Risk', 'Low Risk']
tick_marks = np.arange(len(class_names))
tick_marks2 = tick_marks + 0.5
plt.xticks(tick_marks, class_names, rotation=0)
plt.yticks(tick_marks2, class_names, rotation=0)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.title('Confusion Matrix for Logistic Regression Model')
plt.show()
###Output
_____no_output_____
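###Markdown
The percentages in the heatmap above come from normalising the raw confusion-matrix counts row by row. For reference, a small sketch (with made-up labels, not the loan predictions) of how the raw counts and error rates are usually unpacked:
###Code
# Sketch with hypothetical labels: 0 = high risk, 1 = low risk (same encoding as above).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true_demo = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_pred_demo = np.array([0, 1, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true_demo, y_pred_demo).ravel()
print("TN, FP, FN, TP:", tn, fp, fn, tp)
print("false positive rate:", fp / (fp + tn))  # high-risk clients labelled low risk
print("false negative rate:", fn / (fn + tp))  # low-risk clients labelled high risk
###Output
_____no_output_____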
###Markdown
The Confusion Matrix above shows that the Logistic Regression model struggled a bit more at predicting the High Risk label, but overall predicted both labels equally well: a balanced model. Depending on the business decision, one can try to improve on the scores. False-negative - One would see a predicted High Risk label in 529 cases (23%), but these clients are actually Low Risk. - It is not as detrimental to turn away low risk clients who do actually qualify. Thus, one might keep this number as it is, or it can be higher if it reduces the False-Positives. False-Positive - One would see a predicted Low Risk label in 563 cases (24%), but these clients are actually High Risk. - It may be more detrimental to accept loans from clients labelled low risk who are actually high risk. Thus, one might want to minimize this number. With a balanced model, trying to improve one score will offset another; it may be better to lower the False-Positive count.
###Code
# Classification Report
# low risk = 1
# high risk = 0
target_names = ['high risk', 'low risk']
print(classification_report(y_true, y_pred, target_names=target_names))
###Output
precision recall f1-score support
high risk 0.77 0.76 0.77 2351
low risk 0.76 0.77 0.77 2351
accuracy 0.77 4702
macro avg 0.77 0.77 0.77 4702
weighted avg 0.77 0.77 0.77 4702
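###Markdown
The precision, recall, and F1 values in a report like the one above follow directly from per-class true/false positive and negative counts. A small sketch of the definitions, using hypothetical counts rather than the ones above:
###Code
# Sketch: per-class metrics behind classification_report (hypothetical counts).
tp, fp, fn = 80, 20, 25

precision = tp / (tp + fp)  # of everything labelled this class, how much was correct
recall = tp / (tp + fn)     # of everything truly this class, how much was found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(round(precision, 2), round(recall, 2), round(f1, 2))
###Output
_____no_output_____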
###Markdown
The above classification report shows that precision for high risk is 77%; the model tries to avoid labeling things “high risk” that are not high risk. On the other hand, recall (sensitivity) is a bit lower for high risk at 76%, which means the classifier misses some 'high risks' because it is trying to be too careful. Because precision and recall are similar, the F1 score is also similar at 77%. High risk and low risk clients are being turned away for a loan at nearly the same rate, because the numbers almost exactly match. Suggested Model: Minimize the high risk false positives. In turn, the balance will be offset, although it may not be as detrimental to the lending facility. Scaled Random Forest
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
scaled_rf_cls = RandomForestClassifier(random_state=42, n_estimators=200).fit(X_train_scaled, y_train)
print(f'Training Score RF: {scaled_rf_cls.score(X_train_scaled, y_train)}')
print(f'Testing Score RF: {scaled_rf_cls.score(X_test_scaled, y_test)}')
# Confusion matrix for RF model
y_true2 = y_test
y_pred2 = scaled_rf_cls.predict(X_test_scaled)
confusion_matrix(y_true2, y_pred2)
# fn 1225
# show on a plot
matrix = confusion_matrix(y_true2, y_pred2)
matrix = matrix.astype('float') / matrix.sum(axis=1)[:, np.newaxis]
# Build the plot
plt.figure(figsize=(7,3.5))
sns.set(font_scale=1.4)
sns.heatmap(matrix, annot=True, annot_kws={'size':10},
cmap=plt.cm.Blues, linewidths=0.2)
# Add labels to the plot
#class_names = ['Low Risk', 'High Risk']
class_names = ['High Risk', 'Low Risk']
tick_marks = np.arange(len(class_names))
tick_marks2 = tick_marks + 0.5
plt.xticks(tick_marks, class_names, rotation=0)
plt.yticks(tick_marks2, class_names, rotation=0)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.title('Confusion Matrix for Random Forest Model')
plt.show()
# high tp tneg
# low tp tneg
###Output
_____no_output_____
###Markdown
The above Confusion Matrix shows that the Random Forest classifier struggled at predicting the Low Risk label. The algorithm is good for high risk at 80%, but struggles for low risk at 52%. The way forward would be to exclusively explore the low risk cases (pair plot them as a subset of only low risks). This way one can see the different feature sets. Since there is a struggle in the classification of this class, a clustering algorithm, e.g. DBSCAN (sketched after the report below), may be able to identify the noisy data in this low risk class. * Because there is a lot of noise in this low risk class, scaling the Random Forest did not help.
###Code
# Classification Report for Random Forest model
print(classification_report(y_true2, y_pred2, target_names=target_names))
###Output
precision recall f1-score support
high risk 0.60 0.80 0.69 2351
low risk 0.70 0.48 0.57 2351
accuracy 0.64 4702
macro avg 0.65 0.64 0.63 4702
weighted avg 0.65 0.64 0.63 4702
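###Markdown
Following up on the DBSCAN idea above: points that DBSCAN cannot attach to any dense cluster receive the label -1 and can be treated as noise. This is only a sketch on synthetic 2-D data; `eps` and `min_samples` would need tuning on the actual scaled loan features.
###Code
# Sketch: flagging noise points with DBSCAN (synthetic data only).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

Xb, _ = make_blobs(n_samples=300, centers=2, cluster_std=0.6, random_state=0)
Xb = np.vstack([Xb, np.random.RandomState(0).uniform(-10, 10, size=(20, 2))])  # add scatter

db = DBSCAN(eps=0.5, min_samples=5).fit(Xb)
noise_mask = db.labels_ == -1  # DBSCAN marks outliers with the label -1
print("points flagged as noise:", noise_mask.sum())
###Output
_____no_output_____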
###Markdown
Prediction before scaling the dataset - I think that the Random Forest Classifier will perform better prior to scaling the data because it is a tree-based model. - Also, the Logistic Regression model could be affected by possible outliers. - Logistic regression performs better when the number of noise variables is less than or equal to the number of explanatory variables.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, Y_train)
print(f"Training Data Score: {classifier.score(X_train, Y_train).round(5)}")
# score the held-out test set with the model fit on the training data (no refit on the test set)
print(f"Testing Data Score: {classifier.score(X_test, Y_test).round(5)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
RFCr = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, Y_train)
print(f"Training Score: {RFCr.score(X_train, Y_train)}")
print(f"Testing Score: {RFCr.score(X_test, Y_test).round(5)}")
###Output
Training Score: 1.0
Testing Score: 0.56827
###Markdown
Prediction vs Model Results - Based on the model score results, the Logistic Regression Model outperformed the Random Forest Classifier. My prediction was incorrect, probably because the data did not have significant outliers that could be stretching the model. - I also learned that the Random Forest model does not perform as well as the Logistic Regression model when the number of numeric variables is greater than the number of categorical variables, which is the case with our dataset. Model scores prior to scaling\; - **Logistic Regression\:** > Training Data Score\: 0.65074 > Testing Data Score\: 0.82433 - **Random Forest Classifier\:** > Training Score: 1.0 > Testing Score: 0.56827
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # reuse the scaler fitted on the training data
X_train_scaled
X_test_scaled
###Output
_____no_output_____
###Markdown
Prediction after scaling the data - I think that the model score will improve for the Logistic Regression model since we are dealing with a gradient-descent-based algorithm, and these are very sensitive to the range of the data points. - On the other hand, the Random Forest Classifier should not see any changes since it is based on tree-partitioning algorithms: basically a decision given a threshold, and this shouldn't change with scaling (a quick check of this is sketched below).
###Code
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(X_train_scaled, Y_train)
print(f"Training Scaled Data Score: {classifier.score(X_train_scaled, Y_train).round(5)}")
# score the held-out test set with the model fit on the scaled training data (no refit on the test set)
print(f"Testing Scaled Data Score: {classifier.score(X_test_scaled, Y_test).round(5)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clff = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, Y_train)
print(f'Training Score: {clff.score(X_train_scaled, Y_train)}')
print(f'Testing Score: {clff.score(X_test_scaled, Y_test).round(5)}')
###Output
Training Score: 1.0
Testing Score: 0.56912
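###Markdown
As a quick check of the claim above that tree-based models are largely insensitive to feature scaling (a sketch on synthetic data, not the loan set): multiplying every feature by a constant shifts the split thresholds but should leave a random forest's predictions unchanged.
###Code
# Sketch: a random forest's predictions are invariant to a uniform rescaling of the features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

Xs, ys = make_classification(n_samples=500, n_features=6, random_state=3)
Xs_big = Xs * 1000.0  # crude monotonic rescaling of every feature

rf_raw = RandomForestClassifier(random_state=3).fit(Xs, ys)
rf_big = RandomForestClassifier(random_state=3).fit(Xs_big, ys)

print(np.array_equal(rf_raw.predict(Xs), rf_big.predict(Xs_big)))  # expected: True
###Output
_____no_output_____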
###Markdown
Consider the Models I think the Random Forest Classifier will perform better than the Logistic Regression model.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
clf = LogisticRegression().fit(x_train_dummy, y_train)
print(f'training score: {clf.score(x_train_dummy, y_train)}')
print(f'testing score: {clf.score(x_test_dummy, y_test)}')
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier().fit(x_train_dummy, y_train)
print(f'training score: {clf.score(x_train_dummy, y_train)}')
print(f'testing score: {clf.score(x_test_dummy, y_test)}')
# Scale the data
scaler = StandardScaler().fit(x_train_dummy)
x_train_scaled = scaler.transform(x_train_dummy)
x_test_scaled = scaler.transform(x_test_dummy)
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression().fit(x_train_scaled, y_train)
print(f'training score: {clf.score(x_train_scaled, y_train)}')
print(f'testing score: {clf.score(x_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier().fit(x_train_scaled, y_train)
print(f'training score: {clf.score(x_train_scaled, y_train)}')
print(f'testing score: {clf.score(x_test_scaled, y_test)}')
###Output
training score: 1.0
testing score: 0.5974053594215227
###Markdown
Initial Thought: After doing some research on what the random forest classifier is most commonly used for, I believe it is better for categorical data. I think this model will perform better than the logistic regression model.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver='lbfgs', max_iter=30000)
classifier.fit(train_features, y_train)
print(f"Training Data Score: {classifier.score(train_features, y_train)}")
print(f"Testing Data Score: {classifier.score(test_features, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
f_classifier = RandomForestClassifier(random_state=1, n_estimators=500).fit(train_features, y_train)
print(f"Training Data Score: {f_classifier.score(train_features, y_train)}")
print(f"Testing Data Score: {f_classifier.score(test_features, y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.6180348787749894
###Markdown
Before Scaling The Random Forest Classifier seems to have better scores before scaling. However, the 1.0 score on the training data set raises concerns of overfitting.
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(train_features)
train_features_scaled = scaler.transform(train_features)
test_features_scaled = scaler.transform(test_features)
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(train_features_scaled, y_train)
print(f"Training Data Score: {classifier.score(train_features_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(test_features_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
from sklearn.ensemble import RandomForestClassifier
f_classifier = RandomForestClassifier(random_state=1, n_estimators=500).fit(train_features_scaled, y_train)
print(f"Training Data Score: {f_classifier.score(train_features_scaled, y_train)}")
print(f"Testing Data Score: {f_classifier.score(test_features_scaled, y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.6193109315185028
###Markdown
Prediction Random Forest will perform better since it is typically more accurate with categorical data than Logistic Regression
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
lr=LogisticRegression()
lr.fit(X_train_dums, y_train)
print(f'Training Score: {lr.score(X_train_dums, y_train)}')
print(f'Testing Score: {lr.score(X_test_dums, y_test)}')
# Train a Random Forest Classifier model and print the model score
rfc = RandomForestClassifier(random_state=1, n_estimators=100).fit(X_train_dums, y_train)
print(f'Training Score: {rfc.score(X_train_dums, y_train)}')
print(f'Testing Score: {rfc.score(X_test_dums, y_test)}')
###Output
Training Score: 1.0
Testing Score: 1.0
###Markdown
Results Random Forest performed the best. Prediction Scaling the data will lead to increased accuracy
###Code
# Scale the data
scaler = StandardScaler().fit(X_train_dums)
X_train_scaled = scaler.transform(X_train_dums)
X_test_scaled = scaler.transform(X_test_dums)
# Train the Logistic Regression model on the scaled data and print the model score
lr.fit(X_train_scaled, y_train)
print(f'Training Score: {lr.score(X_train_scaled, y_train)}')
print(f'Testing Score: {lr.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
rfc = RandomForestClassifier(random_state=1, n_estimators=100).fit(X_train_scaled, y_train)
print(f'Training Score: {rfc.score(X_train_scaled, y_train)}')
print(f'Testing Score: {rfc.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 1.0
###Markdown
1-Data Management-Clean up-Remove Unnamed column 1.1-Test Data
###Code
test_df = test_df.drop('Unnamed: 0', axis=1)
test_df
###Output
_____no_output_____
###Markdown
1.2-Train Data
###Code
train_df = train_df.drop('Unnamed: 0', axis=1)
train_df
###Output
_____no_output_____
###Markdown
2-Data Conversion-Training Data 2-1-Separate target feature
###Code
X_trainFeature = train_df.drop('loan_status', axis=1)
X_trainFeature
###Output
_____no_output_____
###Markdown
2-2-Convert categorical data to numeric
###Code
X_train = pd.get_dummies(X_trainFeature)
X_train.head()
###Output
_____no_output_____
###Markdown
2-3-output labels
###Code
y_train = train_df["loan_status"]
y_train
###Output
_____no_output_____
###Markdown
3-Data Conversion-Test Data 3-1-Separate target feature
###Code
X_testFeature = test_df.drop('loan_status', axis=1)
X_testFeature
###Output
_____no_output_____
###Markdown
3-2-Convert categorical data to numeric.
###Code
X_test = pd.get_dummies(X_testFeature)
X_test.head()
###Output
_____no_output_____
###Markdown
3-3-Find Missing Column
###Code
# a single pass over the training columns is enough to find any column missing from the test set
for trainColumn in X_train.columns:
    if trainColumn not in X_test.columns:
        print(trainColumn)
###Output
debt_settlement_flag_Y
###Markdown
3-4-Add missing dummy variables to testing set
###Code
X_test["debt_settlement_flag_Y"]=0
###Output
_____no_output_____
###Markdown
3-5: output labels
###Code
# y_test_label = LabelEncoder().fit_transform(test_df['loan_status'])
y_test = test_df['loan_status']
y_test
###Output
_____no_output_____
###Markdown
4 Train the Logistic Regression model on the unscaled data and print the model score 4.1-Train the Logistic Regression model on the unscaled data
###Code
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
###Output
_____no_output_____
###Markdown
4.2 Fit (train) our model by using the training data
###Code
classifier.fit(X_train, y_train)
# X_test_dummies
###Output
/Users/anilzown/opt/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
4.3 Validate the model by using the test data
###Code
# print(f"Training Data Score: {classifier.score(X_train, y_train)}")
# print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# X_dummies
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test,y_test)}")
# print(X_test_dummies.head())
###Output
Training Data Score: 0.6572249589490968
Testing Data Score: 0.5208421948107188
###Markdown
5 Train Random Forest Classifier model and print the model score
###Code
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6631220757124627
###Markdown
6 Scale the data
###Code
# Scaling the X data by using StandardScaler()
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
X_train_scaled
X_test_scaled
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
6-1-Train the Logistic Regression model on the scaled data and print the Model report
###Code
classifier = LogisticRegression()
# classifier
classifier.fit(X_train_scaled, y_train)
y_pred = classifier.predict(X_test_scaled)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
high_risk 0.86 0.53 0.66 2351
low_risk 0.66 0.91 0.77 2351
accuracy 0.72 4702
macro avg 0.76 0.72 0.71 4702
weighted avg 0.76 0.72 0.71 4702
###Markdown
6-2-Print the model score
###Code
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
###Output
Training Data Score: 0.7130541871921182
Testing Data Score: 0.7216078264568269
###Markdown
7-Train a Random Forest Classifier model on the scaled data and print the model score
###Code
# Fit a model, and then print a classification report
clf = RandomForestClassifier(random_state=1).fit(X_train_scaled, y_train)
clf.fit(X_train_scaled, y_train)
y_pred = clf.predict(X_test_scaled)
print(classification_report(y_test, y_pred))
print(f"Training Data Score: {clf.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {clf.score(X_test_scaled, y_test)}")
###Output
precision recall f1-score support
high_risk 0.75 0.51 0.60 2351
low_risk 0.63 0.83 0.72 2351
accuracy 0.67 4702
macro avg 0.69 0.67 0.66 4702
weighted avg 0.69 0.67 0.66 4702
Training Data Score: 1.0
Testing Data Score: 0.6688643130582731
###Markdown
Prediction: Random Forest might perform better than Logistic Regression because random forest is usually good with complex data such as the one we have, because it has a lot of features.
###Code
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df.head()
train_df.info()
test_df.head()
test_df.dropna()
#convert target column to numeric
test_df["loan_status"] = test_df["loan_status"].map({'high_risk': 1, 'low_risk': 0})
train_df["loan_status"] = train_df["loan_status"].map({'high_risk': 1, 'low_risk': 0})
train_df['loan_status'].unique()
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for training data
###Code
train_df2 = pd.get_dummies(train_df)
train_df2
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for testing data
###Code
test_df2 = pd.get_dummies(test_df)
test_df2['debt_settlement_flag_Y'] = 0
test_df2.head()
y_train = train_df2['loan_status']
X_train = train_df2.drop(['loan_status'], axis=1)
y_test = test_df2['loan_status']
X_test = test_df2.drop(['loan_status'], axis=1 )
print(X_train)
print(y_train)
print(X_test)
print(y_test)
print(f"Train Shape: {X_train.shape}, {y_train.shape}")
print(f"Test Shape: {X_test.shape},{ y_test.shape}")
# Train the Logistic Regression model on the unscaled data and print the model score
modelLR= LogisticRegression(max_iter=2000)
modelLR.fit(X_train, y_train)
print(f"Training Data Score (Logistic): {modelLR.score(X_train, y_train)}")
print(f"Testing Data Score (Logistic): {modelLR.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
modelRFC = RandomForestClassifier(random_state=1, n_estimators=1000)
modelRFC.fit(X_train, y_train)
print(f"Training Data Score (RandomForest): {modelRFC.score(X_train, y_train)}")
print(f"Testing Data Score (RandomForest): {modelRFC.score(X_test, y_test)}")
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
modelLRS = LogisticRegression(max_iter=2000)
modelLRS.fit(X_train_scaled, y_train)
print(f"Training Data Score (Scaled Logistic): {modelLRS.score(X_train_scaled, y_train)}")
print(f"Testing Data Score (Scaled Logistic): {modelLRS.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
modelRFC = RandomForestClassifier(random_state=1, n_estimators=1000)
modelRFC.fit(X_train_scaled, y_train)
print(f"Training Data Score (Scaled RandomForest): {modelRFC.score(X_train_scaled, y_train)}")
print(f"Testing Data Score (Scaled RandomForest): {modelRFC.score(X_test_scaled, y_test)}")
print(f"Training Data Score (Unscaled Logistic): {modelLR.score(X_train, y_train)}")
print(f"Testing Data Score (Unscaled Logistic): {modelLR.score(X_test, y_test)}")
print(f"-----")
print(f"Training Data Score (Scaled Logistic): {modelLRS.score(X_train_scaled, y_train)}")
print(f"Testing Data Score (Scaled Logistic): {modelLRS.score(X_test_scaled, y_test)}")
print(f"-----")
print(f"Training Data Score (Unscaled RandomForest): {modelRFC.score(X_train, y_train)}")
print(f"Testing Data Score (Unscaled RandomForest): {modelRFC.score(X_test, y_test)}")
print(f"-----")
print(f"Training Data Score (Scaled RandomForest): {modelRFC.score(X_train_scaled, y_train)}")
print(f"Testing Data Score (Scaled RandomForest): {modelRFC.score(X_test_scaled, y_test)}")
###Output
Training Data Score (Unscaled Logistic): 0.6972085385878489
Testing Data Score (Unscaled Logistic): 0.5716716290940026
-----
Training Data Score (Scaled Logistic): 0.7127257799671592
Testing Data Score (Scaled Logistic): 0.7201190982560612
-----
Training Data Score (Unscaled RandomForest): 0.49712643678160917
Testing Data Score (Unscaled RandomForest): 0.4293917481922586
-----
Training Data Score (Scaled RandomForest): 1.0
Testing Data Score (Scaled RandomForest): 0.6069757549978733
###Markdown
Data Clean Up
###Code
train_df["loan_status"].value_counts()
y_train1=train_df["loan_status"] #defining label
y_train1
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for training data
###Code
y_train = LabelEncoder().fit_transform(y_train1)
y_train
###Output
_____no_output_____
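###Markdown
The 0/1 assignment noted in the next cell is simply `LabelEncoder` sorting the class strings alphabetically, so 'high_risk' maps to 0 and 'low_risk' maps to 1. A minimal sketch:
###Code
# Sketch: LabelEncoder orders classes alphabetically before assigning integers.
from sklearn.preprocessing import LabelEncoder

le_demo = LabelEncoder().fit(["low_risk", "high_risk", "low_risk"])
print(le_demo.classes_)                              # ['high_risk' 'low_risk']
print(le_demo.transform(["high_risk", "low_risk"]))  # [0 1]
###Output
_____no_output_____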
###Markdown
lower risk is 1, high risk is 0 in this data
###Code
# drop some columns
clean_traindf=train_df.drop(['loan_status','Unnamed: 0','index'], axis=1)
clean_traindf.head()
# Convert categorical data to numeric and separate target feature for training data
X_train=pd.get_dummies(clean_traindf)
print(X_train)
clm=X_train.columns
clm
print(X_train.columns)
X_train.head()
print(X_train.shape, y_train.shape)
test_df.head()
#defining y variable
y_test1=test_df["loan_status"] #defining label
y_test1
# Convert categorical data to numeric and separate target feature for testing data
y_test = LabelEncoder().fit_transform(y_test1)
y_test
# drop some columns
clean_testdf=test_df.drop(['loan_status','Unnamed: 0','index'], axis=1)
clean_testdf.head()
clean_testdf.dropna(inplace=True)
#test dataset, converting values
X_test=pd.get_dummies(clean_testdf)
print(X_test.columns)
X_test.head()
print(X_test.shape, y_test.shape)
#find missing columns
column=set(X_train.columns)-set(X_test.columns)
column
X_test.insert(91,'debt_settlement_flag_Y',0)
X_train.debt_settlement_flag_Y.value_counts()
print(X_test.shape,y_test.shape)
# add missing dummy variables to testing set
###Output
_____no_output_____
###Markdown
Dropping the missing column from the train data which was added to the testing data in the previous code
###Code
X_train.drop(['debt_settlement_flag_Y'], axis=1, inplace=True )
X_train.head()
X_test.drop(['debt_settlement_flag_Y'], axis=1, inplace=True )
X_test.head()
###Output
_____no_output_____
###Markdown
Prediction: I think random forrest will be better predictor. Logistic regression unscaled data
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier=LogisticRegression(max_iter=2000,solver = 'lbfgs')
classifier
classifier.fit(X_train, y_train)
print(f"Logistic Regression unscaled data")
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
###Output
Logistic Regression unscaled data
Training Data Score: 0.694991789819376
Testing Data Score: 0.5735857082092727
###Markdown
Random Forest Classifier
###Code
# Train a Random Forest Classifier model and print the model score
rfc= RandomForestClassifier(random_state=1, n_estimators=500)
rfc
rfc.fit(X_train, y_train)
print(f"Random Forest Classifier unscaled data")
print(f'Training Score: {rfc.score(X_train, y_train)}')
print(f'Testing Score: {rfc.score(X_test, y_test)}')
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaleddata = scaler.transform(X_train)
X_test_scaleddata= scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Logistic Regression with Scaled Data
###Code
# Train the Logistic Regression model on the scaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier=LogisticRegression(max_iter=2000,solver = 'lbfgs')
classifier
classifier.fit(X_train_scaleddata, y_train)
print(f"Logistic Regression scaled data")
print(f"Training Data Score: {classifier.score(X_train_scaleddata, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaleddata, y_test)}")
###Output
Logistic Regression scaled data
Training Data Score: 0.7079638752052545
Testing Data Score: 0.7677584006805614
###Markdown
Random Forest with Scaled Data
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
rfc = RandomForestClassifier(n_estimators=100, random_state=1)
rfc.fit(X_train_scaleddata, y_train)
print(f"Random Forest Classifier scaled data")
print(f'Training Score: {rfc.score(X_train_scaleddata, y_train)}')
print(f'Testing Score: {rfc.score(X_test_scaleddata, y_test)}')
###Output
Random Forest Classifier scaled data
Training Score: 1.0
Testing Score: 0.6401531263292216
###Markdown
Convert categorical data to numeric and separate target feature for TRAINING data. Index and unnamed columns can be dropped or made into the index
###Code
from sklearn.preprocessing import StandardScaler, MinMaxScaler, LabelEncoder
# Create the X feature vector (train data), dropping the label feature and index column
X_train = train_df.drop(columns=["loan_status", "index"])
X_train.head()
# Display the binary text features to be used with label encoder (avoid making extra features/columns)
X_train[["application_type", "debt_settlement_flag","hardship_flag","initial_list_status","pymnt_plan"]].head()
# Display the count of unique values in these text data features
X_train[["application_type", "debt_settlement_flag","hardship_flag","initial_list_status","pymnt_plan"]].nunique()
# Drop pymnt_plan column which has only a single value
X_train = X_train.drop('pymnt_plan', axis=1)
X_train.head()
# Use Label Encoder to preprocess binary text features (convert to numeric without additional columns)
le = LabelEncoder()
X_train["application_type"] = le.fit_transform(X_train["application_type"])
print("Label encoder for application_type: ", le.classes_)
X_train["debt_settlement_flag"] = le.fit_transform(X_train["debt_settlement_flag"])
print("Label encoder for deb_settlement_flag: ", le.classes_)
X_train["hardship_flag"] = le.fit_transform(X_train["hardship_flag"])
print("Label encoder for hardship_flag: ", le.classes_)
X_train["initial_list_status"] = le.fit_transform(X_train["initial_list_status"])
print("Label encoder for initial_list_status: ", le.classes_)
# Display the features with binary data after label encoding
X_train[["application_type", "debt_settlement_flag","hardship_flag","initial_list_status"]].head()
# Use pd.get_dummies for all other categorical features (verification_status, home_ownership)
X_train = pd.get_dummies(X_train, drop_first=True)
print(X_train.columns)
X_train.head()
# Create the y-label (train data)
# Convert output labels to 0 and 1 with label encoder
y_train = le.fit_transform(train_df['loan_status'])
print("Label Encoder classes (train): ", le.classes_)
print("y_train (train): ", y_train)
###Output
Label Encoder classes (train): ['high_risk' 'low_risk']
y_train (train): [1 1 1 ... 0 0 0]
###Markdown
Convert categorical data to numeric and separate target feature for TESTING data
###Code
# Create the X feature vector (test data)
X_test = test_df.drop(columns=['loan_status', 'index'])
# Use Label Encoder to preprocess binary text features
le = LabelEncoder()
X_test["application_type"] = le.fit_transform(X_test["application_type"])
print("Label encoder for application_type: ", le.classes_)
X_test["debt_settlement_flag"] = le.fit_transform(X_test["debt_settlement_flag"])
print("Label encoder for deb_settlement_flag: ", le.classes_)
X_test["hardship_flag"] = le.fit_transform(X_test["hardship_flag"])
print("Label encoder for hardship_flag: ", le.classes_)
X_test["initial_list_status"] = le.fit_transform(X_test["initial_list_status"])
print("Label encoder for initial_list_status: ", le.classes_)
X_test["pymnt_plan"] = le.fit_transform(X_test["pymnt_plan"])
print("Label encoder for pymnt_plan: ", le.classes_)
# Drop pymnt_plan since it only has one value, same as training data
X_test = X_test.drop('pymnt_plan', axis=1)
# Use pd.dummies for all other categorical features
X_test = pd.get_dummies(X_test, drop_first=True)
print(X_test.columns)
X_test.head()
# Create the y-label (test data)
# Convert output labels to 0 and 1
y_test = le.fit_transform(test_df['loan_status'])
print("Label Encoder classes (test): ", le.classes_)
print("y_test (test): ", y_test)
# add missing dummy variables to testing set
## NOTE: Shapes for (X_train, y_train) are currently compatible with (X_test, y_test) data frames
## The column names of the binary text features are all accounted for.
print("X_train shape", X_train.shape)
print("y_train shape", y_train.shape)
print("X_test shape", X_test.shape)
print("y_test shape", y_test.shape)
###Output
X_train shape (12180, 85)
y_train shape (12180,)
X_test shape (4702, 85)
y_test shape (4702,)
###Markdown
Create, train, score LogisticRegression model on unscaled data
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f"LR Training Data Score: {classifier.score(X_train, y_train)}")
print(f"LR Testing Data Score: {classifier.score(X_test, y_test)}")
###Output
Training Data Score: 0.649671592775041
Testing Data Score: 0.5159506592939175
###Markdown
Create, train, score RandomForestClassifier model on unscaled data
###Code
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=10).fit(X_train, y_train)
print(f'RFC Training Score: {clf.score(X_train, y_train)}')
print(f'RFC Testing Score: {clf.score(X_test, y_test)}')
###Output
RFC Training Score: 0.9898193760262726
RFC Testing Score: 0.6112292641429179
###Markdown
Scale the data with StandardScaler
###Code
# Scaling the X data by using StandardScaler()
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_train_scaled
# Transforming the test dataset based on the fit from the training dataset
X_test_scaled = scaler.transform(X_test)
X_test_scaled
# Train the Logistic Regression model on the scaled data and print the model score
scaled_classifier = LogisticRegression()
scaled_classifier.fit(X_train_scaled, y_train)
print(f"Scaled LR Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Scaled LR Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
scaled_clf = RandomForestClassifier(random_state=1, n_estimators=10).fit(X_train_scaled, y_train)
print(f'Scaled RFC Training Score: {scaled_clf.score(X_train_scaled, y_train)}')
print(f'Scaled RFC Testing Score: {scaled_clf.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score (trees=3)
scaled_clf = RandomForestClassifier(random_state=1, n_estimators=3).fit(X_train_scaled, y_train)
print(f'Scaled RFC Training Score: {scaled_clf.score(X_train_scaled, y_train)}')
print(f'Scaled RFC Testing Score: {scaled_clf.score(X_test_scaled, y_test)}')
###Output
Scaled RFC Training Score: 0.9442528735632184
Scaled RFC Testing Score: 0.5935772011909826
###Markdown
Before you create, fit, and score the models, make a prediction as to which model you think will perform better. I think the random forest will perform better because I think it will be able to better drill down which features are more predictive out of the large number of features. It will be able to ignore the "noise" variables better.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(train_dummies_data, y_train_label)
print(f"Training Data Score: {classifier.score(train_dummies_data, y_train_label)}")
print(f'Testing Score: {classifier.score(test_dummies_data, y_test_label )}')
# Train a Random Forest Classifier model and print the model score
classifier = RandomForestClassifier(random_state=1, n_estimators=500).fit(train_dummies_data, y_train_label)
print(f"Training Data Score: {classifier.score(train_dummies_data, y_train_label)}")
print(f'Testing Score: {classifier.score(test_dummies_data, y_test_label )}')
###Output
Training Data Score: 1.0
Testing Score: 0.646958740961293
###Markdown
The data going into these models was never scaled, an important step in preprocessing. Use StandardScaler to scale the training and testing sets. Before re-fitting the LogisticRegression and RandomForestClassifier models on the scaled data, make another prediction about how you think scaling will affect the accuracy of the models. Write your predictions down and provide justification. I think scaling the data will help the models perform better, as several of the features are on much larger or smaller scales, which would affect the weighting of those features more during the training process
###Code
# Scale the data
scaler = StandardScaler().fit(train_dummies_data)
X_train_scaled = scaler.transform(train_dummies_data)
X_test_scaled = scaler.transform(test_dummies_data)
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train_label)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train_label)}")
print(f'Testing Score: {classifier.score(X_test_scaled, y_test_label)}')
y_pred = classifier.predict(X_test_scaled)
print(confusion_matrix(y_test_label, y_pred))
print(classification_report(y_test_label, y_pred))
# Train a Random Forest Classifier model on the scaled data and print the model score
Rclassifier = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train_label)
print(f"Training Data Score: {Rclassifier.score(X_train_scaled, y_train_label)}")
print(f'Testing Score: {Rclassifier.score(X_test_scaled, y_test_label)}')
y_pred = Rclassifier.predict(X_test_scaled)
print(confusion_matrix(y_test_label, y_pred))
print(classification_report(y_test_label, y_pred))
features = Rclassifier.feature_importances_  # importances come from the random forest, not the logistic regression
print(features)
plt.bar(x = range(len(features)), height=features)
plt.show()
###Output
[1.64175246e-02 3.36047005e-02 3.07032294e-02 1.42239163e-02
1.54887781e-02 2.97922850e-03 4.27051795e-03 8.55840612e-03
1.33532005e-03 1.51577257e-02 1.16142472e-02 2.95307776e-02
2.99764667e-02 4.53332754e-02 4.48443196e-02 5.25173382e-02
4.98329797e-02 1.62671260e-02 0.00000000e+00 0.00000000e+00
9.97444637e-02 6.56760511e-04 0.00000000e+00 0.00000000e+00
4.27868968e-03 1.34563223e-02 4.63160385e-03 6.58760719e-03
4.03294385e-03 6.17692505e-03 1.18528253e-02 1.31099214e-02
1.40812113e-02 5.12304140e-03 7.78349300e-03 1.53878110e-02
1.33159958e-02 1.54529099e-02 5.98482161e-03 6.83047428e-03
7.85674268e-03 9.78167759e-03 1.39355711e-02 1.55873811e-02
1.40840046e-02 2.93482835e-04 0.00000000e+00 1.54613564e-02
1.62359542e-02 1.18244670e-02 1.00680524e-02 5.67200574e-03
1.29851695e-02 1.18672718e-02 3.62464457e-03 7.08920963e-03
7.86624182e-03 7.36513040e-03 8.69878830e-03 1.01283162e-02
8.34738211e-03 9.97869361e-03 7.97169822e-03 8.66230546e-03
0.00000000e+00 0.00000000e+00 1.00781070e-03 6.84643078e-03
9.25322619e-03 7.66253578e-03 1.45510549e-03 0.00000000e+00
1.44468755e-02 1.36302861e-02 1.50933062e-02 1.35098313e-02
1.80288820e-04 1.74281622e-03 1.64178137e-03 1.72083212e-03
2.02890909e-03 1.91467927e-03 1.59709624e-03 0.00000000e+00
1.07864320e-03 1.08026699e-03 1.42818911e-03 1.40438753e-03
2.22878863e-03 2.43714863e-03 3.88939360e-05 4.26246803e-05]
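###Markdown
The bare importance array above is hard to read. A common idiom, sketched here on synthetic data rather than the loan features, is to pair the importances with their column names in a Series and sort, so the dominant features are explicit.
###Code
# Sketch: ranking feature importances by name (synthetic data only).
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

Xf, yf = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=7)
cols = [f"feature_{i}" for i in range(Xf.shape[1])]
rf_demo = RandomForestClassifier(random_state=7).fit(Xf, yf)

ranked = pd.Series(rf_demo.feature_importances_, index=cols).sort_values(ascending=False)
print(ranked.head())  # the informative features should dominate the ranking
###Output
_____no_output_____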
###Markdown
Prediction: I predict that the random forest classifier will be a better model as it tends to provide more accuracy
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
# Train the Logistic Regression model on the unscaled data and print the model score
clf = LogisticRegression().fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.48915355168013613
###Markdown
Result: RFC has a much better result than logistic regression, as RFC provides a more accurate outcome
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Prediction: For scaled data, the logistic regression model will perform better than RFC, as it is sensitive to the range of the data points
###Code
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.48915355168013613
###Markdown
In this comparison of Logistic Regression vs Random Forest Classifier, I expect the Random Forest model to be more accurate. In this model, there are multiple variables that are equally important in making a decision. With so many variables, I don't expect that we will have the clear and separate classes that would allow the logistic regression model to be more accurate. I think our predictions will be more accurate if we take a random sampling across the full data set and determine what variables are most favorable to approving a loan.
###Code
y_label = LabelEncoder().fit_transform(X['tot_coll_amt'])
X = X.drop('tot_coll_amt',axis=1)
classifier = LogisticRegression()
# Train the Logistic Regression model on the unscaled data and print the model score
X_train, X_test, y_train, y_test = train_test_split(X, y_label, random_state=34)
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
# Train a Random Forest Classifier model and print the model score
shallow_rf = RandomForestClassifier(max_depth=10)
clf = shallow_rf.fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
###Output
Training Score: 0.8524356869184455
###Markdown
As expected, despite the random sampling needed for the RFC model given memory issues, the Random Forest model performed marginally better than the Logistic Regression model. For the scaled data, since the RFC model data size is limited by memory performance issues, I expect the Logistic Regression model to perform better because it has access to the full dataset. Given the method used by the RFC model, the memory requirement is much larger than that needed for Logistic Regression, requiring the model to limit the tree depth, and therefore the accuracy, on the training data.
For both, I do expect the results to be more accurate than the unscaled results, as scaling the data should magnify the relationships between data points.
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_train_scaled
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = shallow_rf.fit(X_train_scaled, y_train)  # fit on the scaled features so the scaled score below is meaningful
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
###Output
Training Score: 0.8509031198686371
###Markdown
I predict that the Random Forest Classifier model will perform better than the Logistic Regression model for this dataset. The reason is that there are many features (86 columns) to look at in the dataset. The Random Forest Classifier model is better at dealing with datasets with more complexity and randomness.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"LogisticRegression Training Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
print(f'RandomForestClassifier Training Data Score: {clf.score(X_test, y_test)}')
###Output
RandomForestClassifier Training Data Score: 0.6405784772437261
###Markdown
It turns out that the test accuracy for LogisticRegression is 0.5253083794130158 compared to 0.6405784772437261 for the Random Forest Classifier. My prediction was correct that the Random Forest Classifier performs better for this dataset.
###Code
# Prediction: after scaling, I expect both models to perform better.
# Without scaling the features, the algorithm may be biased towards the features with large values.
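# (Illustrative aside, not part of the original notebook; _scale_demo is made-up example data.)
# StandardScaler subtracts each column's mean and divides by its standard deviation,
# so a feature with large raw values (e.g. a loan amount) no longer dominates one
# with small values (e.g. an interest rate).
import pandas as pd
from sklearn.preprocessing import StandardScaler
_scale_demo = pd.DataFrame({"loan_amnt": [1000.0, 20000.0, 35000.0],
                            "int_rate": [0.07, 0.13, 0.21]})
print(StandardScaler().fit_transform(_scale_demo))  # each column now has mean 0 and unit variance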
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train_scaled, y_train)
print(f"LogisticRegression Training Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1).fit(X_train_scaled, y_train)
print(f'RandomForestClassifier Training Data Score: {clf.score(X_test_scaled, y_test)}')
###Output
RandomForestClassifier Training Data Score: 0.6418545299872395
###Markdown
Import dependencies
###Code
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
###Output
_____no_output_____
###Markdown
Import CSV files
###Code
from google.colab import files
uploaded = files.upload()
import io
train_df = pd.read_csv(io.BytesIO(uploaded['2019loans.csv']))
# Dataset is now stored in a Pandas Dataframe
from google.colab import files
uploaded = files.upload()
import io
test_df = pd.read_csv(io.BytesIO(uploaded['2020Q1loans.csv']))
# Dataset is now stored in a Pandas Dataframe
###Output
_____no_output_____
###Markdown
Verify Imported Data
###Code
test_df.count()
train_df.count()
###Output
_____no_output_____
###Markdown
Too much data for a visual verification Clean Data Drop Null Rows and Columns if they exist
###Code
# Clean data to remove null columns and rows from test and train data
# Drop the null columns where all values are null
test_df = test_df.dropna(axis='columns', how='all')
train_df = train_df.dropna(axis='columns', how='all')
# Drop the null rows
test_df = test_df.dropna()
train_df = train_df.dropna()
# same count means there were no null rows or columns
test_df.count()
# same count means there were no null rows or columns
train_df.count()
###Output
_____no_output_____
###Markdown
The count of both datasets remained the same. There must not have been any Null values in the data
###Code
test_df.head(10)
train_df.head(10)
###Output
_____no_output_____
###Markdown
Find inconsistent data
###Code
train_df=pd.get_dummies(train_df)
test_df=pd.get_dummies(test_df)
# get number of columns to compare with test data
print(len(train_df.columns))
# get number of columns to compare with train data
print(len(test_df.columns))
###Output
93
###Markdown
The column counts are not the same, but which column is missing?
###Code
# put column names in dataframe
Train_Col = train_df.columns
Test_Col = test_df.columns
# added title to column
Train_Col =pd.DataFrame(Train_Col,columns=['labels'])
Test_Col =pd.DataFrame(Test_Col,columns=['labels'])
Train_Col
Test_Col
#convert data to strings
Train_Col.labels.apply(str)
Test_Col.labels.apply(str)
# Determine which data doesn't match
# "debt_settlement_flag_Y" is not in the Test dataset
Missing_Col = Train_Col.merge(Test_Col, how='left', indicator=True)
Missing_Col[Missing_Col['_merge'] == 'left_only'][['labels']]
###Output
_____no_output_____
###Markdown
Add this data to test data before analyzing data Import files again for analysis
###Code
from google.colab import files
uploaded = files.upload()
import io
train_df = pd.read_csv(io.BytesIO(uploaded['2019loans.csv']))
# Dataset is now stored in a Pandas Dataframe
from google.colab import files
uploaded = files.upload()
import io
test_df = pd.read_csv(io.BytesIO(uploaded['2020Q1loans.csv']))
# Dataset is now stored in a Pandas Dataframe
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for training data
###Code
# reduced data size to only consider binary fields since logistic regression is better with smaller datasets
X_train = train_df.drop(['target'], axis=1)
X_train = pd.get_dummies(X_train, columns=["home_ownership","verification_status",
"pymnt_plan","initial_list_status","debt_settlement_flag",
"application_type","hardship_flag"], drop_first = True)
y_train = LabelEncoder().fit_transform(train_df['target'])
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for testing data
###Code
# reduced data size to only consider binary fields
X_test = test_df.drop(['target'], axis=1)
X_test = pd.get_dummies(X_test, columns=["home_ownership","verification_status",
"pymnt_plan","initial_list_status","debt_settlement_flag",
"application_type","hardship_flag"], drop_first = True)
y_test = LabelEncoder().fit_transform(test_df['target'])
###Output
_____no_output_____
###Markdown
Add missing dummy variables to testing set
###Code
# add missing variables to test dataset to avoid errors when analyzing
X_test['debt_settlement_flag_Y']=0
X_test.head()
###Output
_____no_output_____
###Markdown
Prediction - Which model do you think will perform better? I expect the Random Forest Classifier to yield a more accurate percentage than the Logistic Regression model. The Random Forest Classifier uses many decision trees when determining the probability. Train the Logistic Regression model on the unscaled data and print the model score
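A hedged aside on the "many decision trees" point, using synthetic data rather than this notebook's variables (the `X_demo`/`y_demo` names below are made up): scikit-learn's random forest averages the class probabilities of its individual trees, so a larger `n_estimators` mainly gives a smoother, more stable probability estimate.
```python
# Minimal sketch: a RandomForestClassifier's predicted probability is the mean
# of its trees' probabilities (X_demo/y_demo are synthetic example data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_demo, y_demo = make_classification(n_samples=200, random_state=0)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_demo, y_demo)
tree_avg = np.mean([tree.predict_proba(X_demo[:5]) for tree in forest.estimators_], axis=0)
print(np.allclose(tree_avg, forest.predict_proba(X_demo[:5])))  # True
```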
###Code
#logistic regression unscaled
classifier = LogisticRegression(max_iter=10000)
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
###Output
Training Data Score: 0.7025451559934318
Testing Data Score: 0.5661420672054445
###Markdown
Train a Random Forest Classifier model on unscaled data and print the model score
###Code
# Random Forest Classifer Unscaled
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
###Markdown
Prediction - Which model do you think will perform better? I expect the Random Forest Classifier to yield a more accurate percentage than the Logistic Regression model. The Random Forest Classifier uses many decision trees when determining the probability. Now that the data is scaled, the Logistic Regression model may perform better, but I still choose the Random Forest Classifier. Scale the data
###Code
# scaled data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/base.py:488: FutureWarning: The feature names should match those that were passed during fit. Starting version 1.2, an error will be raised.
Feature names must be in the same order as they were in fit.
warnings.warn(message, FutureWarning)
###Markdown
Train the Logistic Regression model on the scaled data and print the model score
###Code
#logistic regression scaled
classifier = LogisticRegression(max_iter=10000)
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
###Output
Training Data Score: 0.7108374384236453
Testing Data Score: 0.7279880901743939
###Markdown
Train a Random Forest Classifier model on the scaled data and print the model score
###Code
# random forest classifer Scaled
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6463207145895363
###Markdown
Post Analysis How do the model scores compare to each other? * I would say there is an 8-10% difference in the accuracy of the unscaled data. The Random Forest Classifier was able to produce a more accurate prediction model than the Logistic Regression. I would say there is a 10-12% difference in the accuracy of the scaled data. The Logistic Regression model was able to produce a more accurate prediction model than the Random Forest Classifier. How do the model scores compare to the previous results on unscaled data?* The Logistic Regression model on the scaled data improved its accuracy by over 15%, whereas the Random Forest Classifier's accuracy essentially stayed the same whether the data is scaled or not. Logistic regression models seem to perform more accurately with scaled data. How does this compare to your prediction? * I assumed the Random Forest Classifier would perform better because of how it analyzes the data in comparison to the Logistic Regression model. It seems that the Random Forest Classifier's accuracy wasn't affected by scaling the data.
###Code
# Convert categorical data to numeric and separate target feature for training data
# Convert categorical data to numeric and separate target feature for testing data
# add missing dummy variables to testing set
# Train the Logistic Regression model on the unscaled data and print the model score
# Train a Random Forest Classifier model and print the model score
# Scale the data
# Train the Logistic Regression model on the scaled data and print the model score
# Train a Random Forest Classifier model on the scaled data and print the model score
###Output
_____no_output_____
###Markdown
Cleaning data
###Code
pd.options.display.min_rows = 90
train_df.isnull().sum()
# Convert categorical data to numeric and separate target feature for training data
X = train_df.drop('loan_status', axis=1)
X.head(5)
x_train = pd.get_dummies(X)
x_train.head(5)
x_train.info()
y_train = LabelEncoder().fit_transform(train_df['loan_status'])
y_train
#Test Data process
#Convert categorical data to numeric and separate target feature for testing data
test_df['loan_status'].unique()
print (test_df.isnull().sum())
Xtest = test_df.drop('loan_status', axis=1)
x_test = pd.get_dummies(Xtest)
y_test = LabelEncoder().fit_transform(test_df['loan_status'])
y_test
# add missing dummy variables to testing set
missing_cols = set( x_train.columns ) - set( x_test.columns )
missing_cols
for c in missing_cols:
x_test[c] = 0
x_test.head()
y_test[1:10]
###Output
_____no_output_____
###Markdown
Predict which model will be best and why
###Code
plt.scatter(train_df['annual_inc'], train_df['loan_status'])
#I think Random Forest Classifier will be better because the data has a lot of categorical data
#and logistic regression tends not to do as well with this type of data.
#Also I checked one of the numeric variables, annual income, thinking that the more income one has, the lower the risk,
#but clearly more factors will be needed to make this decision
#Models with NOT scaled data
train_df.shape
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model on unscaled data and print the model score
###Code
x_train.shape
x_test.shape
clf = LogisticRegression(max_iter=200000).fit(x_train, y_train)
print(f'Training Score: {clf.score(x_train, y_train)}')
print(f'Testing Score: {clf.score(x_test, y_test)}')
#This model is not doing so well. Maybe when the data is scaled, as the convergence warning suggests,
#the logistic regression model can perform better and the test data could also do better.
###Output
_____no_output_____
###Markdown
Train a Random Forest Classifier model on unscaled data and print the model score
###Code
clf = RandomForestClassifier(random_state=1).fit(x_train, y_train)
print(f'Training Score: {clf.score(x_train, y_train)}')
print(f'Testing Score: {clf.score(x_test, y_test)}')
y_pred = clf.predict(x_test)
y_pred
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
#This model is probably overfitting: a perfect score on the training data means the model adapted too closely to it,
#so when the testing data differs even slightly the model doesn't do as well, as seen here with roughly 64% accuracy.
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model using scaled data and print the model score
###Code
scaler = StandardScaler().fit(x_train)
X_train_scaled = scaler.transform(x_train)
X_test_scaled = scaler.transform(x_test)
clf = LogisticRegression(max_iter = 10000).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
#I think the model is doing OK here because the training and testing scores are close to each other
#and not that low; the training score is slightly lower, but not that far off.
###Output
_____no_output_____
###Markdown
Train the Random Forest Classifier model on scaled data and print the model score
###Code
clf = RandomForestClassifier(random_state=1).fit(X_train_scaled, y_train)
y_pred = clf.predict(X_test_scaled)
print(classification_report(y_test, y_pred))
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
#Scaling the data in this model didn't help. It still seems to overfit. We got the same results as the unscaled data.
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model on unscaled and split data and print the model score
###Code
X1_train, X1_test, y1_train, y1_test = train_test_split(x_train, y_train, random_state=1)
clf = LogisticRegression(max_iter=200000).fit(X1_train, y1_train)
print(f'Training Score: {clf.score(X1_train, y1_train)}')
print(f'Testing Score: {clf.score(X1_test, y1_test)}')
#Here the scores are about the same, I would say 70 percent accuracy for both.
###Output
_____no_output_____
###Markdown
Train the Random Forest Classifier model on unscaled and split data and print the model score
###Code
clf = RandomForestClassifier(random_state=1).fit(X1_train, y1_train)
print(f'Training Score: {clf.score(X1_train, y1_train)}')
print(f'Testing Score: {clf.score(X1_test, y1_test)}')
#The model is still overfitting here, but with a better testing score.
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model on scaled and split data and print the model score
###Code
X1_train, X1_test, y1_train, y1_test = train_test_split(x_train, y_train, random_state=1)
scaler = StandardScaler().fit(X1_train)
X_train_scaled = scaler.transform(X1_train)
X_test_scaled = scaler.transform(X1_test)
#Added some different parameters to see if I get a better score. It takes like 15 mins to run this step
logclf = LogisticRegression()
param_grid = [{'penalty':['l1','l2','elasticnet','none'],
'C': np.logspace(-4, 4, 20),
'solver':['lbfgs','newton-cg','liblinear','sag','saga'],
'max_iter':[100,1000,2500,5000]}]
clf = GridSearchCV(logclf, param_grid = param_grid, cv = 3, verbose= True, n_jobs = -1)
bestclf = clf.fit(X_train_scaled, y1_train)
#if it works, print it out
print('done')
print(f'Training Score: {bestclf.score(X_train_scaled, y1_train)}')
print(f'Testing Score from split data: {bestclf.score(X_test_scaled, y1_test)}')
bestclf.best_estimator_
#Not much changed from the regression model on unscaled and split data, but I can see the saga solver gave us a slightly better score
#Try it the same way as before, without changing the solver or any other parameter
clf = LogisticRegression(max_iter = 10000).fit(X_train_scaled, y1_train)
print(f'Training Score: {clf.score(X_train_scaled, y1_train)}')
print(f'Testing Score from split data: {clf.score(X_test_scaled, y1_test)}')
#Try to fit the original testing data from 2020Q in to this new split model
# transform the 2020Q1 test data with the scaler that was fit on the training split
# (re-fitting the scaler on the test set would scale it inconsistently with the model)
X_test_scaled = scaler.transform(x_test)
print(f'Training Score: {clf.score(X_train_scaled, y1_train)}')
print(f'Testing Score from 2020Q data: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 0.705528188286809
Testing Score from 2020Q data: 0.6631220757124627
###Markdown
Train the Random Forest Classifier model on scaled and split data and print the model score
###Code
scaler = StandardScaler().fit(X1_train)
X_train_scaled = scaler.transform(X1_train)
X_test_scaled = scaler.transform(X1_test)
clf1 = RandomForestClassifier(random_state=1).fit(X_train_scaled, y1_train)
y1_pred = clf1.predict(X_test_scaled)
print(classification_report(y1_test, y1_pred))
print(f'Training Score: {clf1.score(X_train_scaled, y1_train)}')
print(f'Testing Score from split data: {clf1.score(X_test_scaled, y1_test)}')
#Predict using the model of the split train scaled data vs 2020Q1 test data
# again reuse the scaler fit on the training split rather than re-fitting it on the test set
X_test_scaled = scaler.transform(x_test)
y1_pred = clf1.predict(X_test_scaled)
print(f'Training Score: {clf1.score(X_train_scaled, y1_train)}')
print(f'Testing Score from 2020Q data: {clf1.score(X_test_scaled, y_test)}')
#This last try didn't do well at all with the 2020Q1 data as a test.
###Output
_____no_output_____
###Markdown
Which Model will do better - Unscaled Data: I think the logistic regression model will do better. I think the unscaled data will throw off the random forest model to a large enough extent that it will impact accuracy. Since the logistic regression model is a linear combination of the underlying variables, the model already scales the variables when it generates model coefficients. Which Model will do better - Scaled Data: I think the Random Forest Classifier will do better because the factors driving lack of debt repayment are likely a non-linear relationship (one individual factor can push someone to be high risk, even if other factors look okay). Logit regressions only capture linear combinations, so I think Random Forests will do better.
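One way to see the "linear combination" point, as a minimal sketch on synthetic data (the `X_demo`/`y_demo` names are made up, not this notebook's variables): a fitted scikit-learn LogisticRegression scores an observation as sigmoid(w·x + b).
```python
# Minimal sketch: logistic regression's probability is a sigmoid applied to a
# linear combination of the features (synthetic data, illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_demo, y_demo = make_classification(n_samples=300, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_demo, y_demo)
linear_part = X_demo @ model.coef_.ravel() + model.intercept_[0]  # w·x + b
manual_proba = 1.0 / (1.0 + np.exp(-linear_part))
print(np.allclose(manual_proba, model.predict_proba(X_demo)[:, 1]))  # True
```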
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(train_X, train_Y)
print(f"Training Data Score: {classifier.score(train_X, train_Y)}")
print(f"Testing Data Score: {classifier.score(test_X, test_Y)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(train_X, train_Y)
print(f"Training Data Score: {classifier.score(train_X, train_Y)}")
print(f"Testing Data Score: {classifier.score(test_X, test_Y)}")
# Scale the data
scaler = StandardScaler().fit(train_X)
train_X_scaled = scaler.transform(train_X)
# use the scaler that was fit on the training data so train and test are scaled consistently
test_X_scaled = scaler.transform(test_X)
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(train_X_scaled, train_Y)
print(f"Training Data Score: {classifier.score(train_X_scaled, train_Y)}")
print(f"Testing Data Score: {classifier.score(test_X_scaled, test_Y)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(train_X_scaled, train_Y)
print(f"Training Data Score: {classifier.score(train_X_scaled, train_Y)}")
print(f"Testing Data Score: {classifier.score(test_X_scaled, test_Y)}")
###Output
Training Data Score: 0.713136288998358
Testing Data Score: 0.670565716716291
###Markdown
Prediction: I think that logistic regression will perform better than a random forest classifier because random forests usually don't do well with unscaled data and a large number of features.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
classifier = LogisticRegression(max_iter=50000)
classifier
classifier.fit(X_train, y_train)
print(f"Training dataset score: {classifier.score(X_train, y_train)}")
print(f"Testing dataset score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=250, random_state=1)
clf.fit(X_train, y_train)
print(f"Training dataset score: {clf.score(X_train, y_train)}")
print(f"Testing dataset score: {clf.score(X_test, y_test)}")
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Prediction: I think that with the data being scaled, both models will improve their performance.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression(max_iter=50000)
# classifier
classifier.fit(X_train_scaled, y_train)
print(f"Training dataset score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing dataset score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(n_estimators=250, random_state=1)
clf.fit(X_train_scaled, y_train)
print(f"Training dataset score: {clf.score(X_train_scaled, y_train)}")
print(f"Testing dataset score: {clf.score(X_test_scaled, y_test)}")
###Output
Training dataset score: 1.0
Testing dataset score: 0.6692896639727776
###Markdown
Prediction: Linear vs Forest You will be creating and comparing two models on this data: a logistic regression, and a random forest classifier. Before you create, fit, and score the models, make a prediction as to which model you think will perform better. 1) I believe that while the random forest classifier model is more of an accuracy-driven algorithm, its efficiency depends on a properly prepared dataset. Since we got the data from generateddata.ipynb and techniques like "Undersampling", "Oversampling" and "SMOTE" were used to fix the imbalanced dataset, logistic regression should be better in precision. 2) If I were working for a financial establishment and giving out loans: finance departments are always under regulatory scrutiny, and a logistic regression model is easier to explain and should perform better overall.
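For reference only (the resampling mentioned above was done upstream when the CSVs were generated, not in this notebook), a minimal sketch of the named techniques on synthetic data, assuming the third-party imbalanced-learn package is installed:
```python
# Minimal sketch of undersampling, oversampling and SMOTE on synthetic data.
# Assumes `pip install imbalanced-learn`; not part of this notebook's pipeline.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler

X_demo, y_demo = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("original:", Counter(y_demo))
for sampler in (RandomUnderSampler(random_state=0),
                RandomOverSampler(random_state=0),
                SMOTE(random_state=0)):
    X_res, y_res = sampler.fit_resample(X_demo, y_demo)
    print(type(sampler).__name__, Counter(y_res))
```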
###Code
# drop unnamed column
test_df = test_df.drop('Unnamed: 0', axis = 1).set_index('index')
#Prepare for testing data conversion
x_test = test_df.drop('loan_status', axis=1)
test_df["loan_status"].value_counts()
# Convert categorical data to numeric and separate target feature for training data
from sklearn.preprocessing import LabelEncoder, StandardScaler
x_train_d = pd.get_dummies(x_train)
#convert to array
y_train_l = LabelEncoder().fit_transform(train_df['loan_status'])
y_train_l
# Convert categorical data to numeric and separate target feature for testing data
x_test_d = pd.get_dummies(x_test)
#convert to array
y_test_l = LabelEncoder().fit_transform(test_df['loan_status'])
y_test_l
# add missing dummy variables to testing set, Set default to 0
missing_d = set(x_train_d.columns) - set(x_test_d.columns)
for x in missing_d:
x_test_d[x] = 0
x_test_d = x_test_d[x_train_d.columns]
x_test_d.head()
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classify_lr = LogisticRegression()
# Use training data into model
classify_lr.fit(x_train_d, y_train_l)
#Print model score
print(f"Training Score: {classify_lr.score(x_train_d, y_train_l)}")
print(f"Testing Score: {classify_lr.score(x_test_d, y_test_l)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
classify_rf = RandomForestClassifier(random_state=1, n_estimators=100).fit(x_train_d, y_train_l)
print(f'Training Score: {classify_rf.score(x_train_d, y_train_l)}')
print(f'Testing Score: {classify_rf.score(x_test_d, y_test_l)}')
# Scale the data
scale = StandardScaler().fit(x_train_d)
#set variables to train and test scale
x_train_scale = scale.transform(x_train_d)
x_test_scale = scale.transform(x_test_d)
###Output
_____no_output_____
###Markdown
Notice that after scaling the data, the logistic regression model's training and testing scores both increased: more accurate and more precise.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
classify_lr = LogisticRegression().fit(x_train_scale, y_train_l)
print(f'Training Score: {classify_lr.score(x_train_scale, y_train_l)}')
print(f'Testing Score: {classify_lr.score(x_test_scale, y_test_l)}')
###Output
Training Score: 0.7078817733990148
Testing Score: 0.767333049766057
###Markdown
After scaling the data, the Random Forest Classifier model's training score remained the same; however, the testing score decreased, making it less precise.
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
classify_rf = RandomForestClassifier(random_state=1, n_estimators=250).fit(x_train_scale, y_train_l)
print(f'Training Score: {classify_rf.score(x_train_scale, y_train_l)}')
print(f'Testing Score: {classify_rf.score(x_test_scale, y_test_l)}')
###Output
Training Score: 1.0
Testing Score: 0.6471714164185453
###Markdown
Predictions: Which model will perform better with the unscaled data? - The values of loan amount and annual income have a wide spread, so in my opinion both models need the data to be scaled. I imagine the random forest classifier will perform better in these conditions due to its ability to determine/weigh the importance of each feature (illustrated in a short aside in the code below). Which will perform better on scaled data? - I think the scaling will impact the logistic regression positively and we will see better accuracy with that model on scaled data. Model Trials:
###Code
# Split the data for training and testing
X_train, X_test, y_train, y_test = train_test_split(X_dummies, y, random_state=42)
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression(solver='liblinear').fit(X_train, y_train)
print(f"LR Training Score: {classifier.score(X_train, y_train)}")
print(f"LR Testing Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
classifier = RandomForestClassifier(random_state=42, n_estimators=500).fit(X_train, y_train)
print(f'RFC Training Score: {classifier.score(X_train, y_train)}')
print(f'RFC Testing Score: {classifier.score(X_test, y_test)}')
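# (Illustrative aside, not part of the original assignment.) Random forests expose
# per-feature importances, which is the "weighting" referred to in the prediction above.
# Assuming X_train is a DataFrame (it comes from pd.get_dummies upstream), the
# top-weighted features can be listed like this:
import pandas as pd
feature_weights = pd.Series(classifier.feature_importances_, index=X_train.columns)
print(feature_weights.sort_values(ascending=False).head(10))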
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression(solver='liblinear').fit(X_train_scaled, y_train)
print(f"LR Scaled Data Training Score: {classifier.score(X_train_scaled, y_train)}")
print(f"LR Scaled Data Testing Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
classifier = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'RFC Scaled Data Training Score: {classifier.score(X_train_scaled, y_train)}')
print(f'RFC Scaled Data Testing Score: {classifier.score(X_test_scaled, y_test)}')
###Output
RFC Scaled Data Training Score: 1.0
RFC Scaled Data Testing Score: 1.0
###Markdown
Because of the large number of features in the dataset, the Logistic Regression model should work better than the Random Forest Classifier. The Random Forest Classifier picks up more noise, i.e., higher true- and false-positive rates, as the number of features in the dataset increases, and in this case the dataset has many features. Therefore, the Logistic Regression would most likely be the better model.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
model = LogisticRegression(max_iter=100, random_state=42)
model.fit(X_train, y_train)
print(f"Training Data Score: {model.score(X_train, y_train)}")
print(f"Testing Data Score: {model.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=42, n_estimators=50).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.641429179072735
###Markdown
Based on the output, the Random Forest Classifier did better than the Logistic Regression model, with a better testing score, which was surprising considering the number of features. With scaling, I anticipate that the Logistic Regression model will significantly improve, but the Random Forest Classifier may still be a better fit. Scaling puts the features on a comparable range, which makes the model easier to train.
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
clf1 = LogisticRegression(random_state=42).fit(X_train_scaled, y_train)
print(f'Training Score: {clf1.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf1.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf1 = RandomForestClassifier(random_state=42, n_estimators=50).fit(X_train_scaled, y_train)
print(f'Training Score: {clf1.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf1.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6403658017864738
###Markdown
Consider the models: After reading some articles about different models, I believe Random Forests will perform better given the number of features this data set has and its ability to weight certain features as more important.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
clf = LogisticRegression().fit(x_train_dummy, y_train)
print(f'training score: {clf.score(x_train_dummy, y_train)}')
print(f'testing score: {clf.score(x_test_dummy, y_test)}')
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier().fit(x_train_dummy, y_train)
print(f'training score: {clf.score(x_train_dummy, y_train)}')
print(f'testing score: {clf.score(x_test_dummy, y_test)}')
###Output
training score: 1.0
testing score: 0.6025095703955764
###Markdown
It appears that the Random Forest model performed better than the Logistic Regression model.
###Code
# Scale the data
scaler = StandardScaler().fit(x_train_dummy)
x_train_scaled = scaler.transform(x_train_dummy)
x_test_scaled = scaler.transform(x_test_dummy)
###Output
_____no_output_____
###Markdown
Prediction: I predict both models will perform better with scaled data.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression().fit(x_train_scaled, y_train)
print(f'training score: {clf.score(x_train_scaled, y_train)}')
print(f'testing score: {clf.score(x_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier().fit(x_train_scaled, y_train)
print(f'training score: {clf.score(x_train_scaled, y_train)}')
print(f'testing score: {clf.score(x_test_scaled, y_test)}')
###Output
training score: 1.0
testing score: 0.6233517652062952
###Markdown
Due to the sheer number of columns to be treated as variables in X, I believe that the logistic regression model will be better for this dataset once it has been scaled. The enormous variety in the different types of information contained in the loan data does not look like it would be reliable without first being scaled.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
cl = classifier.fit(X_train, y_train)
# Printing Training score and Testing score for Logistic Regression
print(f'Training Score: {classifier.score(X_train, y_train)}')
print(f'Testing Score: {classifier.score(X_test, y_test)}')
###Output
Training Score: 0.6511494252873563
Testing Score: 0.5163760102084219
###Markdown
Random Forest Classifier
###Code
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
classifier = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {classifier.score(X_train, y_train)}')
print(f'Testing Score: {classifier.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6433432581880051
###Markdown
Contrary to my prediction, the Random Forest Classifier model had a more accurate testing score than the Logistic Regression model.
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier=LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {classifier.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6420672054444917
###Markdown
Prediction: I think the Random Forest model is going to perform better with this data because of the high number of variables in the data set. Typically, Random Forest performs better as the number of explanatory variables increases, and I think this data contains a high number of explanatory variables.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
#create
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
#fit
classifier.fit(X_train, y_train)
#validate (print)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6180348787749894
###Markdown
Results: The Random Forest model performed better than the Logistic Regression, but neither model produced what would probably be considered an "acceptable" testing score for predicting loan risk. I think using unscaled data has a lot to do with this outcome, because many of the variables (such as income, loan amount, interest rate, etc.) are potentially important in predicting risk, and most likely contain outliers in these data sets.
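If outliers are the main concern, one alternative worth noting (hedged: it is not used anywhere in this notebook) is scikit-learn's RobustScaler, which centers on the median and scales by the interquartile range, so a single extreme value moves the scaled data much less than it does with StandardScaler. A small synthetic illustration:
```python
# Minimal sketch (synthetic data, not this notebook's variables): compare how a
# single extreme income affects StandardScaler vs RobustScaler output.
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

incomes = np.array([[40_000.0], [55_000.0], [60_000.0], [75_000.0], [5_000_000.0]])
print("standard:", StandardScaler().fit_transform(incomes).ravel())
print("robust:  ", RobustScaler().fit_transform(incomes).ravel())
```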
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Prediction: I still think the Random Forest model will perform better than Logistic Regression, but I think the test scores for both models will improve with the scaled data because we are reducing the impact that outliers have on skewing the data.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
classifier_2 = LogisticRegression()
classifier_2.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf_2 = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf_2.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf_2.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6193109315185028
###Markdown
Cleaning up training dataset
###Code
train_df.head(2)
train_df["loan_status"].value_counts()
#defining target variable
y_train1=train_df["loan_status"] #defining label
y_train1
# Convert categorical data to numeric and separate target feature for training data
y_train = LabelEncoder().fit_transform(y_train1)
y_train
# we can see low_risk is 1 and high_risk is 0 in our data
#drop some columns
clean_traindf=train_df.drop(['loan_status','Unnamed: 0','index'], axis=1)
clean_traindf.head(2)
# Convert categorical data to numeric and separate target feature for training data
X_train=pd.get_dummies(clean_traindf)
print(X_train)
clm=X_train.columns
clm
print(X_train.columns)
X_train.head(2)
print(X_train.shape, y_train.shape)
###Output
(12180, 92) (12180,)
###Markdown
Cleaning up test dataset
###Code
test_df.head(2)
#defining y variable
y_test1=test_df["loan_status"] #defining label
y_test1
# Convert categorical data to numeric and separate target feature for testing data
y_test = LabelEncoder().fit_transform(y_test1)
y_test
#drop some columns
clean_testdf=test_df.drop(['loan_status','Unnamed: 0','index'], axis=1)
clean_testdf.head(2)
clean_testdf.dropna(inplace=True)
#test dataset, converting values
X_test=pd.get_dummies(clean_testdf)
print(X_test.columns)
X_test.head(2)
print(X_test.shape, y_test.shape)
#find missing columns
column=set(X_train.columns)-set(X_test.columns)
column
X_test.insert(91,'debt_settlement_flag_Y',0)
X_train.debt_settlement_flag_Y.value_counts()
print(X_test.shape,y_test.shape)
###Output
(4702, 92) (4702,)
###Markdown
Drop the missing column (which was added to the testing data in the previous code) from the train data
###Code
X_train.drop(['debt_settlement_flag_Y'], axis=1, inplace=True )
X_train.head(2)
X_test.drop(['debt_settlement_flag_Y'], axis=1, inplace=True )
X_test.head()
###Output
_____no_output_____
###Markdown
Before the analysis, I think random forest will be the better predictor. Logistic Regression, unscaled data
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier=LogisticRegression(max_iter=2000,solver = 'lbfgs')
classifier
classifier.fit(X_train, y_train)
print(f"Logistic Regression unscaled data")
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
###Output
Logistic Regression unscaled data
Training Data Score: 0.6980295566502464
Testing Data Score: 0.5776265418970651
###Markdown
Random Forest Classifier
###Code
# Train a Random Forest Classifier model and print the model score
rfc= RandomForestClassifier(random_state=1, n_estimators=500)
rfc
rfc.fit(X_train, y_train)
print(f"Random Forest Classifier unscaled data")
print(f'Training Score: {rfc.score(X_train, y_train)}')
print(f'Testing Score: {rfc.score(X_test, y_test)}')
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaleddata = scaler.transform(X_train)
X_test_scaleddata= scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Logistic Regression with Scaled Data
###Code
# Train the Logistic Regression model on the scaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier=LogisticRegression(max_iter=2000,solver = 'lbfgs')
classifier
classifier.fit(X_train_scaleddata, y_train)
print(f"Logistic Regression scaled data")
print(f"Training Data Score: {classifier.score(X_train_scaleddata, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaleddata, y_test)}")
###Output
Logistic Regression scaled data
Training Data Score: 0.7079638752052545
Testing Data Score: 0.7679710761378137
###Markdown
Random Forest with Scaled Data
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
rfc = RandomForestClassifier(n_estimators=100, random_state=1)
rfc.fit(X_train_scaleddata, y_train)
print(f"Random Forest Classifier scaled data")
print(f'Training Score: {rfc.score(X_train_scaleddata, y_train)}')
print(f'Testing Score: {rfc.score(X_test_scaleddata, y_test)}')
###Output
Random Forest Classifier scaled data
Training Score: 1.0
Testing Score: 0.6401531263292216
###Markdown
LOGISTIC REGRESSION MODEL
###Code
#import dependencies
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
# Fit our model using the training data
classifier.fit(x_train, y_train)
print(f"Training Data Score: {classifier.score(x_train, y_train)}")
print(f"Testing Data Score: {classifier.score(x_test, y_test)}")
###Output
Training Data Score: 0.6564860426929392
Testing Data Score: 0.5199914929817099
###Markdown
RandomForest Classifier Model
###Code
#import dependencies
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_curve
# Train a Random Forest Classifier model and print the model score
rf_clf = RandomForestClassifier(random_state=1)
rf_clf.fit(x_train, y_train)
#classification reports
y_pred = rf_clf.predict(x_test)
print(classification_report(y_test, y_pred))
#model score
print(f"Training Data Score: {rf_clf.score(x_train, y_train)}")
print(f"Testing Data Score: {rf_clf.score(x_test, y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.6671629094002552
###Markdown
Scaling the Data
###Code
#import dependencies
from sklearn.preprocessing import StandardScaler
# Scale the data and all features
scaler = StandardScaler().fit(x_train)
x_train_scaled = scaler.transform(x_train)
x_test_scaled = scaler.transform(x_test)
# Train the Logistic Regression model on the scaled data and print the model score
# Create a logistic regression model
classifier_scaled = LogisticRegression()
# Fit our model using the training data
classifier_scaled.fit(x_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(x_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(x_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
# Train a Random Forest Classifier model and print the model score
rf_clf_scaled = RandomForestClassifier(random_state=42)
rf_clf_scaled.fit(x_train_scaled, y_train)
#classification reports
y_pred = rf_clf_scaled.predict(x_test_scaled)
print(classification_report(y_test, y_pred))
#model score
print(f"Training Data Score: {rf_clf_scaled.score(x_train_scaled, y_train)}")
print(f"Testing Data Score: {rf_clf_scaled.score(x_test_scaled, y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.6546150574223735
###Markdown
Creating the X and y co-ordinates for train and test dataframes
###Code
# X_2019 has been created for 2019 CSV files and dropped Unnamed and loan_status fields
X_2019 = train_df.drop(['Unnamed: 0','loan_status'], axis=1)
# X_2020 has been created for 2020 CSV files and dropped Unnamed and loan_status fields
X_2020 = test_df.drop(['Unnamed: 0','loan_status'], axis=1)
###Output
_____no_output_____
###Markdown
Creating a training set from the 2019 loans using pd.get_dummies() to convert the categorical data to numeric columns.
###Code
# Convert categorical data to numeric and separate target feature for training data
# LabelEncoder was imported from sklearn.preprocessing
from sklearn.preprocessing import LabelEncoder
X_2019dummies = pd.get_dummies(X_2019)
# y_2019label created to save the target column of loan_status
y_2019label = LabelEncoder().fit_transform(train_df['loan_status'])
y_2019label
# Label encoder converted low_risk = 1 and high_risk = 0
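# (Illustrative check, not part of the original notebook.) LabelEncoder assigns integer
# codes in sorted order of the class names, which is why 'high_risk' -> 0 and 'low_risk' -> 1 above.
print(LabelEncoder().fit(['low_risk', 'high_risk']).classes_)  # ['high_risk' 'low_risk']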
###Output
_____no_output_____
###Markdown
Creating a test set from the 2020 loans using pd.get_dummies() to convert the categorical data to numeric columns.
###Code
# Convert categorical data to numeric and separate target feature for testing data
X_2020dummies = pd.get_dummies(X_2020)
# y_2020label created to save the target column of loan_status
y_2020label = LabelEncoder().fit_transform(test_df['loan_status'])
y_2020label
# Label encoder converted low_risk = 1 and high_risk = 0
# Displaying the Training data set
# Training data set consists of 93 columns
X_2019dummies
# Displaying the Test data set
# Test data set consists of 92 columns; the column "debt_settlement_flag_Y" is not present in the test dataset
X_2020dummies
# add missing dummy variables to testing set
idx = 92
# The column "debt_settlement_flag_Y" was added in the Test dataset with value of 0
X_2020dummies.insert(loc=idx, column = 'debt_settlement_flag_Y', value = 0)
X_2020dummies
###Output
_____no_output_____
###Markdown
Prediction Time:
In my opinion, the Logistic Regression model would perform better. Per the description of the assignment, both CSVs have been undersampled. Logistic regression performs better when the number of noise variables is less than or equal to the number of explanatory variables.
###Code
# Creating Train and Test splits for the 2019 and 2020 dataset
from sklearn.model_selection import train_test_split
X_2019train, X_2019test, y_2019train, y_2019test = train_test_split(X_2019dummies, y_2019label, random_state=1)
X_2020train, X_2020test, y_2020train, y_2020test = train_test_split(X_2020dummies, y_2020label, random_state=1)
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
#classifier
classifier.fit(X_2019train, y_2019train)
print(f"Training Data Score: {classifier.score(X_2019train, y_2019train)}")
print(f"Testing Data Score: {classifier.score(X_2020test, y_2020test)}")
###Output
Training Data Score: 0.6553913519430761
Testing Data Score: 0.548469387755102
###Markdown
Logistic Regression Model scores without scaling the data are as follows:
Training Data Score: 0.6553913519430761
Testing Data Score: 0.548469387755102
###Code
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_2019train, y_2019train)
print(f'Training Score: {clf.score(X_2019train, y_2019train)}')
print(f'Testing Score: {clf.score(X_2020test, y_2020test)}')
###Output
Training Score: 1.0
Testing Score: 0.6930272108843537
###Markdown
Random Forest Classifier Model scores without scaling the data are as follows:
Training Score: 1.0
Testing Score: 0.6930272108843537 Without scaling, the model scores indicate two things:
1. The testing score with the Random Forest Classifier model is better for the test dataset
2. We also observe that the training and test scores for Logistic Regression are closer to each other
However, with these scores the Random Forest Classifier model seems to be the right choice
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_2019train)
X_2019train_scaled = scaler.transform(X_2019train)
X_2020test_scaled = scaler.transform(X_2020test)
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression().fit(X_2019train_scaled, y_2019train)
print(f'Training Score: {clf.score(X_2019train_scaled, y_2019train)}')
print(f'Testing Score: {clf.score(X_2020test_scaled, y_2020test)}')
###Output
Training Score: 0.7065134099616859
Testing Score: 0.70578231292517
###Markdown
Logistic Regression Model scores with scaling the data are as follows:
Training Score: 0.7065134099616859
Testing Score: 0.70578231292517
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_2019train_scaled, y_2019train)
print(f'Training Score: {clf.score(X_2019train_scaled, y_2019train)}')
print(f'Testing Score: {clf.score(X_2020test_scaled, y_2020test)}')
###Output
Training Score: 1.0
Testing Score: 0.6904761904761905
###Markdown
Random Forest Classifier Model scores with scaling the data are as follows:
Training Score: 1.0
Testing Score: 0.6904761904761905 With scaling, the model scores indicate two things:
1. The testing score with the Random Forest Classifier model is similar to the earlier values
2. The Logistic Regression values are better after scaling
With scaling, the Logistic Regression model seems to be the right choice
###Code
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
train_scores = []
test_scores = []
for k in range(1, 20, 2):
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_2019train_scaled, y_2019train)
train_score = knn.score(X_2019train_scaled, y_2019train)
test_score = knn.score(X_2020test_scaled, y_2020test)
train_scores.append(train_score)
test_scores.append(test_score)
print(f"k: {k}, Train/Test Score: {train_score:.3f}/{test_score:.3f}")
plt.plot(range(1, 20, 2), train_scores, marker='o')
plt.plot(range(1, 20, 2), test_scores, marker="x")
plt.xlabel("k neighbors")
plt.ylabel("Testing accuracy score")
plt.show()
###Output
k: 1, Train/Test Score: 1.000/0.529
k: 3, Train/Test Score: 0.799/0.489
k: 5, Train/Test Score: 0.750/0.504
k: 7, Train/Test Score: 0.725/0.496
k: 9, Train/Test Score: 0.716/0.497
k: 11, Train/Test Score: 0.706/0.506
k: 13, Train/Test Score: 0.697/0.510
k: 15, Train/Test Score: 0.695/0.522
k: 17, Train/Test Score: 0.693/0.514
k: 19, Train/Test Score: 0.694/0.520
###Markdown
Prediction Between the two prediction models (Logistic Regression and Random Forest Classifier) given to train and test the sample datasets, I anticipate the Logistic Regression model to perform better, since logistic regression is well suited to predicting a discrete set of classes, which is what we are trying to accomplish here: predicting whether the loan risk is high or low. Also, considering that the data set is unscaled, the logistic regression model will be more sensitive to variables with higher values and may provide biased results. With the scaled data, however, the Logistic Regression model may provide better results given that this exercise is looking to create a prediction model with discrete classes, namely whether the loan is at high or low risk. In addition, most of the features can be grouped by value, such as high vs. low loan amount, high vs. low income, and so on.
###Code
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_squared_error, r2_score
train_df = pd.read_csv(Path('Resources/2019loans.csv'),index_col=False)
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'),index_col=False)
train_df.head()
X_train_df = train_df.drop('target', axis=1)
X_test_df = test_df.drop('target', axis=1)
X_train_df
# Data cleansing - finding null values
# print(train_df.isnull().sum())
# print(test_df.isnull().sum())
# Convert categorical data to numeric and separate target feature for training data
X_train_dummies = pd.get_dummies(X_train_df)
X_train_dummies.head()
# Convert categorical data to numeric and separate target feature for testing data
X_test_dummies = pd.get_dummies(X_test_df)
X_test_dummies.head()
# To identify missing columns in each of the data set
train_col = list(X_train_dummies.columns)
test_col = list(X_test_dummies.columns)
set1 = set(train_col)
set2 = set(test_col)
missing_set2 = list(sorted(set1 - set2))
missing_set1 = list(sorted(set2 - set1))
print('missing in train_col:', missing_set1)
print('missing in test_col:', missing_set2)
# add missing dummy variables to testing set
X_test_dummies['debt_settlement_flag_Y'] = 0  # numeric zero so the dtype matches the other dummy columns
X_test_dummies
y_train_label = LabelEncoder().fit_transform(train_df['target'])
y_test_label = LabelEncoder().fit_transform(test_df['target'])
# y_test_label
# Train the Logistic Regression model on the unscaled data and print the model score
model = LogisticRegression()
model.fit(X_train_dummies, y_train_label)
print(f'Training Score: {model.score(X_train_dummies, y_train_label)}')
print(f'Testing Score: {model.score(X_test_dummies, y_test_label)}')
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=55).fit(X_train_dummies, y_train_label)
print(f'Training Score: {clf.score(X_train_dummies, y_train_label)}')
print(f'Testing Score: {clf.score(X_test_dummies, y_test_label)}')
# Scale the data
scaler = StandardScaler().fit(X_train_dummies)
X_train_scaled = scaler.transform(X_train_dummies)
X_test_scaled = scaler.transform(X_test_dummies)
# Train the Logistic Regression model on the scaled data and print the model score
model = LogisticRegression()
model.fit(X_train_scaled, y_train_label)
print(f'Training Score: {model.score(X_train_scaled, y_train_label)}')
print(f'Testing Score: {model.score(X_test_scaled, y_test_label)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=1000).fit(X_train_scaled, y_train_label)
print(f'Training Score: {clf.score(X_train_scaled, y_train_label)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test_label)}')
###Output
Training Score: 1.0
Testing Score: 0.6458953636750319
###Markdown
Import and Data preprocessing
###Code
#dependancies
import pickle
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import accuracy_score
#loadind data
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
train_df.head(3)
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
test_df.head()
# drop Unamed columns
train_df.drop(['Unnamed: 0'], axis= 1, inplace = True)
del train_df['index']
##
test_df.drop(['Unnamed: 0'], axis= 1, inplace = True)
del test_df['index']
# identify the missing data
train_df.isna().sum()
train_df.info()
# identify the missing data
test_df.isna().sum()
test_df.info()
train_df["loan_status"].unique()
#
train_df["loan_status"].value_counts()
# identify the variable to convert to numerical variable
obj_df = test_df.select_dtypes(include=['object']).copy()
obj_df.head()
# Convert categorical data to numeric and separate target feature for training data
# defining our features and label
X = train_df.drop(['loan_status'], axis =1)
y = LabelEncoder().fit_transform(train_df['loan_status'])
y_train = y
X_train= pd.get_dummies(X, columns=['initial_list_status',
'home_ownership',
'application_type',
'verification_status',
'pymnt_plan',
'hardship_flag',
'debt_settlement_flag'], drop_first=True)
# Convert categorical data to numeric and separate target feature for testing data
# defining our features and label
X = test_df.drop(['loan_status'], axis =1)
y = LabelEncoder().fit_transform(test_df['loan_status'])
# convert the string to numeric variable
y_test = y
X_test= pd.get_dummies(X, columns=['initial_list_status',
'home_ownership',
'application_type',
'verification_status',
'pymnt_plan',
'hardship_flag',
'debt_settlement_flag'], drop_first=True)
X_test.describe()
# add missing dummy variables to testing set
feature_difference = set(X_train) - set(X_test)
feature_difference_df = pd.DataFrame(data=np.zeros((X_test.shape[0], len(feature_difference))),
columns=list(feature_difference))
X_test = X_test.join(feature_difference_df)
X_test
X_test.describe()
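# Optional sketch: after joining the missing dummy columns, align the test frame's
# column order with the training frame so both feed the models in the same order
# (assumes X_train is the dummy-encoded training frame built above).
X_test = X_test[X_train.columns]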
###Output
_____no_output_____
###Markdown
My predictionIn my opinion the model that will predict best will be the logistic regression, because the output we are going to predict is a categorical variable. Data Modeling
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
model = LogisticRegression(random_state =1, solver = 'lbfgs', max_iter=100)
classifier = model.fit(X_train, y_train)
print(f'Unscaled LogisticRegression Training Score:{classifier.score(X_train, y_train)}')
print(f'Unscaled LogisticRegression Testing Score:{classifier.score(X_test, y_test)}')
# predict the 2020 data
predictions = classifier.predict(X_test)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
# confusion matrix
y_true = y_test
y_pred = classifier.predict(X_test)
confusion_matrix(y_true, y_pred)
# accuracy of the model
print(classification_report(y_true, y_pred))
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=50).fit(X_train, y_train)
print(f'Unscaled RandomForestClassifier Training Score: {clf.score(X_train, y_train)}')
print(f'Unscaled RandomForestClassifier Testing Score: {clf.score(X_test, y_test)}')
predictions = clf.predict(X_test)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
# confusion matrix
y_true = y_test
y_pred = predictions
confusion_matrix(y_true, y_pred)
print(classification_report(y_true, y_pred))
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
model = LogisticRegression(random_state =1, solver = 'lbfgs', max_iter=100)
classifier = model.fit(X_train_scaled, y_train)
print(f'LogisticRegression Training Score:{classifier.score(X_train_scaled, y_train)}')
print(f'LogisticRegression Testing Score:{classifier.score(X_test_scaled, y_test)}')
# predict the 2020 data with the scaled features
predictions = classifier.predict(X_test_scaled)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
# confusion matrix
y_true = y_test
y_pred = predictions
confusion_matrix(y_true, y_pred)
print(classification_report(y_true, y_pred))
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=50).fit(X_train_scaled, y_train)
print(f'RandomForestClassifier Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'RandomForestClassifier Testing Score: {clf.score(X_test_scaled, y_test)}')
# predict 2020
predictions = clf.predict(X_test)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
# confusion matrix
y_true = y_test
y_pred = predictions
confusion_matrix(y_true, y_pred)
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
0 0.46 0.31 0.37 2351
1 0.48 0.64 0.55 2351
accuracy 0.47 4702
macro avg 0.47 0.47 0.46 4702
weighted avg 0.47 0.47 0.46 4702
###Markdown
conclusionUnscaled LogisticRegression Testing Score:0.5161633347511697Unscaled RandomForestClassifier Testing Score: 0.6307954062101233Scaled LogisticRegression Testing Score: 0.7681837515950659Scaled RandomForestClassifier Testing Score: 0.6312207571246278I was surprised to see the RandomForestClassifier perform better than the LogisticRegression for the first time. With the scaled data the LogisticRegression performed better than the RandomForestClassifier. The RandomForestClassifier score is stable across the two kinds of data (unscaled and scaled). But because the accuracy of the unscaled RandomForestClassifier (0.63) is higher than the LogisticRegression one, I think the RandomForestClassifier with the unscaled data is the one I should keep.
###Code
# Saving model
pickle.dump(clf, open('Model/model.pkl','wb'))
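# Sketch (assumes the file written above): the saved model can be reloaded later
# and reused on feature data preprocessed exactly as it was for training.
loaded_model = pickle.load(open('Model/model.pkl', 'rb'))
# e.g. loaded_model.predict(new_features) -- 'new_features' is a hypothetical frame
# with the same columns and preprocessing as the training data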
###Output
_____no_output_____
###Markdown
Data Cleaning & Split the Data into Training and Testing
###Code
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership", "annual_inc",
"verification_status", "pymnt_plan", "dti", "delinq_2yrs",
"inq_last_6mths", "open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int",
"total_rec_late_fee", "recoveries", "collection_recovery_fee",
"last_pymnt_amnt", "collections_12_mths_ex_med", "policy_code",
"application_type", "acc_now_delinq", "tot_coll_amt", "tot_cur_bal",
"open_acc_6m", "open_act_il", "open_il_12m", "open_il_24m",
"mths_since_rcnt_il", "total_bal_il", "il_util", "open_rv_12m",
"open_rv_24m", "max_bal_bc", "all_util", "total_rev_hi_lim", "inq_fi",
"total_cu_tl", "inq_last_12m", "acc_open_past_24mths", "avg_cur_bal",
"bc_open_to_buy", "bc_util", "chargeoff_within_12_mths", "delinq_amnt",
"mo_sin_old_il_acct", "mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op",
"mo_sin_rcnt_tl", "mort_acc", "mths_since_recent_bc",
"mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0", "num_sats",
"num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75",
"pub_rec_bankruptcies", "tax_liens", "tot_hi_cred_lim",
"total_bal_ex_mort", "total_bc_limit", "total_il_high_credit_limit",
"hardship_flag", "debt_settlement_flag",
"loan_status"
]
df = pd.read_csv(R'/Users/melissa/Downloads/Supervised_Machine_Learning/Resources /Generator /2019loans.csv', skiprows=1, header=None, names=columns, index_col=False)
df
df1 = df.drop("tot_coll_amt", axis=1)
x = pd.get_dummies(df1, dtype=float)
x
y = LabelEncoder().fit_transform(df['tot_coll_amt'])
y
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1)
###Output
_____no_output_____
###Markdown
Predictions and Comparison Makes a prediction on which model will perform better on the unscaled data. Since we are dealing with a large dataset that contains categorical variables, it is important to take into account models that can split the data on categorical features. With that, I think Random Forest will do better because it generalizes the data (random). Makes a prediction on which model will perform better on the scaled data. The model that will perform better on the scaled data will be the logistic regression, because there are fewer noise variables and as a result there could be a higher true and false positive rate. Makes a comparison between predicted behavior of the models on unscaled data and the actual results. The results indicate that training accuracy is higher than testing accuracy in both models, and similar to my prediction, random forest can be the better model. Makes a comparison between predicted behavior of the models on scaled data and the actual results. The scaled data indicate that test accuracy is much higher than training accuracy in the logistic regression; this shows that random forest would be the better model because its test and training scores are closer in number. I, however, assumed that the linear model would be better.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
linear_df = LogisticRegression()
linear_df.fit(x_train, y_train)
training_score = linear_df.score(x_train, y_train)
testing_score = linear_df.score(x_test, y_test)
print(f"Training Score: {training_score}")
print(f"Testing Score: {testing_score}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(max_depth=10, random_state=1, n_estimators=500).fit(x_train, y_train)
print(f'Training Score: {clf.score(x_train, y_train)}')
print(f'Testing Score: {clf.score(x_test, y_test)}')
# Scale the data
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1)
scaler = StandardScaler().fit(x_train)
x_train_scaled = scaler.transform(x_train)
x_test_scaled = scaler.transform(x_test)
linear_df.fit(x_train_scaled, y_train)
training_score = linear_df.score(x_train_scaled, y_train)
testing_score = linear_df.score(x_test_scaled, y_test)
print(f"Training Score: {training_score}")
print(f"Testing Score: {testing_score}")
clf = RandomForestClassifier(max_depth=10, random_state=1, n_estimators=500).fit(x_train_scaled, y_train)
print(f'Training Score: {clf.score(x_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(x_test_scaled, y_test)}')
###Output
Training Score: 0.8562671045429666
Testing Score: 0.8492610837438423
###Markdown
Hypothesis - From research online, the random forest classifier model usually performs better than the logistic regression model, so I assume the random forest classifier model will outperform the logistic regression model in our example as well.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f"Training Data Score: {clf.score(X_train, y_train)}")
print(f"Testing Data Score: {clf.score(X_test, y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.6180348787749894
###Markdown
2nd Hypothesis - Since scaling changes the range of the datasets, I assume scaling the data will make the models even more accurate. I am still assuming the random forest classifier model will outperform the logistic regression model.
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
srfc = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f"Training Data Score: {srfc.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {srfc.score(X_test_scaled, y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.6193109315185028
###Markdown
I predict that the Random Forest Classifier model will perform better than the Logistic Regression because there are so many features in these datasets (86 columns).
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(max_iter = 800)
clf.fit(X_train,y)
print(f'the training score is: {clf.score(X_train,y)}')
print(f'the testing score is: {clf.score(X_test,y2)}')
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier (n_estimators = 200, random_state = 1)
clf.fit(X_train,y)
print(f'the training score is: {clf.score(X_train,y)}')
print(f'the testing score is: {clf.score(X_test, y2)}')
###Output
the training score is: 1.0
the testing score is: 0.6210123351765207
###Markdown
As a result, the Random Forest Classifier model performed better, but with a 1.0 training score it's clearly overfitted.
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scale = StandardScaler().fit(X_train)
X_train_scaled = scale.transform(X_train)
X_test_scaled = scale.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression(max_iter = 800)
clf.fit(X_train_scaled,y)
print(f'the training score is: {clf.score(X_train_scaled,y)}')
print(f'the testing score is: {clf.score(X_test_scaled,y2)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier (n_estimators = 200, random_state = 1)
clf.fit(X_train_scaled,y)
print(f'the training score is: {clf.score(X_train_scaled,y)}')
print(f'the testing score is: {clf.score(X_test_scaled, y2)}')
###Output
the training score is: 1.0
the testing score is: 0.6214376860910251
###Markdown
Retrieve the Data
###Code
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df.head()
#Separate 'loan_status' for training data
y_train = train_df['loan_status']
X_train = train_df.drop(columns=['loan_status'])
X_train.head()
test_df.head()
#Separate 'loan_status' for testing data
y_test = test_df['loan_status']
X_test = test_df.drop(columns=['loan_status'])
X_test.head()
###Output
_____no_output_____
###Markdown
Preprocessing: Convert Categorical Data to Numeric
###Code
# Convert categorical data to numeric (with one-hot encoding)
X_train_dummies = pd.get_dummies(X_train)
X_test_dummies = pd.get_dummies(X_test)
print(f'Training Data: {X_train_dummies.shape}, Testing Data: {X_test_dummies.shape}')
# Converting output labels to 0 and 1 for training data
y_label_train = LabelEncoder().fit_transform(train_df['loan_status'])
y_label_train
# Converting output labels to 0 and 1 for test data
y_label_test = LabelEncoder().fit_transform(test_df['loan_status'])
y_label_test
# add missing dummy variables to testing set
for column in X_train_dummies.columns:
if column not in X_test_dummies.columns:
X_test_dummies[column]=0
# check if number of columns match
print(f'Training Data: {X_train_dummies.shape}, Testing Data: {X_test_dummies.shape}')
###Output
Training Data: (12180, 94), Testing Data: (4702, 94)
###Markdown
Predictions Both models studied are predictive analysis models. Logistic Regression is an algorithm which predicts a discrete set of classes or categories. In particular, it uses an activation function to return a probability between 0 and 1. As for the Random Forest Classifier, it is a method which builds a set of decision trees from random subsets of the training data and then returns a prediction. Overall, Random Forest Classifiers are more commonly used given their high accuracy and medium interpretability. Given that the data we're working with here is categorical, the Random Forest Classifier will work best. Logistic Regression works best with data that is linearly separable, which is not our case here. Here, we can't rule out bias, since the data is unscaled. Logistic Regression Model - Unscaled
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
unscaled_logistic = LogisticRegression().fit(X_train_dummies, y_label_train)
print(f'Training Score: {unscaled_logistic.score(X_train_dummies, y_label_train)}')
print(f'Testing Score: {unscaled_logistic.score(X_test_dummies, y_label_test)}')
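# Sketch of the probability output described in the prediction above (assumes the
# fitted model and dummy frames from this cell): predict_proba returns one
# probability per class for each row, which predict() thresholds to choose a label.
print(unscaled_logistic.predict_proba(X_test_dummies)[:5])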
###Output
Training Score: 0.648440065681445
Testing Score: 0.5253083794130158
###Markdown
Random Forest Classifier - Unscaled
###Code
# Train a Random Forest Classifier model and print the model score
unscaled_r_forest = RandomForestClassifier(random_state=42, n_estimators=100).fit(X_train_dummies, y_label_train)
print(f'Training Score: {unscaled_r_forest.score(X_train_dummies, y_label_train)}')
print(f'Testing Score: {unscaled_r_forest.score(X_test_dummies, y_label_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6305827307528711
###Markdown
Unscaled Results Logistic Regression Model Unscaled: - Training Score: 0.648440065681445- Testing Score: 0.5253083794130158Random Forest Classifier Unscaled: - Training Score: 1.0- Testing Score: 0.6305827307528711Just as predicted, given the unscaled data, the Random Forest Classifier model performed better than the Logistic Regression model with a testing score of ~63%. However, although it performed better than the Logistic Regression model, it is clearly an overfitted model, since the testing score is much lower than the training score. Scale the Data
###Code
# Scale the data
scaler = StandardScaler().fit(X_train_dummies)
X_train_scaled = scaler.transform(X_train_dummies)
X_test_scaled = scaler.transform(X_test_dummies)
#Shape the data
print(f'Scaled Training Dataset shape: {X_train_scaled.shape, y_train.shape}')
print(f'Scaled Testing Dataset shape: {X_test_scaled.shape, y_test.shape}')
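# Quick sanity-check sketch (imports NumPy locally in case it isn't already loaded):
# after StandardScaler, most training columns should show mean ~0 and std ~1, so no
# feature dominates purely by magnitude (constant dummy columns remain all zeros).
import numpy as np
print(np.round(X_train_scaled.mean(axis=0)[:5], 3))
print(np.round(X_train_scaled.std(axis=0)[:5], 3))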
###Output
Scaled Training Dataset shape: ((12180, 94), (12180,))
Scaled Testing Dataset shape: ((4702, 94), (4702,))
###Markdown
Prediction for Scaled Data Scaling data allows all features to be shifted to similar numeric scales so that the magnitude of one feature doesn't bias the model during training.Scaling data has little effect on tree-based classifiers such as the Random Forest Classifier, which means only the score for the Logistic Regression Model should improve. Logistic Regression Model - Scaled
###Code
# Train the Logistic Regression model on the scaled data and print the model score
scaled_logistic = LogisticRegression().fit(X_train_scaled, y_label_train)
print(f'Training Score: {scaled_logistic.score(X_train_scaled, y_label_train)}')
print(f'Testing Score: {scaled_logistic.score(X_test_scaled, y_label_test)}')
###Output
Training Score: 0.713136288998358
Testing Score: 0.7205444491705657
###Markdown
Random Forest Classifier - Scaled
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
scaled_r_forest = RandomForestClassifier(random_state=42, n_estimators=100).fit(X_train_scaled, y_label_train)
print(f'Training Score: {scaled_r_forest.score(X_train_scaled, y_label_train)}')
print(f'Testing Score: {scaled_r_forest.score(X_test_scaled, y_label_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6297320289238622
###Markdown
Import Dependencies and Data
###Code
import numpy as np
import pandas as pd
import sklearn
from pathlib import Path
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
###Output
_____no_output_____
###Markdown
See data and what variables are not numerical
###Code
train_df.head()
test_df.head()
train_df.columns
test_df.columns
see_hidden = train_df[['dti', 'delinq_2yrs', 'inq_last_6mths', 'open_acc',
'pub_rec', 'revol_bal', 'total_acc', 'initial_list_status', 'out_prncp',
'out_prncp_inv', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp',
'total_rec_int', 'total_rec_late_fee', 'recoveries',
'collection_recovery_fee', 'last_pymnt_amnt',
'collections_12_mths_ex_med', 'policy_code', 'application_type',
'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m',
'open_act_il', 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il',
'total_bal_il', 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc',
'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m',
'acc_open_past_24mths', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'chargeoff_within_12_mths', 'delinq_amnt', 'mo_sin_old_il_acct',
'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op', 'mo_sin_rcnt_tl',
'mort_acc', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_accts_ever_120_pd', 'num_actv_bc_tl', 'num_actv_rev_tl',
'num_bc_sats', 'num_bc_tl', 'num_il_tl', 'num_op_rev_tl',
'num_rev_accts', 'num_rev_tl_bal_gt_0', 'num_sats', 'num_tl_120dpd_2m',
'num_tl_30dpd', 'num_tl_90g_dpd_24m', 'num_tl_op_past_12m']]
see_hidden.head()
see_hidden_2 = train_df[['total_pymnt', 'total_pymnt_inv', 'total_rec_prncp',
'total_rec_int', 'total_rec_late_fee', 'recoveries',
'collection_recovery_fee', 'last_pymnt_amnt',
'collections_12_mths_ex_med', 'policy_code', 'application_type',
'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m',
'open_act_il', 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il',
'total_bal_il', 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc',
'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m',
'acc_open_past_24mths', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'chargeoff_within_12_mths', 'delinq_amnt', 'mo_sin_old_il_acct',
'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op', 'mo_sin_rcnt_tl',
'mort_acc', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_accts_ever_120_pd', 'num_actv_bc_tl', 'num_actv_rev_tl',
'num_bc_sats']]
see_hidden_2.head()
see_hidden_3 = train_df[['application_type',
'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m',
'open_act_il', 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il',
'total_bal_il', 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc',
'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m',
'acc_open_past_24mths', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'chargeoff_within_12_mths', 'delinq_amnt']]
see_hidden_3
see_hidden_4 = train_df[['il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc',
'all_util']]
see_hidden_4
train_df['home_ownership'].unique()
test_df['home_ownership'].unique()
train_df['verification_status'].unique()
train_df['verification_status'].value_counts()
test_df['verification_status'].unique()
train_df['loan_status'].unique()
test_df['loan_status'].unique()
train_df['pymnt_plan'].unique()
test_df['pymnt_plan'].unique()
train_df['hardship_flag'].unique()
test_df['hardship_flag'].unique()
train_df['initial_list_status'].unique()
test_df['initial_list_status'].unique()
train_df['application_type'].unique()
test_df['application_type'].unique()
train_df['debt_settlement_flag'].unique()
test_df['debt_settlement_flag'].unique()
###Output
_____no_output_____
###Markdown
Drop rows with no data
###Code
train_df = train_df.dropna()
test_df = test_df.dropna()
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for training data
###Code
X_train = pd.get_dummies(data=train_df, columns=['home_ownership', 'verification_status', 'loan_status', 'pymnt_plan', 'initial_list_status','application_type', 'hardship_flag', 'debt_settlement_flag'])
# Convert categorical data to numeric and separate target feature for training data
X_test = pd.get_dummies(data=test_df, columns=['home_ownership', 'verification_status', 'loan_status', 'pymnt_plan', 'initial_list_status','application_type', 'hardship_flag', 'debt_settlement_flag'])
X_train.columns
X_train = X_train.drop(columns=['Unnamed: 0', 'home_ownership_ANY', 'verification_status_Not Verified','loan_status_high_risk', 'pymnt_plan_n', 'initial_list_status_f', 'application_type_Joint App', 'hardship_flag_N', 'debt_settlement_flag_Y'], axis=1)
X_test = X_test.drop(columns=['Unnamed: 0', 'home_ownership_ANY', 'verification_status_Not Verified','loan_status_high_risk', 'pymnt_plan_n', 'initial_list_status_f', 'application_type_Joint App', 'hardship_flag_N'], axis=1)
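# Note: the loan_status_low_risk dummy created by get_dummies above is still present in
# X_train and X_test (only loan_status_high_risk was dropped), so the target leaks into
# the features; this explains the near-perfect scores reported further below.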
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for testing data label encoder
###Code
y_train = LabelEncoder().fit_transform(train_df['loan_status'])
y_test = LabelEncoder().fit_transform(test_df['loan_status'])
print(y_train)
print(y_test)
# X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression()
model.fit(X_train, y_train)
print(f"Training Data Score: {model.score(X_train, y_train)}")
print(f"Testing Data Score: {model.score(X_test, y_test)}")
###Output
Training Data Score: 0.6560755336617405
Testing Data Score: 0.5180774138664398
###Markdown
add missing dummy variables to testing set
###Code
X_train.columns
X_test.columns
for c in X_train.columns:
if c not in X_test.columns:
print(c)
# I already deleted extraneous columns and made the other variables binary, so there really isn't a need for this step
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model on the unscaled data and print the model score
###Code
clf = LogisticRegression(max_iter=100, random_state=10)
clf.fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 0.6560755336617405
Testing Score: 0.5180774138664398
###Markdown
Train a Random Forest Classifier model and print the model score
###Code
rfc = RandomForestClassifier(random_state=10, n_estimators=30).fit(X_train, y_train)
print(f'Training Score: {rfc.score(X_train, y_train)}')
print(f'Testing Score: {rfc.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 1.0
###Markdown
Scale the data
###Code
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print(X_train_scaled)
print(X_test_scaled)
###Output
[[-1.31172014 -0.39311205 0.73658452 ... 0.41370744 -0.17149859
0.02026518]
[-0.46579523 0.35168119 -0.19171582 ... 0.41370744 -0.17149859
0.02026518]
[ 1.3364188 0.25400339 -0.32080462 ... 0.41370744 -0.17149859
0.02026518]
...
[ 1.67571549 -1.34791257 0.85997823 ... 0.41370744 -0.17149859
0.02026518]
[ 1.67600634 -0.23438563 -1.00231755 ... -2.41716707 -0.17149859
0.02026518]
[ 1.67906533 -0.23438563 0.69292214 ... 0.41370744 -0.17149859
0.02026518]]
[[-1.20255948 2.20755943 -1.12001617 ... 0.41370744 -0.17149859
0.02026518]
[-1.62943343 -1.11348584 0.21833096 ... 0.41370744 -0.17149859
0.02026518]
[-1.49837845 -1.34791257 0.54295132 ... 0.41370744 -0.17149859
0.02026518]
...
[-1.10927546 -0.72277464 1.7009538 ... 0.41370744 -0.17149859
0.02026518]
[-1.10922531 -0.91813024 0.85997823 ... 0.41370744 -0.17149859
0.02026518]
[-1.1091551 1.23078141 1.22636262 ... 0.41370744 -0.17149859
0.02026518]]
###Markdown
Train the Logistic Regression model on the scaled data and print the model score
###Code
clf_scaled = LogisticRegression(max_iter=100, random_state=10)
clf_scaled.fit(X_train_scaled, y_train)
print(f'Training Score: {clf_scaled.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf_scaled.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.9997873245427478
###Markdown
Train a Random Forest Classifier model on the scaled data and print the model score
###Code
rfc_scaled = RandomForestClassifier(random_state=10, n_estimators=30).fit(X_train_scaled, y_train)
print(f'Training Score: {rfc_scaled.score(X_train_scaled, y_train)}')
print(f'Testing Score: {rfc_scaled.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 1.0
###Markdown
My prediction is that the RandomForest model will perform better as it is more focused on accuracy and does well with categorical datasets. RandomForest is also less prone to overfitting. Logistic Regression Model
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train,y_train)
log_unscaled=classifier.fit(X_train, y_train)
#print model scores
print(f'Training Score: {classifier.score(X_train, y_train)}')
print(f'Testing Score: {classifier.score(X_test, y_test)}')
#report
from sklearn.metrics import classification_report
y_true = y_test
y_pred = classifier.predict(X_test)
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
high_risk 0.53 0.30 0.39 2351
low_risk 0.51 0.73 0.60 2351
accuracy 0.52 4702
macro avg 0.52 0.52 0.49 4702
weighted avg 0.52 0.52 0.49 4702
###Markdown
Random Forest Classifier
###Code
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
classifier = RandomForestClassifier(random_state=1,n_estimators=400).fit(X_train, y_train)
print(f'Training Score: {classifier.score(X_train, y_train)}')
print(f'Testing Score: {classifier.score(X_test, y_test)}')
# classification Report
from sklearn.metrics import classification_report
y_true = y_test
y_pred = classifier.predict(X_test)
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
high_risk 0.61 0.81 0.70 2351
low_risk 0.72 0.48 0.57 2351
accuracy 0.64 4702
macro avg 0.66 0.64 0.63 4702
weighted avg 0.66 0.64 0.63 4702
###Markdown
Between the LogisticRegression and the RandomForest Classifier models, the RandomForest had a better training and testing score than the LogisticRegression. Also, when looking at their classification reports, RandomForest's f1-score was higher, i.e. closer to 1.
###Code
#scaled data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
X_test_scaled
###Output
_____no_output_____
###Markdown
Normalizing the data helps with optimization. It does not really affect RandomForest models. However, with LogisticRegression, normalized data makes the expected values easier to interpret. My prediction, again, is that the LogisticRegression model will perform better than RandomForest.
###Code
# Scale the data
# Train the Logistic Regression model on the scaled data and print the model score
classifier=LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {classifier.score(X_test_scaled, y_test)}')
#LogisticRegression Scaled Classification Report
from sklearn.metrics import classification_report
y_true = y_test
y_pred = classifier.predict(X_test_scaled)
print(classification_report(y_true, y_pred))
# Train a Random Forest Classifier model on the scaled data and print the model score
classifier = RandomForestClassifier(random_state=1, n_estimators=400).fit(X_train_scaled, y_train)
print(f'Training Score: {classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {classifier.score(X_test_scaled, y_test)}')
#RandomForest Scaled Classification Report
from sklearn.metrics import classification_report
y_true = y_test
y_pred = classifier.predict(X_test_scaled)
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
high_risk 0.33 0.00 0.00 2351
low_risk 0.50 1.00 0.67 2351
accuracy 0.50 4702
macro avg 0.42 0.50 0.33 4702
weighted avg 0.42 0.50 0.33 4702
###Markdown
Logistic vs. Random Forest Model Unscaled data PredictionsI predict the logistic regression model will perform worse on the unscaled dataset compared to the random forest model. Logistic regression models classify the label based on an arithmetic equation 'weighted' by each feature's numerical value. Therefore, since the dataset is unscaled and highly skewed, the LR is likely to underperform. The random forest classifier is not affected by the scaling of the dataset, so I predict greater accuracy.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train_dummies, y_train_label)
print(f"Training test score: {classifier.score(X_train_dummies, y_train_label)}")
print(f"Testing test score: {classifier.score(X_test_dummies, y_test_label)}")
# Train a Random Forest Classifier model and print the model score
classifier_rf = RandomForestClassifier(n_estimators=200)
classifier_rf.fit(X_train_dummies, y_train_label)
print(f"Training test score: {classifier_rf.score(X_train_dummies, y_train_label)}")
print(f"Testing test score: {classifier_rf.score(X_test_dummies, y_test_label)}")
###Output
Training test score: 1.0
Testing test score: 0.6112292641429179
###Markdown
PerformanceAs expected, the random forest model outperformed the logistic regression. The random forest is also overfitted to the training data set, indicated by a training accuracy of 1.0. I anticipate the logistic regression will perform much better with the scaled dataset.
###Code
# Scale the data
scaler = StandardScaler().fit(X_train_dummies)
X_train_scaled = scaler.transform(X_train_dummies)
X_test_scaled = scaler.transform(X_test_dummies)
# Train the Logistic Regression model on the scaled data and print the model score
classifier_LR = LogisticRegression(max_iter=1000)
classifier_LR.fit(X_train_scaled, y_train_label)
print(f"Training test score: {classifier_LR.score(X_train_scaled, y_train_label)}")
print(f"Testing test score: {classifier_LR.score(X_test_scaled, y_test_label)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
classifier_rf = RandomForestClassifier(n_estimators=200)
classifier_rf.fit(X_train_scaled, y_train_label)
print(f"Training test score: {classifier_rf.score(X_train_scaled, y_train_label)}")
print(f"Testing test score: {classifier_rf.score(X_test_scaled, y_test_label)}")
###Output
Training test score: 1.0
Testing test score: 0.5878349638451723
###Markdown
Performance on scaled dataThe logistic regression model outperformed the random forest model when the data is scaled using the StandardScaler() function. The logistic regression model dramatically increased its testing accuracy from 0.58 -> 0.72, while the random forest's testing accuracy decreased from 0.62 -> 0.58. Trying feature selection on dataset PredictionSince there are so many features in this training model, I anticipate that many are not impactful in the models' decisions. Here I will find the non-essential features, remove them from the model, and retest.
###Code
# Determining what features are important in the random forest model
# (imports used in this and the following cells; harmless if already imported earlier)
import matplotlib.pyplot as plt
from sklearn.feature_selection import SelectFromModel
features = classifier_rf.feature_importances_
plt.bar(x = range(len(features)), height=features)
plt.xlabel('Feature number')
plt.ylabel('Feature importance')
plt.title('Feature importance vs. feature index')
plt.show()
sel = SelectFromModel(classifier_LR)
sel.fit(X_train_scaled, y_train_label)
sel.get_support()
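# Sketch (assumption: summarizing the boolean mask above rather than displaying it raw):
# count how many of the original dummy features SelectFromModel keeps.
print(f'{sel.get_support().sum()} of {X_train_scaled.shape[1]} features selected')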
# feature selection
# transforming unscaled datasets to remove unimportant features
X_selected_train = sel.transform(X_train_dummies)
X_selected_test = sel.transform(X_test_dummies)
# scale filtered datasets
scaler = StandardScaler().fit(X_selected_train)
X_selected_train_scaled = scaler.transform(X_selected_train)
X_selected_test_scaled = scaler.transform(X_selected_test)
classifier_LR_selected = LogisticRegression(max_iter=1000).fit(X_selected_train_scaled, y_train_label)
print(f'Training Score: {classifier_LR_selected.score(X_selected_train_scaled, y_train_label)}')
print(f'Testing Score: {classifier_LR_selected.score(X_selected_test_scaled, y_test_label)}')
sel = SelectFromModel(classifier_rf)
sel.fit(X_train_scaled, y_train_label)
sel.get_support()
# feature selection
X_selected_train = sel.transform(X_train_dummies)
X_selected_test = sel.transform(X_test_dummies)
scaler = StandardScaler().fit(X_selected_train)
X_selected_train_scaled = scaler.transform(X_selected_train)
X_selected_test_scaled = scaler.transform(X_selected_test)
classifier_rf = RandomForestClassifier(n_estimators=200)
classifier_rf.fit(X_selected_train_scaled, y_train_label)
print(f'Training Score: {classifier_rf.score(X_selected_train_scaled, y_train_label)}')
print(f'Testing Score: {classifier_rf.score(X_selected_test_scaled, y_test_label)}')
###Output
Training Score: 1.0
Testing Score: 0.6142067205444491
###Markdown
Insights from the above logistic regression and random forest classification using unscaled data:- random forest had higher accuracy on the test dataset.- compared to logistic regression, random forest shows an overfitting problem.
###Code
# Scale the data
scaler=StandardScaler()
scaler.fit(x_train_encoded)
x_train_encoded_scale=scaler.transform(x_train_encoded)
x_test_encoded_scale=scaler.transform(x_test_encoded)
# Train the Logistic Regression model on the scaled data and print the model score
model = LogisticRegression(max_iter=10000)
model.fit(x_train_encoded_scale, y_train_le)
model.predict(x_test_encoded_scale)
print(f"Training Data Score: {model.score(x_train_encoded_scale, y_train_le)}")
print(f"Testing Data Score: {model.score(x_test_encoded_scale, y_test_le)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1).fit(x_train_encoded_scale, y_train_le)
clf.predict(x_test_encoded_scale)
print(f'Training Score: {clf.score(x_train_encoded_scale, y_train_le)}')
print(f'Testing Score: {clf.score(x_test_encoded_scale, y_test_le)}')
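# Optional sketch (assumption: capping tree depth is one way to address the overfitting
# noted above) -- compare against the unconstrained forest trained in this cell:
clf_shallow = RandomForestClassifier(random_state=1, max_depth=10).fit(x_train_encoded_scale, y_train_le)
print(f'Capped-depth Training Score: {clf_shallow.score(x_train_encoded_scale, y_train_le)}')
print(f'Capped-depth Testing Score: {clf_shallow.score(x_test_encoded_scale, y_test_le)}')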
###Output
Training Score: 1.0
Testing Score: 0.6418545299872395
###Markdown
Hypothesis: I guess that the logistic regression will be the best model.
###Code
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df.head()
# Drop redundant columns to create X training data, remove target column
X_train = train_df.drop(['Unnamed: 0', 'index', 'loan_status'], axis=1)
X_train
# get dummy the x data the entire dataframe
X_dummies_train = pd.get_dummies(X_train)
print(X_dummies_train.columns)
X_dummies_train
# loan status is the target
y_train = train_df['loan_status']
y_train
y_train_label = LabelEncoder().fit_transform(train_df['loan_status'])
# Drop redundant columns to create X training data, remove target column
X_test = test_df.drop(['Unnamed: 0', 'index', 'loan_status'], axis=1)
X_test
# get dummy the x test data the entire dataframe
X_dummies_test = pd.get_dummies(X_test)
print(X_dummies_test.columns)
X_dummies_test
# add missing dummy variables to testing set
for col in X_dummies_train.columns:
if col not in X_dummies_test.columns:
X_dummies_test[col] = 0
# loan status is the target
y_test = test_df['loan_status']
y_test
# Do I need to convert the categorical data to numeric?
y_test_label = LabelEncoder().fit_transform(test_df['loan_status'])
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_dummies_train, y_train_label)
print(f"Training Data Score: {classifier.score(X_dummies_train, y_train_label)}")
print(f"Testing Data Score: {classifier.score(X_dummies_test, y_test_label)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1).fit(X_dummies_train, y_train)
print(f'Training Score: {clf.score(X_dummies_train, y_train)}')
print(f'Testing Score: {clf.score(X_dummies_test, y_test)}')
# Scale the data and rerun the models
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_dummies_train)
X_train_scaled = scaler.transform(X_dummies_train)
X_test_scaled = scaler.transform(X_dummies_test)
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
predictions = classifier.predict(X_dummies_test)
pd.DataFrame({"Prediction": predictions, "Actual": y_test_label})
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
# How do I assess which model performed better? Do I need to use a confusion matrix and assess multiple elements?
###Output
Training Score: 1.0
Testing Score: 0.6548277328796257
###Markdown
PREDICTION I predict that the Logistic Regression model will perform better than the Random Forest Classifier model, as the Random Forest Classifier doesn't tend to generalize as well and we're working with a fairly large number of features.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
X_test_scaled
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
<ipython-input-27-60f91c5f130b>:2: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
###Markdown
Reading the CSV files + data cleansing
###Code
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df.head()
test_df.head()
# preparing dataset for training data
x_tr = train_df.drop(columns = ["loan_status"])
y_tr = train_df["loan_status"]
x_tr.head()
# preparing dataset for testing data
x_te = test_df.drop(columns = ["loan_status"])
y_te = test_df["loan_status"]
x_te.head()
# Convert categorical data to numeric and separate target feature for training data
x_tr = pd.get_dummies(x_tr)
# Convert categorical data to numeric and separate target feature for testing data
x_te = pd.get_dummies(x_te)
# add missing dummy variables to testing set
for c in x_tr.columns:
if c not in x_te.columns:
x_te[c] = 0
###Output
_____no_output_____
###Markdown
Model --> Fit --> Predict
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
risk_model = LogisticRegression()
risk_model.fit(x_tr, y_tr)
predicted = risk_model.predict(x_tr)
risk_model.score(x_tr, y_tr)
risk_model.score(x_te, y_te)
# Train a Random Forest Classifier model and print the model score
forest_classifier_mode = RandomForestClassifier(n_estimators=1000, random_state=66)
forest_model = forest_classifier_mode.fit(x_tr, y_tr)
forest_model.score(x_tr, y_tr)
forest_model.score(x_te, y_te)
# Scale the data
scaler = StandardScaler().fit(x_tr)
x_tr_scaled = scaler.transform(x_tr)
x_te_scaled = scaler.transform(x_te)
# Train the Logistic Regression model on the scaled data and print the model score
risk_model.fit(x_tr_scaled, y_tr)
predicted = risk_model.predict(x_tr_scaled)
risk_model.score(x_tr_scaled, y_tr)
risk_model.score(x_te_scaled, y_te)
# Train a Random Forest Classifier model on the scaled data and print the model score
forest_classifier_mode = RandomForestClassifier(n_estimators=1000, random_state=66)
forest_model = forest_classifier_mode.fit(x_tr_scaled, y_tr)
forest_model.score(x_tr_scaled, y_tr)
forest_model.score(x_te_scaled, y_te)
###Output
_____no_output_____
###Markdown
PredictionI think the random forest model will do better because the ability to increase n_estimators will help the model during training.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression().fit(x_train, y_train)
print(f"Training Data Score: {classifier.score(x_train, y_train)}")
print(f"Testing Data Score: {classifier.score(x_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
RFclassifier = RandomForestClassifier(random_state=1, n_estimators=25).fit(x_train, y_train)
print(f'Training Score: {RFclassifier.score(x_train, y_train)}')
print(f'Testing Score: {RFclassifier.score(x_test, y_test)}')
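# Sketch (assumption: a quick check of the n_estimators idea from the prediction above)
# -- a larger forest to compare against the 25-tree model trained in this cell:
RF_more_trees = RandomForestClassifier(random_state=1, n_estimators=200).fit(x_train, y_train)
print(f'200-tree Testing Score: {RF_more_trees.score(x_test, y_test)}')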
###Output
Training Score: 0.9986863711001642
Testing Score: 0.6261165461505742
###Markdown
Model ComparisonAs expected, the random forest model did better on both the training and testing scores, though it seems to be overfitting.
###Code
# Scale the data
scaler = StandardScaler().fit(x_train)
x_train_scaled = scaler.transform(x_train)
x_test_scaled = scaler.transform(x_test)
###Output
_____no_output_____
###Markdown
PredictionSince the random forest did better in the initial test, I think it will again do better than the logistic regression model.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
scaled_classifier = LogisticRegression().fit(x_train_scaled, y_train)
print(f'Training Score: {scaled_classifier.score(x_train_scaled, y_train)}')
print(f'Testing Score: {scaled_classifier.score(x_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
scaled_RFclassifier = RandomForestClassifier(random_state=1, n_estimators=25).fit(x_train_scaled, y_train)
print(f'Training Score: {scaled_RFclassifier.score(x_train_scaled, y_train)}')
print(f'Testing Score: {scaled_RFclassifier.score(x_test_scaled, y_test)}')
###Output
Training Score: 0.9986863711001642
Testing Score: 0.6273925988940876
###Markdown
Train Dataframe (2019 Loans)
###Code
train_df
###Output
_____no_output_____
###Markdown
Get dummies and make new dataframe (Shorter route and works more efficiently for bigger data sets)
###Code
train_df_dropped = train_df.drop(columns=['loan_status','index','Unnamed: 0'])
train_loan_status_df = train_df['loan_status']
train_df_dropped.head()
print(train_df_dropped.dtypes)
# prints out dataframe that has object values (categorical values)
categorical_train_df_only = train_df_dropped.select_dtypes('object')
categorical_train_df_only
# EXAMPLE OF WHAT GET DUMMIES DOES
preview_get_dummies = categorical_train_df_only[['home_ownership','hardship_flag']]
preview_get_dummies
pd.get_dummies(preview_get_dummies)
# get the columns names that have 'object' values
columns_text = categorical_train_df_only.columns.values.tolist()
# train_df dataframe duplicated for a new train data frame
duplicate_data_train = train_df_dropped
# for loop where x will have a column name then it will drop a column with that specific name since we are replacing those columns with numeric values
for x in columns_text:
duplicate_data_train = duplicate_data_train.drop([x], axis=1)
# One column that should not be in the dataframe and it is an int column
# duplicate_data_train = duplicate_data_train.drop(['Unnamed: 0'], axis=1)
# dummies only with object values
df = pd.get_dummies(train_df[columns_text])
# merge the dropped columns dataframe and the dummies dataframe
main_train_data = pd.concat((duplicate_data_train, df), axis=1)
categorical_train_df_only = main_train_data.select_dtypes('object')
print(categorical_train_df_only)
main_train_data
###Output
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, ...]
[12180 rows x 0 columns]
###Markdown
Get dummies and make new dataframe (Longer route and only works if data is small)
###Code
# Convert categorical data to numeric and separate target feature for training data
home_ownership_data = pd.get_dummies(train_df.home_ownership)
verification_status_data = pd.get_dummies(train_df.verification_status)
# loan_status_data = pd.get_dummies(train_df.loan_status)
pymnt_plan_data = pd.get_dummies(train_df.pymnt_plan)
initial_list_status_data = pd.get_dummies(train_df.initial_list_status)
application_type_data = pd.get_dummies(train_df.application_type)
hardship_flag_data = pd.get_dummies(train_df.hardship_flag)
debt_settlement_flag_data = pd.get_dummies(train_df.debt_settlement_flag)
# drop columns in certain dataframe
new_train_df = train_df.drop(['verification_status','home_ownership','loan_status',
'pymnt_plan','initial_list_status','application_type','hardship_flag','debt_settlement_flag','Unnamed: 0','index'], axis=1)
# concat (add dataframe to dataframe)
new_data_train = pd.concat((new_train_df, verification_status_data, home_ownership_data, #loan_status_data,
pymnt_plan_data, initial_list_status_data,application_type_data,
hardship_flag_data, debt_settlement_flag_data), axis=1)
# Check if any column values are objects
categorical_train_df_only = new_data_train.select_dtypes('object')
print(categorical_train_df_only)
new_data_train
###Output
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, ...]
[12180 rows x 0 columns]
###Markdown
Test Dataframe (2020 Q1 Loans)
###Code
test_df
###Output
_____no_output_____
###Markdown
Get dummies and make new dataframe (Shorter route and works more efficiently for bigger data sets)
###Code
test_df_dropped = test_df.drop(columns=['loan_status','index','Unnamed: 0'])
test_loan_status_df = test_df['loan_status']
test_df_dropped.head()
print(test_df_dropped.dtypes)
# prints out dataframe that has object values (categorical values)
categorical_test_df_only = test_df_dropped.select_dtypes('object')
categorical_test_df_only
# get the columns names that have 'object' values
columns_text = categorical_test_df_only.columns.values.tolist()
# test_df dataframe duplicated for a new test data frame
duplicate_data_test = test_df_dropped
# for loop where x will have a column name then it will drop a column with that specific name since we are replacing those columns with numeric values
for x in columns_text:
duplicate_data_test = duplicate_data_test.drop([x], axis=1)
# One column that should not be in the dataframe and it is an int column
# duplicate_data_test = duplicate_data_test.drop(['Unnamed: 0'], axis=1)
# dummies only with object values
df = pd.get_dummies(test_df_dropped[columns_text])
# merge the dropped columns dataframe and the dummies dataframe
main_test_data = pd.concat((duplicate_data_test, df), axis=1)
categorical_test_df_only = main_test_data.select_dtypes('object')
print(categorical_test_df_only)
main_test_data
###Output
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, ...]
[4702 rows x 0 columns]
###Markdown
Train and Test Dataframe most efficient way (2019 Loans, 2020 Q1 Loans) Train Data (2019 Loans)
###Code
X_train = pd.get_dummies(train_df.drop(columns=['loan_status','index','Unnamed: 0']))
y_train = train_df['loan_status']
X_train
y_train
###Output
_____no_output_____
###Markdown
Test Data (2020 Q1 Loans)
###Code
X_test = pd.get_dummies(test_df.drop(columns=['loan_status','index','Unnamed: 0']))
y_test = test_df['loan_status']
X_test
y_test
###Output
_____no_output_____
###Markdown
LogisticRegression Model
###Code
print("X_train:", X_train.shape)
print("X_test", X_test.shape)
# add missing dummy variables to testing set
for col in X_train.columns:
if col not in X_test.columns:
X_test[col] = 0
X_train
X_test
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(solver='lbfgs', random_state=1)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# Train a Random Forest Classifier model
# and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
print("Scaler:", scaler)
print("- " * 10)
print("X_train Value before scaler.fit:\n", X_train)
print("- " * 10)
scaler.fit(X_train)
print("X_train Value after fit:\n", scaler.fit(X_train))
print("- " * 10)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print("X_train_scaled Value:\n", X_train_scaled)
print("- " * 10)
print("X_test_scaled Value:\n", X_test_scaled)
print("- " * 10)
# Train the Logistic Regression model on the scaled data and print the model score
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(solver='lbfgs')
clf.fit(X_train_scaled, y_train)
clf.score(X_test_scaled, y_test)
# Train a Random Forest Classifier model on the scaled data and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X_train_scaled, y_train)
clf.score(X_test_scaled, y_test)
###Output
_____no_output_____
###Markdown
Processing test data
###Code
# add missing dummy variables to testing set
X_test = pd.get_dummies(test_df)
y_test = X_test['loan_status_high_risk']
X_test = X_test.drop(["loan_status_low_risk","loan_status_high_risk"],axis=1)
X_test['debt_settlement_flag_Y'] = 0
X_test
set(X_train.columns) - set(X_test.columns)
###Output
_____no_output_____
###Markdown
Consider the models I think the Random Forest would perform better than logistic regression in this case, because the test dataset includes numerical values such as installment, annual_inc, etc. "Since Logistic Regression depends on a calculation based on ‘weights’, numerical encoding of categorical variables can lead the algorithm to treat certain categories are of higher importance compared to others, depending on the number assigned." -- https://medium.com/@bemali_61284/random-forest-vs-logistic-regression-16c0c8e2484c. On the other hand, "by randomly selecting subsets of features, some trees of the forest can isolate more important features while increasing the overall accuracy of the result".-- https://medium.com/@bemali_61284/random-forest-vs-logistic-regression-16c0c8e2484c. Fit a Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
classifier_lib = LogisticRegression(solver='liblinear', max_iter=10000)
classifier_lib
# Train the Logistic Regression model on the unscaled data and print the model score
classifier_lib.fit(X_train, y_train)
print(f"Training Data Score: {classifier_lib.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier_lib.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6165461505742237
###Markdown
Guess: I don't think a big variation will be detected between the models.
###Code
# add missing dummy variables to testing set
final_df = pd.concat([train,test])
#Load dataset
df1 = final_df.drop("loan_status_high_risk", axis=1)
X = pd.get_dummies(df1, dtype=float)
y = final_df['loan_status_high_risk']
# Train the Logistic Regression model on the unscaled data and print the model score
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
from sklearn.ensemble import RandomForestRegressor
classifier = RandomForestRegressor()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train the Logistic Regression model on the scaled data and print the model score
def test_model(model, data):
reg = model.fit(X_train_scaled, y_train)
print(f'Model: {type(reg).__name__}')
print(f'Train score: {reg.score(X_train_scaled, y_train)}')
print(f'Test Score: {reg.score(X_test_scaled, y_test)}\n')
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
data2 = [X_train_scaled, X_test_scaled, y_train, y_test]
test_model(LogisticRegression(), data2)
# Train a Random Forest Classifier model on the scaled data and print the model score
test_model(RandomForestRegressor(), data2)
# Looks like the scaled data provided highly divisible data. Compared to the unscaled data, this seems to match my
# predictions in that there is no differentiation between the results of the two models used.
###Output
_____no_output_____
###Markdown
Matching Features: After my initial run, I found that the data had a mismatch in the columns/features. I added a column to the test data to correct this error.
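A more general way to line the test columns up with the training columns is to reindex the test frame against the training columns. This is a sketch assuming X_train and X_test are the dummy-encoded frames from this step:

```python
# Any dummy column absent from the test set is created and filled with 0,
# and any column the training set never saw is dropped.
X_test = X_test.reindex(columns=X_train.columns, fill_value=0)
```

This covers every mismatched column at once instead of patching them one by one.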
###Code
train_cols = X_train.columns
test_cols = X_test.columns
common_cols = train_cols.intersection(test_cols)
train_not_test = train_cols.difference(test_cols)
print(train_not_test)
#identify replacement value
X_train['debt_settlement_flag_Y'].mode()
#add column to the test data
X_test['debt_settlement_flag_Y']=0
X_test.head()
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
# Train the Logistic Regression model on the unscaled data
from sklearn.linear_model import LogisticRegression
lr_classifier = LogisticRegression(solver= 'lbfgs', max_iter=200)
lr_classifier.fit(X_train, y_train)
#print the model score
print(f"Training Data Score: {lr_classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {lr_classifier.score(X_test, y_test)}")
###Output
Training Data Score: 0.6588669950738916
Testing Data Score: 0.5174393874946831
###Markdown
Random Forest
###Code
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
rf_classifier = RandomForestClassifier(random_state=42, n_estimators=50)
rf_classifier.fit(X_train, y_train)
#print the model score
print(f"Training Data Score: {rf_classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {rf_classifier.score(X_test, y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.6312207571246278
###Markdown
Based on the testing results, it appears that the Random Forest Classifier was the more successful model.
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
lrs_classifier = LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {lrs_classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {lrs_classifier.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
rfs_classifier = RandomForestClassifier(random_state=42, n_estimators=50).fit(X_train_scaled, y_train)
print(f'Training Score: {rfs_classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {rfs_classifier.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6295193534666099
###Markdown
EDA Exploratory data analysis
###Code
train_df
train_df.shape
train_df.info()
#Examining Loan Status
train_df['loan_status']
train_df['loan_status'].value_counts()
train_df['loan_status']
test_df
#Examining Loan Status
test_df['loan_status']
train_df['loan_status']
###Output
_____no_output_____
###Markdown
Preprocess
###Code
# Convert categorical data to numeric and separate target feature for training data
train_dum = pd.get_dummies(train_df)
# Convert categorical data to numeric and separate target feature for testing data
test_dum = pd.get_dummies(test_df)
ctest = test_dum.columns
ctrain = train_dum.columns
###Output
_____no_output_____
###Markdown
Print ctest and ctrain
###Code
print(len(ctest))
len(ctrain)
for c in ctrain:
if c not in ctest:
print(c)
y_train.value_counts()
# add missing dummy variables to testing set
test_dum['debt_settlement_flag_Y'] = 0
###Output
_____no_output_____
###Markdown
Print ctest
###Code
ctest
X = train_dum.drop(['loan_status_high_risk', 'loan_status_low_risk'], axis=1)
y = train_dum[['loan_status_high_risk', 'loan_status_low_risk']]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
X20 = test_dum.drop(['loan_status_high_risk', 'loan_status_low_risk'], axis=1)
y20 = test_dum[['loan_status_high_risk', 'loan_status_low_risk']]
X20_train, X20_test, y20_train, y20_test = train_test_split(X20, y20, random_state=2)
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
training_score = model.score(X_train, y_train)
print(f"Training Score: {training_score}")
y_pred = model.predict(X_test).sum(axis=1)
testing_score = model.score(X_test, y_test)
print(f"Testing Score: {testing_score}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
modelscale = LinearRegression()
modelscale.fit(X_train_scaled, y_train)
training_score_scaled = modelscale.score(X_train_scaled, y_train)
testing_score_scaled = modelscale.score(X_test_scaled, y_test)
print(f"Training Score: {training_score_scaled}")
print(f"Testing Score: {testing_score_scaled}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf_scaled = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf_scaled.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf_scaled.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.8003284072249589
###Markdown
Dependencies
###Code
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.preprocessing import LabelEncoder
###Output
_____no_output_____
###Markdown
Load Data
###Code
train_df = pd.read_csv('./Resources/2019loans.csv')
test_df = pd.read_csv('./Resources/2020Q1loans.csv')
###Output
_____no_output_____
###Markdown
Checking the head
###Code
train_df.head()
test_df.head()
###Output
_____no_output_____
###Markdown
Which model will perform better? Since there are so many features and many categorical columns, I think the RandomForestClassifier model will perform better than the LogisticRegression model.
###Code
# Viewing Columns - training data
train_df.columns
# Viewing Columns - testing data
test_df.columns
###Output
_____no_output_____
###Markdown
Preprocessing Training Data Set
###Code
# Seperating X training data
X = train_df[['loan_amnt', 'int_rate', 'installment', 'home_ownership', 'annual_inc',
'verification_status', 'pymnt_plan', 'dti', 'delinq_2yrs',
'inq_last_6mths', 'open_acc', 'pub_rec', 'revol_bal', 'total_acc',
'initial_list_status', 'out_prncp', 'out_prncp_inv', 'total_pymnt',
'total_pymnt_inv', 'total_rec_prncp', 'total_rec_int',
'total_rec_late_fee', 'recoveries', 'collection_recovery_fee',
'last_pymnt_amnt', 'collections_12_mths_ex_med', 'policy_code',
'application_type', 'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal',
'open_acc_6m', 'open_act_il', 'open_il_12m', 'open_il_24m',
'mths_since_rcnt_il', 'total_bal_il', 'il_util', 'open_rv_12m',
'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim', 'inq_fi',
'total_cu_tl', 'inq_last_12m', 'acc_open_past_24mths', 'avg_cur_bal',
'bc_open_to_buy', 'bc_util', 'chargeoff_within_12_mths', 'delinq_amnt',
'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op',
'mo_sin_rcnt_tl', 'mort_acc', 'mths_since_recent_bc',
'mths_since_recent_inq', 'num_accts_ever_120_pd', 'num_actv_bc_tl',
'num_actv_rev_tl', 'num_bc_sats', 'num_bc_tl', 'num_il_tl',
'num_op_rev_tl', 'num_rev_accts', 'num_rev_tl_bal_gt_0', 'num_sats',
'num_tl_120dpd_2m', 'num_tl_30dpd', 'num_tl_90g_dpd_24m',
'num_tl_op_past_12m', 'pct_tl_nvr_dlq', 'percent_bc_gt_75',
'pub_rec_bankruptcies', 'tax_liens', 'tot_hi_cred_lim',
'total_bal_ex_mort', 'total_bc_limit', 'total_il_high_credit_limit',
'hardship_flag', 'debt_settlement_flag']]
# Seperating y training data
y = train_df[['target']]
y
# Convert categorical data to numeric
X_dummies = pd.get_dummies(X)
# Converting output labels to 0 and 1
y_label = LabelEncoder().fit_transform(y['target'])
y_label
# Checking X_dummies
X_dummies
X_dummies.columns
###Output
_____no_output_____
###Markdown
Testing Data Set
###Code
# Removing target feature from testing data
X_test = test_df[['loan_amnt', 'int_rate', 'installment', 'home_ownership', 'annual_inc',
'verification_status', 'pymnt_plan', 'dti', 'delinq_2yrs',
'inq_last_6mths', 'open_acc', 'pub_rec', 'revol_bal', 'total_acc',
'initial_list_status', 'out_prncp', 'out_prncp_inv', 'total_pymnt',
'total_pymnt_inv', 'total_rec_prncp', 'total_rec_int',
'total_rec_late_fee', 'recoveries', 'collection_recovery_fee',
'last_pymnt_amnt', 'collections_12_mths_ex_med', 'policy_code',
'application_type', 'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal',
'open_acc_6m', 'open_act_il', 'open_il_12m', 'open_il_24m',
'mths_since_rcnt_il', 'total_bal_il', 'il_util', 'open_rv_12m',
'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim', 'inq_fi',
'total_cu_tl', 'inq_last_12m', 'acc_open_past_24mths', 'avg_cur_bal',
'bc_open_to_buy', 'bc_util', 'chargeoff_within_12_mths', 'delinq_amnt',
'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op',
'mo_sin_rcnt_tl', 'mort_acc', 'mths_since_recent_bc',
'mths_since_recent_inq', 'num_accts_ever_120_pd', 'num_actv_bc_tl',
'num_actv_rev_tl', 'num_bc_sats', 'num_bc_tl', 'num_il_tl',
'num_op_rev_tl', 'num_rev_accts', 'num_rev_tl_bal_gt_0', 'num_sats',
'num_tl_120dpd_2m', 'num_tl_30dpd', 'num_tl_90g_dpd_24m',
'num_tl_op_past_12m', 'pct_tl_nvr_dlq', 'percent_bc_gt_75',
'pub_rec_bankruptcies', 'tax_liens', 'tot_hi_cred_lim',
'total_bal_ex_mort', 'total_bc_limit', 'total_il_high_credit_limit',
'hardship_flag', 'debt_settlement_flag']]
# Convert categorical data to numeric
X_test = pd.get_dummies(X_test)
X_test
# Separate target feature for testing data
y_test = test_df[['target']]
test_y_label = LabelEncoder().fit_transform(y_test['target'])
test_y_label
#Finding missing columns
missing = set(X_dummies)-set(X_test)
missing
# add missing dummy variables to testing set
X_test['debt_settlement_flag_Y'] = 0
X_test
###Output
_____no_output_____
###Markdown
LogisticRegression Model
###Code
# X and y shape
print("Shape: ", X_dummies.shape, y_label.shape)
###Output
Shape: (12180, 92) (12180,)
###Markdown
Train the Logistic Regression model on the unscaled data and print the model score
###Code
# Create a logistic regression model
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
# Fit/train the model to the data
classifier.fit(X_dummies, y_label)
# Print the accuracy score for the test data (Validate)
print(f"Training Data Score: {classifier.score(X_dummies, y_label)}")
print(f"Testing Data Score: {classifier.score(X_test, test_y_label)}")
# Make predictions by using the X_test and y_test data
pd.DataFrame({
"actual": list(test_y_label),
"predicted": list(classifier.predict(X_test))
})
# Create a confusion matrix on the test data
# Use the true values from test_y_label and the output of the model with X_test as the input
from sklearn.metrics import confusion_matrix
y_true = test_y_label
y_pred = classifier.predict(X_test)
confusion_matrix(y_true, y_pred)
###Output
_____no_output_____
###Markdown
Train a Random Forest Classifier model and print the model score
###Code
# Import a Random Forests classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
# Fit a model, and then print a classification report
clf = RandomForestClassifier(random_state=1).fit(X_dummies, y_label)
y_pred_2 = clf.predict(X_test)
print(classification_report(test_y_label, y_pred_2, target_names=['low_risk','high_risk']))
print(f'Training Score: {clf.score(X_dummies, y_label)}')
print(f'Testing Score: {clf.score(X_test, test_y_label)}')
###Output
precision recall f1-score support
low_risk 0.60 0.82 0.69 2351
high_risk 0.72 0.46 0.56 2351
accuracy 0.64 4702
macro avg 0.66 0.64 0.63 4702
weighted avg 0.66 0.64 0.63 4702
Training Score: 1.0
Testing Score: 0.638664398128456
###Markdown
Outcome: As I expected, the Random Forest Classifier model performed better than the Logistic Regression model. The Random Forest Classifier model received a testing score of 0.64 and the Logistic Regression model received a testing score of 0.51. Scale the data Prediction 2: I think both models are going to perform better since the StandardScaler will shift and scale all features to remove biases. Since we are scaling the data to have a mean of 0 and a variance of 1, I still think the Random Forest Classifier model will outperform the Logistic Regression model.
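As a quick check of what StandardScaler actually does (a toy sketch with a made-up single column, not the loan features), each column is shifted by its mean and divided by its standard deviation, so the scaled column ends up with mean 0 and variance 1:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical feature with a much larger magnitude than the others
values = np.array([[1000.0], [2000.0], [3000.0], [4000.0]])

scaled = StandardScaler().fit_transform(values)
print(scaled.ravel())                 # roughly [-1.34, -0.45, 0.45, 1.34]
print(scaled.mean(), scaled.std())    # ~0.0 and 1.0

# Same result by hand: z = (x - mean) / std
manual = (values - values.mean()) / values.std()
print(np.allclose(scaled, manual))    # True
```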
###Code
# Scale the data with StandardScaler()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_dummies)
X_train_scaled = scaler.transform(X_dummies)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model on the scaled data and print the model score
###Code
# Create a logistic regression model
classifier2 = LogisticRegression()
classifier2
# Fit/train the model to the data
classifier.fit(X_train_scaled, y_label)
# Print the accuracy score for the test data (Validate)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_label)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, test_y_label)}")
# Make predictions by using the X_test and y_test data
pd.DataFrame({
"actual": list(test_y_label),
"predicted": list(classifier.predict(X_test_scaled))
})
# Create a confusion matrix on the test data
y_true = test_y_label
y_pred = classifier.predict(X_test_scaled)
confusion_matrix(y_true, y_pred)
###Output
_____no_output_____
###Markdown
Train a Random Forest Classifier model on the scaled data and print the model score
###Code
# Fit a model, and then print a classification report
clf = RandomForestClassifier(random_state=1).fit(X_train_scaled, y_label)
y_pred_2 = clf.predict(X_test_scaled)
print(classification_report(test_y_label, y_pred_2, target_names=['low_risk','high_risk']))
print(f'Training Score: {clf.score(X_train_scaled, y_label)}')
print(f'Testing Score: {clf.score(X_test_scaled, test_y_label)}')
###Output
precision recall f1-score support
low_risk 0.60 0.82 0.69 2351
high_risk 0.72 0.45 0.56 2351
accuracy 0.64 4702
macro avg 0.66 0.64 0.63 4702
weighted avg 0.66 0.64 0.63 4702
Training Score: 1.0
Testing Score: 0.6378136962994471
###Markdown
Predictions:- I think the Random Forest Model will be the better model because it is utilizing a bunch of random samples and averaging the results together. This allows it to better account for the 'noise' within the data. The Logistic Regression Model assumes there is a linear relationship, which may not yield accurate predictions.
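A minimal sketch of the 'random samples averaged together' idea, using synthetic data from make_classification rather than the loan data: a single decision tree tends to memorize its training split, while a forest grown on bootstrap samples of the rows (and random subsets of the features) usually generalizes somewhat better on noisy data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, deliberately noisy binary classification problem (stand-in data)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

tree = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# The forest averages many trees, which smooths out single-tree overfitting
print(f"Single tree test score:   {tree.score(X_te, y_te):.3f}")
print(f"Random forest test score: {forest.score(X_te, y_te):.3f}")
```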
###Code
# Convert categorical data to numeric and separate target feature for training data
X_train_df= train_df.drop(['loan_status', 'Unnamed: 0'], axis=1)
X_train_df.head()
X_train= pd.get_dummies(X_train_df)
X_train.head()
y_train= train_df['loan_status']
y_train.head()
# Convert categorical data to numeric and separate target feature for testing data
X_test_df= test_df.drop(['loan_status', 'Unnamed: 0'], axis=1)
X_test_df.head()
X_test= pd.get_dummies(X_test_df)
X_test.head()
y_test= test_df['loan_status']
y_test.head()
# add missing dummy variables to testing set
for column in X_train:
if column not in X_test:
X_test[column] = 0
print(column)
X_test.head()
# X_test['debt_settlement_flag_Y'] = 0
for index, value in enumerate(X_test['debt_settlement_flag_N']):
if value == 0:
X_test.at[index, 'debt_settlement_flag_Y'] = 1
else:
pass
X_test.head()
# Train the Logistic Regression model on the unscaled data and print the model score
clf = LogisticRegression()
clf.fit(X_train, y_train)
print(f"Training Data Score: {clf.score(X_train, y_train)}")
print(f"Testing Data Score: {clf.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
rfc = RandomForestClassifier(random_state=1, n_estimators=500)
rfc.fit(X_train, y_train)
print(f' Random Forest Training Score: {rfc.score(X_train, y_train)}')
print(f' Random Forest Testing Score: {rfc.score(X_test, y_test)}')
###Output
Random Forest Training Score: 1.0
Random Forest Testing Score: 0.6631220757124627
###Markdown
Results 1:- The Random Forest model scored higher for both the train and test data
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled=scaler.transform(X_train)
X_train_scaled
X_test_scaled=scaler.transform(X_test)
X_test_scaled
# Train the Logistic Regression model on the scaled data and print the model score
clf_scaled = LogisticRegression()
clf_scaled.fit(X_train_scaled, y_train)
print(f' Scaled Logistic Regression Score: {clf_scaled.score(X_train_scaled, y_train)}')
print(f' Scaled Logistic Regression Score: {clf_scaled.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
rfc_scaled = RandomForestClassifier(random_state=1, n_estimators=500)
rfc_scaled.fit(X_train_scaled, y_train)
print(f' Scaled Random Forest Train Score: {rfc_scaled.score(X_train_scaled, y_train)}')
print(f' Scaled Random Forest Test Score: {rfc_scaled.score(X_test_scaled, y_test)}')
###Output
Scaled Random Forest Train Score: 1.0
Scaled Random Forest Test Score: 0.6635474266269672
###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
# Convert categorical data to numeric and separate target feature for training data
columns = [
"loan_amnt", "int_rate", "installment", #"home_ownership",
#"annual_inc", "verification_status", "issue_d",
"loan_status",
#"pymnt_plan",
"dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
#"initial_list_status",
"out_prncp", #"out_prncp_inv",
"total_pymnt", #"total_pymnt_inv",
"total_rec_prncp", #"total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", #"next_pymnt_d",
"collections_12_mths_ex_med", #"policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit"#, "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Create our features
X = pd.get_dummies(train_df[columns].drop(columns=target))
# Convert categorical data to numeric and separate target feature for testing data
x = {'Current': 'low_risk'}
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.datasets import make_blobs
X, y = make_blobs(centers=2, random_state=42)
print(f"Labels: {y[:10]}")
print(f"Data: {X[:10]}")
X, y = make_blobs(centers=3, random_state=42)
# Split the X and y into X_train, X_test, y_train, y_test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Train a Random Forest Classifier model and print the model score
importances = list(zip(columns))
importances.sort(reverse=True)
importances
# Split the X and y into X_train, X_test, y_train, y_test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Resample the training data with the RandomOversampler
# Train the Classifier
from collections import Counter
from sklearn.datasets import make_classification
X, y = make_classification(n_classes=2, class_sep=2,
weights=[0.1, 0.9], n_informative=3, n_redundant=1, flip_y=0,
n_features=20, n_clusters_per_class=1, n_samples=1000, random_state=10)
print('Original dataset shape %s' % Counter(y))
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=0)
Counter(y_train)
# Scale your data
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
X_train_scaled
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
print(f"Training Data Score: {model.score(X_train, y_train)}")
print(f"Testing Data Score: {model.score(X_test, y_test)}")
###Output
Training Data Score: 0.9986666666666667
Testing Data Score: 1.0
###Markdown
**Prediction**: I expect the logistic regression to be more accurate than the random forest classifier; logistic regression also tends to perform better with numerical data.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f'Training Score: {classifier.score(X_train, y_train)}')
print(f'Testing Score: {classifier.score(X_test, y_test)}')
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500)
clf.fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6180348787749894
###Markdown
**Results**: It appears that the Random Forest Classifier model has given a better training score than the logistic regression. My prediction was wrong!
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
**Prediction**: I expect the logistic regression prediction to improve further after scaling, as scaling will normalise the features in the dataset.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train)
print(f'Training Score: {classifier.score(X_train_scaled, y_train)}')
print(f'Testing Score: {classifier.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500)
clf.fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6193109315185028
###Markdown
PERFORMANCE PREDICTION- Of the two models being used for this exercise, my prediction is that the RandomForestClassifier will outperform the LogisticRegression model. My reasoning for this is twofold: first, the data contains 84 different variables (before preprocessing), which makes it likely that some of the variables are noisy rather than predictive. In general, logistic regression performs better with less noise in the data. Second, many of the variables are qualitative, and logistic regression is not as accurate when the variables are qualitative rather than quantitative. These are just broad guidelines, so ultimately we must train and test both models to know which one is more accurate.
###Code
train_df.head()
test_df.head()
# Convert categorical data to numeric and separate target feature for training data
y_train = train_df["target"]
X_train = train_df.drop(columns = ["target"])
X_train = pd.get_dummies(X_train)
X_train.head()
# Convert categorical data to numeric and separate target feature for testing data
y_test = test_df["target"]
X_test = test_df.drop(columns = ["target"])
X_test = pd.get_dummies(X_test)
X_test.head()
# add missing dummy variables to testing set
for i in X_train.columns:
if i not in X_test.columns:
X_test[i] = 0
X_test.head()
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
X_train, X_test, y_train, y_test = train_test_split(X_train, y_train, random_state=42)
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.7908045977011494
###Markdown
RESULTS FOR UNSCALED MODELS
Logistic Regression: Training Data Score: 0.6532840722495895, Testing Data Score: 0.5093577201190983
Random Forest: Training Score: 1.0, Testing Score: 0.7908045977011494
The comparatively higher testing score tells us that RandomForestClassifier was more accurate for the unscaled data. A closer look at the Random Forest result also shows us that the model performed much better on the training data set, but was less accurate predicting the testing data. This indicates the Random Forest model is overfitting on the training data.
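One hedged way to rein in that overfitting, sketched here under the assumption that the X_train/X_test/y_train/y_test split created above is still in scope, is to cap the depth of each tree so the forest cannot memorize the training rows outright, and to watch how the train/test gap changes:

```python
from sklearn.ensemble import RandomForestClassifier

# The max_depth values are illustrative, not tuned for this dataset
for depth in (5, 10, 20, None):
    capped = RandomForestClassifier(random_state=1, n_estimators=100,
                                    max_depth=depth).fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={capped.score(X_train, y_train):.3f}, "
          f"test={capped.score(X_test, y_test):.3f}")
```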
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
X_train_scaled, X_test_scaled, y_train, y_test = train_test_split(X_train_scaled, y_train, random_state=42)
clf_scaled = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf_scaled.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf_scaled.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.792907180385289
###Markdown
After scaling the data, the testing score of the logistic model improved, while the testing score for the RandomForestClassifier decreased slightly. The RandomForestClassifier maintains a slight advantage in accuracy over the LogisticRegression model, though overfitting remains an issue that must be addressed. Identifying important features in the data set and removing variables that are too noisy may improve the performance of the model.
###Code
#Import important features dependency
feature_importances = clf.feature_importances_
#Identify the important features
%matplotlib inline
from matplotlib import pyplot as plt
features = sorted(zip(X_test.columns, clf.feature_importances_), key = lambda x: x[1])
cols = [f[0] for f in features]
width = [f[1] for f in features]
fig, ax = plt.subplots()
fig.set_size_inches(10,200)
plt.margins(y=0.001)
ax.barh(y=cols, width=width)
plt.show()
###Output
_____no_output_____
###Markdown
There does appear to be some noise with some of the variables. Not all of them have predictive importance. We will remove the unimportant variables to see if the models improve.
###Code
#Select important features from training data
from sklearn.feature_selection import SelectFromModel
sel = SelectFromModel(clf)
sel.fit(X_train_scaled, y_train)
#Train, test, split the selected features
X_selected_train, X_selected_test, y_train, y_test = train_test_split(sel.transform(X_train_scaled), y_train, random_state=1)
scaler = StandardScaler().fit(X_selected_train)
X_selected_train_scaled = scaler.transform(X_selected_train)
X_selected_test_scaled = scaler.transform(X_selected_test)
#Run logistic regression on selected features
clf = LogisticRegression()
clf.fit(X_selected_train_scaled, y_train)
print(f'Training Score: {clf.score(X_selected_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_selected_test_scaled, y_test)}')
#Run RandomForestClassifier on selected features
X_selected_train_scaled, X_selected_test_scaled, y_train, y_test = train_test_split(X_selected_train_scaled, y_train, random_state=42)
clf_scaled = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_selected_train_scaled, y_train)
print(f'Training Score: {clf_scaled.score(X_selected_train_scaled, y_train)}')
print(f'Testing Score: {clf_scaled.score(X_selected_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.7976653696498055
###Markdown
Train the Logistic Regression model on the unscaled data and print the model score
###Code
classifier = LogisticRegression(solver = "lbfgs", random_state = 1)
classifier
print("Shape: ", X_train.shape, y_train.shape)
print("Shape: ", X_test.shape, y_test.shape)
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Prediction Vs actual
predictions = classifier.predict(X_test)
# pd.DataFrame({"Prediction": predictions, "Actual": y_test})
predictions
# Calculate the classification report
print(classification_report(y_test, predictions, target_names= ["Low Risk", "High Risk"]))
###Output
precision recall f1-score support
Low Risk 0.52 0.30 0.38 2351
High Risk 0.51 0.72 0.60 2351
accuracy 0.51 4702
macro avg 0.51 0.51 0.49 4702
weighted avg 0.51 0.51 0.49 4702
###Markdown
Train a Random Forest Classifier model on unscaled and print the model score
###Code
# Fit a model, and then print a classification report
clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred, target_names=["Low Risk", "High Risk"]))
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
precision recall f1-score support
Low Risk 0.60 0.82 0.69 2351
High Risk 0.72 0.46 0.56 2351
accuracy 0.64 4702
macro avg 0.66 0.64 0.63 4702
weighted avg 0.66 0.64 0.63 4702
Training Score: 1.0
Testing Score: 0.638664398128456
###Markdown
Scale the data
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model on the scaled data and print the model score
###Code
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train)
print(f"Scaled Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Scaled Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Prediction Vs actual
predictions = classifier.predict(X_test_scaled)
# pd.DataFrame({"Prediction": predictions, "Actual": y_test})
print(classification_report(y_test, predictions,target_names=["Low Risk", "High Risk"]))
###Output
precision recall f1-score support
Low Risk 0.76 0.75 0.76 2351
High Risk 0.76 0.77 0.76 2351
accuracy 0.76 4702
macro avg 0.76 0.76 0.76 4702
weighted avg 0.76 0.76 0.76 4702
###Markdown
Train a Random Forest Classifier model on the scaled data and print the model score
###Code
clf = RandomForestClassifier(random_state=1).fit(X_train_scaled, y_train)
y_pred = clf.predict(X_test_scaled)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
print(classification_report(y_test, y_pred, target_names=["Low Risk", "High Risk"]))
###Output
precision recall f1-score support
Low Risk 0.60 0.82 0.69 2351
High Risk 0.72 0.45 0.56 2351
accuracy 0.64 4702
macro avg 0.66 0.64 0.63 4702
weighted avg 0.66 0.64 0.63 4702
###Markdown
Seperate TARGET feature 'loan status' for training and test set
###Code
y_train_list= train_df["loan_status"]
y_test_list= test_df["loan_status"]
# y_train_list
y_test = pd.get_dummies(y_test_list, drop_first = True)
# print(y_dummies_test.columns)
# y_dummies_test
y_train = pd.get_dummies(y_train_list, drop_first = True)
# print(y_dummies_train.columns)
# y_dummies_train
X_train.describe()
y_test.describe()
###Output
_____no_output_____
###Markdown
Drop columns that are perfectly correlated to features as they do not contribute to model performance and may even skew performance providing less accurate model results
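A hedged sketch of how such columns could be spotted programmatically, assuming train_df is the raw training frame loaded above (the 0.999 threshold is illustrative):

```python
# Columns with a single unique value carry no information for the model
constant_cols = [c for c in train_df.columns if train_df[c].nunique() <= 1]
print("Constant columns:", constant_cols)

# Pairs of numeric columns that are (almost) perfectly correlated with each other
corr = train_df.select_dtypes("number").corr().abs()
redundant_pairs = [(a, b) for a in corr.columns for b in corr.columns
                   if a < b and corr.loc[a, b] > 0.999]
print("Highly correlated pairs:", redundant_pairs)
```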
###Code
train_df.drop(columns=["Unnamed: 0", "index", "pymnt_plan", "recoveries", "collection_recovery_fee", "policy_code", "num_tl_120dpd_2m", "tax_liens", "loan_status","debt_settlement_flag"], inplace=True)
test_df.drop(columns=["Unnamed: 0", "index", "pymnt_plan", "recoveries", "collection_recovery_fee", "policy_code", "num_tl_120dpd_2m", "tax_liens", "loan_status","debt_settlement_flag"], inplace=True)
###Output
_____no_output_____
###Markdown
Assign numerical values for categorical data using get_dummies Convert categorical data to numeric and separate target feature for testing data
###Code
# One-hot encoding the entire dataframe
X_train = pd.get_dummies(train_df, drop_first = True)
# print(X_dummies_train.columns)
# X_dummies_train
# One-hot encoding the entire dataframe
X_test = pd.get_dummies(test_df, drop_first = True)
# print(X_dummies_test.columns)
# X_dummies_test
###Output
_____no_output_____
###Markdown
Consider the models You will be creating and comparing two models on this data: a logistic regression, and a random forests classifier. Before you create, fit, and score the models, make a prediction as to which model you think will perform better. You do not need to be correct! Write down (in markdown cells in your Jupyter Notebook or in a separate document) your prediction, and provide justification for your educated guess. PREDICTION: Random Forest will perform better because logistic regression requires a roughly linear relationship in the data.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
from sklearn.metrics import confusion_matrix
y_true = y_test
y_pred = classifier.predict(X_test)
confusion_matrix(y_true, y_pred)
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
<ipython-input-237-5961d42015b7>:3: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
###Markdown
Which model performed better? How does that compare to your prediction? Write down your results and thoughts. It is difficult to say which model performed better. Although the Random Forest Classifier model had a higher testing score, its training score was much higher than its testing score, indicating over-fitting. The Logistic Regression model score was lower, but its training and testing scores were closer together. I don't believe either model performed well and would continue to test other models and methodologies to find a better performing model. Scale the data Consider the models PREDICTION: Similar to the unscaled data, I believe Random Forest will perform better because logistic regression requires a roughly linear relationship in the data.
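One way to make that comparison less dependent on a single train/test split, sketched under the assumption that the X_train and y_train frames prepared above are still in scope, is to cross-validate both models and compare their mean held-out accuracy:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# ravel() flattens the one-column target into the 1-D label array scikit-learn expects
labels = y_train.values.ravel()
for name, model in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                    ("Random Forest", RandomForestClassifier(random_state=1))]:
    scores = cross_val_score(model, X_train, labels, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```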
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
from sklearn.metrics import confusion_matrix
y_true = y_test
y_pred = classifier.predict(X_test)
confusion_matrix(y_true, y_pred)
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
<ipython-input-240-0c2c02fcce09>:2: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
###Markdown
My prediction is that the random forest will give better results, as it is better suited to categorical data and uses many decision trees to improve accuracy.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_val, y_val)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_val, y_val)}')
classifier2 = LogisticRegression()
classifier2.fit(X_test_new, y_test)
print(f"Testing Data Score: {classifier2.score(X_test_new, y_test)}")
clf2 = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_test_new, y_test)
print(f'Testing Score: {clf2.score(X_test_new, y_test)}')
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_val)
# Train the Logistic Regression model on the scaled data and print the model score
scale = LogisticRegression()
scale.fit(X_train_scaled, y_train)
print(f"Training Data Score: {scale.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {scale.score(X_test_scaled, y_val)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clfscale = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clfscale.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clfscale.score(X_test_scaled, y_val)}')
scaler2 = StandardScaler().fit(X_test_new)
X2_test_scaled = scaler.transform(X_test_new)
scale2 = LogisticRegression()
scale2.fit(X2_test_scaled, y_test)
print(f"Testing Data Score: {scale2.score(X2_test_scaled, y_test)}")
clf2scale = RandomForestClassifier(random_state=1, n_estimators=500).fit(X2_test_scaled, y_test)
print(f'Testing Score: {clf2scale.score(X2_test_scaled, y_test)}')
###Output
Testing Score: 1.0
###Markdown
Convert categorical data to numeric and separate target feature for training data
###Code
# Convert categorical data to numeric and separate target feature for training data
train_df.head()
X_dummies = train_df.drop(['target'], axis=1)
X_train= pd.get_dummies(X_dummies)
X_train.head()
y_train = train_df['target']
y_train
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for testing data
###Code
# Convert categorical data to numeric and separate target feature for testing data
test_df.head()
X_dummies2 = test_df.drop(['target'], axis=1)
X_test= pd.get_dummies(X_dummies2)
X_test.head()
y_test = test_df['target']
y_test
# add missing dummy variables to testing set
for col in X_train.columns:
if col not in X_test.columns:
X_test[col] = 0
#Check if the sets are the same
print(X_train.shape)
print(X_test.shape)
###Output
(12180, 92)
(4702, 92)
###Markdown
Train the Logistic Regression model on the unscaled data and print the model score My thought is that this regression model will not predict the two datasets equally well. Logistic regression only provides a binary classification, and in this case we are working with data from one year to another, which varies.
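As a minimal sketch of what that binary output means (toy numbers, not this model's actual coefficients), logistic regression pushes a linear score z through the logistic function, which maps any value into a probability between 0 and 1; the predicted class is whichever side of 0.5 that probability falls on.

```python
import numpy as np

def sigmoid(z):
    """Logistic function: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear scores from a fitted model
scores = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
probabilities = sigmoid(scores)
print(probabilities)                          # ~[0.047 0.378 0.5 0.622 0.953]
print((probabilities >= 0.5).astype(int))     # predicted classes: [0 0 1 1 1]
```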
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver='lbfgs', max_iter=14000)
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
predictions = classifier.predict(X_test)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
###Output
_____no_output_____
###Markdown
Train a Random Forest Classifier model and print the model score Using the Random Forest Classifier model, I predict that the model will overfit, since of the two csv files one contains more data than the other.
###Code
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.646958740961293
###Markdown
Scale the data
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Before running the models as done before, scaling the data should produce a more accurate prediction than before. Train the Logistic Regression model on the scaled data and print the model score
###Code
print(f"Training scaled Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing scaled Data Score: {classifier.score(X_test_scaled, y_test)}")
###Output
Training scaled Data Score: 0.5671592775041051
Testing scaled Data Score: 0.5642279880901744
###Markdown
Train a Random Forest Classifier model on the scaled data and print the model score
###Code
print(f'Training scaled Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing scaled Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training scaled Score: 0.5
Testing scaled Score: 0.5
###Markdown
The RandomForestClassifier model accuracy score is 63%, whereas the LogisticRegression model accuracy score is only 50%. So the RandomForestClassifier model is more accurate.
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler, MinMaxScaler, LabelEncoder
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier2 = LogisticRegression()
classifier2.fit(X_train_scaled, y_train)
classifier2.score(X_test_scaled, y_test)
#Logistic Regression model score on the scaled data
print(f'Training Score: {classifier2.score(X_train_scaled,y_train)}')
print(f'Testing Score: {classifier2.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled,y_train)
#Random Forest Classifier model score on the scaled data
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
# The LogisticRegression accuracy increases from 50% on the unscaled data to 76% on the scaled data.
# The RandomForestClassifier accuracy doesn't show much change, from 63% on the unscaled data to 64% on the scaled data.
# This shows that LogisticRegression can be sensitive to how the data is scaled, while random forests are not.
###Output
_____no_output_____
###Markdown
Preprocessing: Convert categorical data to numeric
###Code
# Convert categorical data to numeric and separate target feature for training data
X_train = pd.get_dummies(train_df.drop(columns=['loan_status']))
y_train = train_df['loan_status']
X_train.head()
y_train.head()
# Convert categorical data to numeric and separate target feature for testing data
X_test = pd.get_dummies(test_df.drop(columns=['loan_status']))
y_test = test_df['loan_status']
X_test.head()
y_test.head()
# add missing dummy variables to testing set
for i in X_train.columns:
if i not in X_test.columns:
X_test[i] = 0
X_test.head()
###Output
_____no_output_____
###Markdown
Consider the Models: Before creating, fitting and scoring the Logistic Regression and the Random Forests Classifier models, I predict that the Logistic Regression model will perform better. Logistic Regression is more commonly used to predict categorical circumstances, for example, yes or no, true or false. It helps to determine the probabilities between any two classes. In this case, we will predict if a loan from LendingClub will become a high risk or not (true or false). As a case in point, regression is normally used for market forecasting. Now, let's test the models and see if my prediction is true or false. Fit a Logistic Regression Model and Random Forest Classifier Model
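Before fitting both models below, a brief sketch of the 'probabilities between any two classes' point. This assumes the X_train, y_train, and X_test frames prepared above; the snippet fits its own throwaway model just to show the API:

```python
from sklearn.linear_model import LogisticRegression

proba_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict_proba returns one probability per class for each row;
# predict simply picks whichever class has the higher probability.
print(proba_model.classes_)                  # e.g. ['high_risk' 'low_risk']
print(proba_model.predict_proba(X_test)[:5])
print(proba_model.predict(X_test)[:5])
```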
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f"The Logistic Regression Model Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f"The Random Forest Classifier Model Score: {clf.score(X_test, y_test)}")
###Output
The Random Forest Classifier Model Score: 0.6180348787749894
###Markdown
Unscaled Data Analysis: Following the training of the unscaled data, the Random Forest Classifier model performed better by 10%. My prediction was false. The Random Forest Classifier model displays greater accuracy of approximately 62% compared to the Logistic Regression model of approximately 52% in predicting if a loan from LendingClub will become a high risk or not. Revisit the Preprocessing: Scale the Data Prediction on Scaled Data: While scaling is an essential step in machine learning pre-processing, I don't foresee that the accuracy percentage will change much. My prediction is that the Random Forest Classifier model will continue to be the best model to predict if a loan from LendingClub will become a high risk or not. Let's see if my prediction is true or false.
###Code
# Scale the data
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # reuse the scaler fitted on the training data; do not refit on the test set
# Train the Logistic Regression model on the scaled data and print the model score
classifier2 = LogisticRegression()
classifier2.fit(X_train_scaled, y_train)
print(f"The Logistic Regression Model Score: {classifier2.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf2 = RandomForestClassifier()
clf2.fit(X_train_scaled, y_train)
print(f"The Logistic Regression Model Score: {clf2.score(X_test_scaled, y_test)}")
###Output
The Logistic Regression Model Score: 0.5716716290940026
###Markdown
Work with Testing Data
###Code
X_test= test_df.drop("loan_status", axis=1)
X_test
X_test = pd.get_dummies(X_test)
print(X_test.columns)
X_test
X_test.shape
X_test
y_test = LabelEncoder().fit_transform(test_df["loan_status"])
y_test
# add missing dummy variables to testing set
X_test.insert(93,"debt_settlement_flag_Y",0)
X_test
###Output
_____no_output_____
###Markdown
I'm guessing that Logistic regression will do less well here because there is so much potentially noisy information to accommodate, whereas random forest tends to perform better when there is more noise in a dataset.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6180348787749894
###Markdown
On the non-scaled data, Random Forest performed better, with a testing data score of 0.62 compared with 0.53.
###Code
features = clf.feature_importances_
print(features)
plt.bar(x = range(len(features)), height=features)
plt.show()
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6193109315185028
###Markdown
My prediction is that the Random Forest Classifier model will perform better than the Logistic Regression model. I think that because of the number of categories the data will not be linear, and linear data is what works well with Logistic Regression. With numerous categories, the Random Forest Classifier can create multiple trees to process the data.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_df, y_label_train)
print('Uncaled Logistic Regression \n')
print(f"Training Data Score: {classifier.score(X_train_df, y_label_train)}")
print(f"Testing Data Score: {classifier.score(new_X_test, y_label_test)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=10, n_estimators=500).fit(X_train_df, y_label_train)
print('Unscaled Random Forest Classifier \n')
print(f'Training Score: {clf.score(X_train_df, y_label_train)}')
print(f'Testing Score: {clf.score(new_X_test, y_label_test)}')
# Scale the data
scaler = StandardScaler().fit(X_train_df)
X_train_scaled = scaler.transform(X_train_df)
X_test_scaled = scaler.transform(new_X_test)
###Output
_____no_output_____
###Markdown
For the same reasons mentioned before, I think the Random Forest Classifier model will perform better than the Logistic Regression model.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
scaled_classifier = LogisticRegression()
scaled_classifier.fit(X_train_scaled, y_label_train)
print('Scaled Logistic Regression \n')
print(f"Training Data Score: {scaled_classifier.score(X_train_scaled, y_label_train)}")
print(f"Testing Data Score: {scaled_classifier.score(X_test_scaled, y_label_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
scaled_clf = RandomForestClassifier(random_state=10, n_estimators=500).fit(X_train_scaled, y_label_train)
print('Scaled Random Forest Classifier \n')
print(f'Training Score: {scaled_clf.score(X_train_scaled, y_label_train)}')
print(f'Testing Score: {scaled_clf.score(X_test_scaled, y_label_test)}')
###Output
Scaled Random Forest Classifier
Training Score: 1.0
Testing Score: 0.6448319863887707
###Markdown
Unscaled Data Logistic Regression Model
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
unscaled_lr = LogisticRegression().fit(X_train, y_train)
print(f'Train data: {unscaled_lr.score(X_train, y_train)}')
print(f'Test data: {unscaled_lr.score(X_test, y_test)}')
###Output
Train data: 0.6530377668308702
Test data: 0.5091450446618461
###Markdown
Random Forest Classifier Model
###Code
# Train a Random Forest Classifier model and print the model score
unscaled_rfc=RandomForestClassifier(random_state = 1, n_estimators=50).fit(X_train,y_train)
print(f'Train data: {unscaled_rfc.score(X_train,y_train)}')
print(f'Test data: {unscaled_rfc.score(X_test,y_test)}')
###Output
Train data: 1.0
Test data: 0.6205869842620162
###Markdown
Which model performed better? How does that compare to your prediction? Write down your results and thoughts. The Random Forest Classifier model performed better. Scaled Data
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Logistic Regression Model
###Code
# Train the Logistic Regression model on the scaled data and print the model score
scaled_lr=LogisticRegression().fit(X_train_scaled, y_train)
print(f'Train data: {scaled_lr.score(X_train_scaled, y_train)}')
print(f'Test data: {scaled_lr.score(X_test_scaled, y_test)}')
###Output
Train data: 0.710919540229885
Test data: 0.7598894087622289
###Markdown
Random Forest Classifier Model
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
scaled_rfc=RandomForestClassifier(random_state = 1, n_estimators=50).fit(X_train_scaled,y_train)
print(f'Train data: {scaled_rfc.score(X_train_scaled,y_train)}')
print(f'Test data: {scaled_rfc.score(X_test_scaled,y_test)}')
###Output
Train data: 1.0
Test data: 0.623139089749043
###Markdown
Hypothesis: I believe that random forest will have a better score since the data frame has a lot of categorical data and a lot of columns in general.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_dummies_train, y_label_1)
print(f"Training Data Score: {classifier.score(X_dummies_train, y_label_1)}")
print(f"Testing Data Score: {classifier.score(X_dummies_test, y_label_2)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_dummies_train, y_label_1)
print(f'Training Score: {clf.score(X_dummies_train, y_label_1)}')
print(f'Testing Score: {clf.score(X_dummies_test, y_label_2)}')
###Output
Training Score: 1.0
Testing Score: 0.646958740961293
###Markdown
Hypothesis 2I think that scaling will improve my scores and that the testing and training results will be less spread out.
###Code
# Scale the data
scaler = StandardScaler().fit(X_dummies_train)
X_train_scaled = scaler.transform(X_dummies_train)
X_test_scaled = scaler.transform(X_dummies_test)
X_test_scaled
# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train_scaled, y_label_1)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_label_1)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_label_2)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_label_1)
print(f'Training Score: {clf.score(X_train_scaled, y_label_1)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_label_2)}')
###Output
Training Score: 1.0
Testing Score: 0.6480221182475542
###Markdown
Initial Thought After reading and learning more about the differences between Random Forests and Logistic Regression, I believe Random Forests will perform better on categorical data, surfacing the more important features while increasing the overall accuracy of the result.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
LogisticModel = LogisticRegression().fit(x_train_numeric, y_train)
print(f'training score: {LogisticModel.score(x_train_numeric, y_train)}')
print(f'testing score: {LogisticModel.score(x_test_numeric, y_test)}')
# Train a Random Forest Classifier model and print the model score
ForestModel = RandomForestClassifier().fit(x_train_numeric, y_train)
print(f'training score: {ForestModel.score(x_train_numeric, y_train)}')
print(f'testing score: {ForestModel.score(x_test_numeric, y_test)}')
###Output
training score: 1.0
testing score: 0.6341982135261591
###Markdown
Result before ScalingIt appears that the Random Forest Classifier model performed better than Logistic Regression; however, we can clearly see that our training score for the Forest Model is 1.0, a sign of overfitting! After Scaling ThoughtsOnce the data is scaled, I believe both models will perform better than the previous results
###Code
# Scale the data
scaler = StandardScaler().fit(x_train_numeric)
x_train_scaled = scaler.transform(x_train_numeric)
x_test_scaled = scaler.transform(x_test_numeric)
# Train the Logistic Regression model on the scaled data and print the model score
LogisticModel = LogisticRegression().fit(x_train_scaled, y_train)
print(f'training score: {LogisticModel.score(x_train_scaled, y_train)}')
print(f'testing score: {LogisticModel.score(x_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
ForestModel = RandomForestClassifier().fit(x_train_scaled, y_train)
print(f'training score: {ForestModel.score(x_train_scaled, y_train)}')
print(f'testing score: {ForestModel.score(x_test_scaled, y_test)}')
###Output
training score: 1.0
testing score: 0.5961293066780093
###Markdown
After looking at the data, I predict that Random Forest will give us the best model because I do not think this data is very linear, which will make it hard for Logistic Regression to fit it well.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f"Model Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
print(f'Model Score: {clf.score(X_test, y_test)}')
###Output
Model Score: 0.6405784772437261
###Markdown
Results: LR: 0.5253083794130158, RFC: 0.6405784772437261. Looks like I was correct based on my own predictions. This is probably due to what I thought in my predictions. I don't think scaling the data will change much; maybe LR will go up slightly.
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(X_train_scaled, y_train)
print(f"Model Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf.fit(X_train_scaled, y_train)
print(f'Model Score: {clf.score(X_test_scaled, y_test)}')
###Output
Model Score: 0.6418545299872395
###Markdown
Prediction on Model Performance (Before Scaling) Personally, I see this as a classification problem because we're applying labels to data to be able to make predictions on a set of categories regarding loans (high risk/low risk). Thus, I see the Logistic Regression model (a classification model) producing a higher accuracy score.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
model = LogisticRegression(max_iter = 15000)
model.fit(X_train, y_train)
print(f"Training Data Score: {model.score(X_train, y_train)}")
print(f"Testing Data Score: {model.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
model = RandomForestClassifier(random_state = 1, n_estimators = 50).fit(X_train, y_train)
model.fit(X_train, y_train)
print(f"Training Data Score: {model.score(X_train, y_train)}")
print(f"Testing Data Score: {model.score(X_test, y_test)}")
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Prediction on Model Performance (After Scaling)Using StandardScaler(), I predict our Logistic Regression model will produce a higher accuracy. However, since our Random Forest Classifier model was not the model best suited for predicting high/low risk loans in the first place, I don't see its accuracy score improving very much.
###Code
# Train the Logistic Regression model on the scaled data and print the model score
model = LogisticRegression(max_iter=15000).fit(X_train_scaled, y_train)
print(f'Training Score: {model.score(X_train_scaled, y_train)}')
print(f'Testing Score: {model.score(X_test_scaled, y_test)}')
# Train a Random Forest Classifier model on the scaled data and print the model score
model = RandomForestClassifier(random_state = 1, n_estimators = 50).fit(X_train_scaled, y_train)
print(f'Training Score: {model.score(X_train_scaled, y_train)}')
print(f'Testing Score: {model.score(X_test_scaled, y_test)}')
###Output
Training Score: 0.9999178981937603
Testing Score: 0.6420672054444917
###Markdown
Author: Meet K Sahni In this notebook: 1) 2019loans and 2020Q1Loans are read into dataframes. 2) After analyzing the columns and shape of the dataframes, the training & testing datasets are broken into X (features) and y (label) values. 3) The categorical data in the training & test features (X) is converted to numeric using pd.get_dummies. 4) The y values (for both training & test datasets) are converted to numeric using LabelEncoder (since we do not want two different columns for labels). 5) Since the training & test datasets are not equal, the missing column(s) are found and inserted in the test dataset. 6) A Logistic Regression model is trained on the unscaled data and the model score is printed. 7) A Random Forest Classifier model is trained on the unscaled data and the model score is printed. 8) The scores of both models are compared on the unscaled data. 9) The training & testing data is then scaled using the StandardScaler() function. 10) Both models (Logistic Regression & Random Forest Classifier) are re-applied to the training & testing datasets after they are scaled. 11) The scores are calculated and the analysis is concluded.
###Code
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler, LabelEncoder
from sklearn.ensemble import RandomForestClassifier
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
set(train_df.columns)
test_df.head()
print(train_df.shape)
print(test_df.shape)
train_df.columns
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for training data
###Code
y_candidate1 = train_df["loan_status"] # define label
X_candidate1 = train_df.drop(columns=["loan_status"]) # drop label from features
#cols=[i for i in train_df.columns if i not in ["loan_status"]]
#for col in cols:
# One-hot encoding the X dataframe
X_train = pd.get_dummies(X_candidate1)
X_train.shape
#add LabelEncoder to y label in the training data
y_train = LabelEncoder().fit_transform(y_candidate1)
y_train
###Output
_____no_output_____
###Markdown
Convert categorical data to numeric and separate target feature for testing data
###Code
test_df.shape
y_candidate2 = test_df["loan_status"] # define label
X_candidate2 = test_df.drop(columns=["loan_status"]) # drop label from features
# One-hot encoding the X dataframe (test)
X_test = pd.get_dummies(X_candidate2)
X_test.shape
#apply LabelEncoder to the y label in the testing data
y_test = LabelEncoder().fit_transform(y_candidate2)
y_test
# add missing dummy variables to testing set
missing_cols = set(X_train.columns) - set(X_test.columns)
missing_cols
cols = X_train.columns
for col in cols:
if col in X_test.columns:
print ("Column found")
else:
print(col)
#Add missing column in test dataset
X_test['debt_settlement_flag_Y'] = 0
#confirm if the column got added in X_test dataset
X_test.shape
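# (Illustrative alternative, not part of the original workflow) the same column
# alignment can be done in one step with pandas' reindex, which adds any dummy
# columns missing from the test set and fills them with 0:
# X_test = X_test.reindex(columns=X_train.columns, fill_value=0)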
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(max_iter=2000,solver = 'lbfgs')
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
###Output
Training Data Score: 0.6982758620689655
Testing Data Score: 0.5723096554657593
###Markdown
As seen above, the training and testing data scores for the Logistic Regression model on the unscaled data are low.
###Code
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500)
clf
clf.fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6180348787749894
###Markdown
On the other hand, the training data score for the Random Forest Classifier model on the unscaled data is 100%. The testing data score is not good, but it is a little better than the Logistic Regression model's.
###Code
# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
###Output
Training Data Score: 0.7128899835796387
Testing Data Score: 0.7201190982560612
###Markdown
After scaling the data, the Logistic Regression model scores improved considerably.
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
clf.fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Training Score: 1.0
Testing Score: 0.6193109315185028
###Markdown
Prediction: I think the random forest model will perform more accurately based on the number of variables in the data set. Random forest models work better with a higher number of variables.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
###Output
<ipython-input-12-4ae1110f710d>:4: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
###Markdown
Results: The Random Forest Model performed better than the Logistic Regression, but only slightly, and neither model had a very high testing score for predicting loan risk. I predict that scaling the data will help bring the results up.
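The `DataConversionWarning` in the output above is raised because `y_train` was passed as a single-column DataFrame rather than a 1-D array. A minimal sketch of the fix (assuming `y_train` really is a one-column DataFrame; the names mirror the variables used below):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# flatten the (n_samples, 1) column vector into the 1-D shape scikit-learn expects
y_train_1d = np.ravel(y_train)
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train_1d)
```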
###Code
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier_2 = LogisticRegression()
classifier_2.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf_2 = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf_2.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf_2.score(X_test_scaled, y_test)}')
###Output
<ipython-input-16-32f04808c0d3>:2: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
clf_2 = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
###Markdown
Prediction before running the models: I would assume that the Random Forest Classifier model best suits this data set, as there are many variables where an ensemble of smaller decision trees should better suit the type of predictions we need to make.
###Code
# Complete the Logistic Regression Classifer Model
classifier = LogisticRegression()
classifier.fit(traindata_dummies, ytrain)
print(f"Testing Data Score: {classifier.score(testdummies, ytest)}")
# Complete the Random Forest Classifer Model
clf = RandomForestClassifier(random_state = 1).fit(traindata_dummies, ytrain)
print(f'Testing Score: {clf.score(testdummies, ytest)}')
###Output
Testing Score: 0.6544023819651212
###Markdown
The model that performed better was the Random Forest Classifier model, but both models still weren't very accurate. The Random Forest Classifier model achieved 0.65 as a testing score, and logistic regression achieved 0.52. Scaling the data should help the logistic regression model, since its optimization is sensitive to the scale of the features. Now, for scaling the data:
###Code
# Now to scale the data
scaler = StandardScaler().fit(traindata_dummies)
xtrain = scaler.transform(traindata_dummies)
xtest = scaler.transform(testdummies)
xtrain[0]
classifier_scaled = LogisticRegression()
classifier_scaled.fit(xtrain, ytrain)
print(f"Testing Data Score: {classifier_scaled.score(xtest, ytest)}")
clf_scaled = RandomForestClassifier(random_state = 1).fit(xtrain, ytrain)
print(f'Testing Score: {clf_scaled.score(xtest, ytest)}')
###Output
Testing Score: 0.6548277328796257
###Markdown
Prediction for unscaled data As our data is categorical, random forest should be our first choice. The output of the random forest is the class selected by most trees; for regression tasks, the mean prediction of the individual trees is returned. Random forests generally outperform single decision trees. Random Forest is a supervised learning algorithm that uses an ensemble learning method: ensemble learning combines predictions from multiple machine learning models to make a more accurate prediction than a single model. I predict the Random Forest model will yield a better result than Logistic Regression. The major limitation of Logistic Regression is the assumption of linearity between the dependent variable and the independent variables, although it not only provides a measure of how relevant a predictor is (coefficient size) but also its direction of association (positive or negative). Logistic Regression model on the unscaled data
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train_dummies, y_train_label)
print(f"Training Data Score: {classifier.score(X_train_dummies, y_train_label)}")
print(f"Testing Data Score: {classifier.score(X_test_dummies, y_test_label)}")
predictions = classifier.predict(X_test_dummies)
pd.DataFrame({"Prediction": predictions, "Actual": y_test_label})
from sklearn.metrics import confusion_matrix, classification_report
y_true = y_test_label
y_pred = classifier.predict(X_test_dummies)
confusion_matrix(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + fp + tn + fn)  # accuracy = (TP + TN) / total predictions
print(f"Accuracy: {accuracy}")
print(classification_report(y_true, y_pred))
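# (Illustrative addition) scikit-learn can compute the same metric directly,
# which is a handy cross-check on the manual confusion-matrix arithmetic above
from sklearn.metrics import accuracy_score
print(f"Accuracy (accuracy_score): {accuracy_score(y_true, y_pred)}")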
###Output
precision recall f1-score support
0 0.55 0.21 0.31 2351
1 0.51 0.83 0.63 2351
accuracy 0.52 4702
macro avg 0.53 0.52 0.47 4702
weighted avg 0.53 0.52 0.47 4702
###Markdown
Random Forest Classifier model on the unscaled data
###Code
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_dummies, y_train_label)
print(f'Training Score: {clf.score(X_train_dummies, y_train_label)}')
print(f'Testing Score: {clf.score(X_test_dummies, y_test_label)}')
from sklearn.metrics import confusion_matrix, classification_report
y_true = y_test_label
y_pred = clf.predict(X_test_dummies)
confusion_matrix(y_true, y_pred)
pd.DataFrame({"Prediction": y_pred, "Actual": y_test_label})
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + fp + tn + fn)  # accuracy = (TP + TN) / total predictions
print(f"Accuracy: {accuracy}")
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
0 0.78 0.45 0.57 2351
1 0.61 0.87 0.72 2351
accuracy 0.66 4702
macro avg 0.70 0.66 0.65 4702
weighted avg 0.70 0.66 0.65 4702
###Markdown
With 66% accuracy, the Random Forest model gave a better result than logistic regression, which had 52% accuracy on the unscaled data. Prediction for scaled data. Feature scaling is required for correct predictions and results: regression coefficients are directly influenced by the scale of the features, and a feature with a high magnitude will weigh a lot more than features with low magnitudes, even if the latter are more crucial in determining the output. Logistic Regression algorithms are very sensitive to feature scaling, while tree-based algorithms like Random Forest are insensitive to it. Having said that, I predict Logistic Regression will give a better accuracy on the scaled data. Logistic Regression model on the scaled data
###Code
# Scale the data
# Train the Logistic Regression model on the scaled data and print the model score
scaler = StandardScaler().fit(X_train_dummies)
X_train_scaled = scaler.transform(X_train_dummies)
X_test_scaled = scaler.transform(X_test_dummies)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train_label)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test_label)}")
from sklearn.metrics import confusion_matrix, classification_report
y_true = y_test_label
y_pred = classifier.predict(X_test_scaled)
confusion_matrix(y_true, y_pred)
pd.DataFrame({"Prediction": y_pred, "Actual": y_test_label})
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + fp + tn + fn)  # accuracy = (TP + TN) / total predictions
print(f"Accuracy: {accuracy}")
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
0 0.51 0.22 0.31 2351
1 0.50 0.79 0.62 2351
accuracy 0.51 4702
macro avg 0.51 0.51 0.46 4702
weighted avg 0.51 0.51 0.46 4702
###Markdown
Random Forest Classifier model on the scaled data
###Code
# Train a Random Forest Classifier model on the scaled data and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train_label)
print(f'Training Score: {clf.score(X_train_scaled, y_train_label)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test_label)}')
from sklearn.metrics import confusion_matrix, classification_report
y_true = y_test_label
y_pred = clf.predict(X_test_scaled)
confusion_matrix(y_true, y_pred)
pd.DataFrame({"Prediction": y_pred, "Actual": y_test_label})
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + fp + tn + fn)  # accuracy = (TP + TN) / total predictions
print(f"Accuracy: {accuracy}")
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
0 0.78 0.45 0.57 2351
1 0.62 0.87 0.72 2351
accuracy 0.66 4702
macro avg 0.70 0.66 0.65 4702
weighted avg 0.70 0.66 0.65 4702
###Markdown
Per instructions, the entire year of 2019 is used to predict the credit risk of loans from the 1st quarter of 2020.
###Code
train_df = pd.read_csv(Path('Resources/2019loans.csv'), index_col=False)
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'), index_col=False)
pd.set_option("display.max_columns", None)
train_df.head()
# train_df.dtypes.to_dict()
test_df.head()
###Output
_____no_output_____
###Markdown
Preprocessing: Data Clean up
###Code
# train_df.isnull().sum
# drop label "loan status" to create X-Axis,
# also, clean up the data by removing columns without data or with irrelevant data (index) to avoid skewing the model.
X_train = train_df.drop(['Unnamed: 0', 'index', 'delinq_amnt', 'num_tl_120dpd_2m', 'num_tl_30dpd', 'tax_liens', 'recoveries', 'collection_recovery_fee', 'policy_code', 'loan_status'], axis=1)
X_test = test_df.drop(['Unnamed: 0', 'index', 'delinq_amnt', 'num_tl_120dpd_2m', 'num_tl_30dpd', 'tax_liens', 'recoveries', 'collection_recovery_fee', 'policy_code', 'loan_status'], axis=1)
X_train.shape
###Output
_____no_output_____
###Markdown
Check for rows with na
###Code
X_train = X_train.dropna()
X_train.shape
X_test.shape
# drop blank rows
X_test = X_test.dropna()
X_test.shape
# X_test.dtypes.to_dict()
###Output
_____no_output_____
###Markdown
Create y_train and y_test
###Code
y_train = train_df.loan_status.values
y_test = test_df.loan_status.values
###Output
_____no_output_____
###Markdown
Check for empty data columns
###Code
X_train.describe()
X_test.describe()
###Output
_____no_output_____
###Markdown
Check for duplicated rows in X_train
###Code
X_train.duplicated().sum()
###Output
_____no_output_____
###Markdown
Check for null values for X_train and X_test (data files do not contain null values).
###Code
# pd.set_option("display.max_rows", None)
# X_test.isnull().sum()
###Output
_____no_output_____
###Markdown
Preprocessing: To convert categorical data to numeric and separate target feature for training and testing data
###Code
# X_train_dum = pd.get_dummies(X_train, drop_first=True)
# X_test_dum = pd.get_dummies(X_test, drop_first=True)
# print(X_train.columns)
# print(X_train.shape)
# print(X_train_dum.shape)
# Note: the payment plan column is "n" for all rows in train and test, so the column has dropped out
# Note: Debt settlement flag for the test file has dropped and needs to be re-added to the test_dummy file for alignment with the train df.
X_train = pd.get_dummies(X_train, columns=['home_ownership', 'verification_status', 'pymnt_plan', 'initial_list_status', 'application_type', 'hardship_flag', 'debt_settlement_flag'],drop_first=True, dtype=float)
X_test = pd.get_dummies(X_test, columns=['home_ownership', 'verification_status', 'pymnt_plan', 'initial_list_status', 'application_type', 'hardship_flag', 'debt_settlement_flag'],drop_first=True, dtype=float)
# X_train.info()
###Output
_____no_output_____
###Markdown
Missing dummy variables added back to testing set
###Code
# Missing column "debt settlement flag" on x_test file
X_train.dtypes.to_dict()
X_train.shape
# Missing column "debt settlement flag" on x_test file
X_test['debt_settlement_flag_Y'] = test_df.debt_settlement_flag.map({'N':0}).astype(float)
X_test.shape
# X_test.info()
# Categorical Data to Numeric Y label train and test
y_train = LabelEncoder().fit_transform(y_train).astype(float)
y_test = LabelEncoder().fit_transform(y_test).astype(float)
# model.coef_
# X_train.columns
X_train.shape
y_train
# variables used for unscaled Logistic Regression and unscaled Random Forest
X_train_log_reg = pd.DataFrame(X_train)
X_test_log_reg = pd.DataFrame(X_test)
X_train_ran_forest = pd.DataFrame(X_train)
X_test_ran_forest = pd.DataFrame(X_test)
###Output
_____no_output_____
###Markdown
Prediction: Which model will perform better, Logistic Regression or Random Forest? My prediction is Random Forest - with the high number of variables in the loan files, Random Forest should perform better than Logistic Regression. As the number of explanatory variables increases, as in this case with such a large number of columns, the Random Forest model should have the higher predictive score. Hyper param calculations for Logistic Regression and Random Forest
###Code
model_params = {
'random_forest': {
'model': RandomForestClassifier(),
'params': {
'n_estimators': [30, 40, 50, 60, 70]
}
},
'logistic_regression': {
'model': LogisticRegression(solver='liblinear',multi_class='auto'),
'params': {
'C': [0.5, 1, 1.5]
}
}
}
# Note: this "for loop" takes ~2 minutes to process
from sklearn.model_selection import GridSearchCV
scores = []
for model_name, mp in model_params.items():
clf = GridSearchCV(mp['model'], mp['params'], cv=5, return_train_score=False)
clf.fit(X_train, y_train)
scores.append({
'model': model_name,
'best_score': clf.best_score_,
'best_params': clf.best_params_
})
scores_df = pd.DataFrame(scores,columns=['model', 'best_score', 'best_params'])
scores_df
###Output
_____no_output_____
###Markdown
Train the Logistic Regression model on the unscaled data and print the model score
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train_log_reg, y_train)
# model.get_params()
print(f"Training Data Score: {model.score(X_train_log_reg, y_train)}")
print(f"Testing Score: {model.score(X_test_log_reg, y_test)}")
print(classification_report(y_test,model.predict(X_test_log_reg), target_names=['high_risk', 'low_risk']))
# confusion matrix
y_true = y_test
y_pred = model.predict(X_test_log_reg)
confusion_matrix(y_true, y_pred, labels=[1,0])
accuracy_score(y_true,y_pred)
# model.predict_proba(X_test[0:10])
###Output
_____no_output_____
###Markdown
Train a Random Forest Classifier (unscaled) model and print the model score
###Code
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(random_state=1, n_estimators=70).fit(X_train_ran_forest, y_train)
print(classification_report(y_test,classifier.predict(X_test_ran_forest), target_names=['high_risk', 'low_risk']))
print(f"Training Data Score: {classifier.score(X_train_ran_forest, y_train)}")
print(f"Testing Score: {classifier.score(X_test_ran_forest, y_test)}")
# confusion matrix
y_true = y_test
y_pred = classifier.predict(X_test)
confusion_matrix(y_true, y_pred, labels=[1,0])
accuracy_score(y_true,y_pred)
###Output
_____no_output_____
###Markdown
Scale X_train and X_test data using Standard Scaler
###Code
# Note: StandardScaler does not work well with outliers
X_scaler = StandardScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Prediction: how does scaling impact the Logistic Regression and Random Forest models? Scaling will probably not be necessary for the Random Forest model, as it will not have any impact on model results or scores since Random Forest is a tree-based model. For Logistic Regression, we need to perform scaling since Logistic Regression is a distance-based algorithm and is sensitive to the ranges used for each data point. Conversely, Decision Trees and Ensemble methods are not sensitive to the ranges (variance) used in the data. Train the Logistic Regression model on scaled data and print the model score
###Code
from sklearn.linear_model import LogisticRegression
model_scaled = LogisticRegression()
model_scaled.fit(X_train_scaled, y_train)
print(f"Training Data Score: {model_scaled.score(X_train_scaled, y_train)}")
print(f"Testing Score: {model_scaled.score(X_test_scaled, y_test)}")
print(classification_report(y_test,model_scaled.predict(X_test_scaled), target_names=['high_risk', 'low_risk']))
# confusion matrix
y_true = y_test
y_pred = model_scaled.predict(X_test_scaled)
confusion_matrix(y_true, y_pred, labels=[1,0])
accuracy_score(y_true,y_pred)
###Output
_____no_output_____
###Markdown
Train a Random Forest Classifier model on the scaled data and print the model score
###Code
from sklearn.ensemble import RandomForestClassifier
classifier_scaled = RandomForestClassifier(random_state=1, n_estimators=70).fit(X_train_scaled, y_train)
print(classification_report(y_test,classifier_scaled.predict(X_test_scaled), target_names=['high_risk', 'low_risk']))
print(f"Training Data Score: {classifier_scaled.score(X_train_scaled, y_train)}")
print(f"Testing Score: {classifier_scaled.score(X_test_scaled, y_test)}")
# confusion matrix
y_true = y_test
y_pred = classifier_scaled.predict(X_test_scaled)
confusion_matrix(y_true, y_pred, labels=[1,0])
accuracy_score(y_true,y_pred)
###Output
_____no_output_____
###Markdown
**Considering the models** I predict that a logistic regression model will perform better than the random forest model. Based on my understanding, Random Forest is often recommended for simpler classification problems, and this dataset has 80 variables to consider.
###Code
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver='lbfgs',max_iter=200)
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # transform only; the scaler is already fitted on the training data
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf.fit(X_train_scaled, y_train)
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
###Output
Testing Score: 0.5
|
analysis/pchic scores and distance.ipynb | ###Markdown
Decide on the thresholds for pCHIC/Javierre 2016
###Code
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
pchic = pd.read_table('full.tsv')
pchic.head()
genes = pd.read_table('gene_starts.tsv',names=['ensembl_id','tss','chr'])
genes.head()
df = pd.merge(pchic, genes, on='ensembl_id')
print(df.shape)
df = df[df['chrom'] == df['chr']]
print(df.shape)
df.head()
df['delta'] = abs(df.tss - (df.start + df.end)/2)
df.head()
df.plot.scatter("delta","score",loglog=True)
df['delta'].plot(kind='hist',loglog=True,bins=1000)
df['delta'].plot(kind='box',logx=True, vert=False)
df['delta'].describe()
df['delta'].quantile([0.68,0.75,0.9,0.95])
print(df.loc[df['delta'] < 2.45e6].shape)
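# (Illustrative addition) fraction of interactions kept under a candidate distance threshold;
# 2.45e6 is simply the ~95th-percentile cutoff explored above
threshold = 2.45e6
print(f"kept {(df['delta'] < threshold).mean():.2%} of rows at threshold {threshold:.2e}")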
###Output
(1865719, 9)
|
workflow/notebooks/do_things/Spacy_patterns.ipynb | ###Markdown
Testing Spacy Patterns

The purpose of this notebook is to test and build patterns used for matching verb constructions in English with informative tags. We will use Spacy's Matcher class for this, alongside the parser: https://spacy.io/usage/rule-based-matching

To-do list of primary English tense constructions, curated from: https://en.wikipedia.org/wiki/English_verbs#Expressing_tenses,_aspects_and_moods

```
simple present          writes
simple past             wrote
present progressive     is writing
past progressive        was writing
present perfect         has written
past perfect            had written
present perf. progress. has been writing
past perf. progress.    had been writing
future                  will write
future perfect          will have written
future perf. progress.  will have been writing
```

secondary constructions:

```
imperative              write
future-in-past          would write
do-support              does write
be-going-to future      is going to write
```

Many of these can be found by parsing the sentence and applying Spacy's Matcher with some rules. It would be a good idea if the various constructional combinations could be identified modularly, so that the 'perfect' in a past perfect progressive is matched in the same way as a simple past perfect. We can consider dividing these constructions up into 3 columns -- 1 each for tense, aspect, and modality. If a construction contributes to one of these categories, the column gets filled. Otherwise it is left empty.

```
"has been writing"

tense     aspect                 modality
-----     ------                 ------
past      perfect progressive
```
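As a rough, purely illustrative sketch of that three-column idea (the label strings assumed here mirror the matcher tags defined below):

```python
# illustrative only: split a combined construction label into tense/aspect/modality columns
TENSES = {"past", "present", "future"}
ASPECTS = {"perfect", "progressive"}
MODALITIES = {"imperative", "future-in-past", "do-support", "be-going-to"}

def split_tam(label):
    """Split a label like 'past perfect progressive' into the three columns."""
    parts = label.split()
    tense = next((p for p in parts if p in TENSES), "")
    aspect = " ".join(p for p in parts if p in ASPECTS)
    modality = " ".join(p for p in parts if p in MODALITIES)
    return {"tense": tense, "aspect": aspect, "modality": modality}

print(split_tam("past perfect progressive"))
# -> {'tense': 'past', 'aspect': 'perfect progressive', 'modality': ''}
```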
###Code
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Token, Span
from spacy.util import filter_spans # nice tip: https://stackoverflow.com/a/63303480/8351428
import collections
spacy.explain('VBP')
test_sentences = '''\
He writes. He wrote. She is writing. She was writing.
He has written. He had written. She has been writing.
She had been writing. He will write. He will have written.
She will have been writing. Write. She would write. He does write.
He did write.
He is going to write.
Let it be written.
Let there be writing.
'''
nlp = spacy.load('en_core_web_sm')
# a set of rules to match tense-aspect-modality construtions in English
# NB order of patterns matters
tam_rules = [
(
'imperative', # NB: this must come first so it can be over-written by longer patterns
[
{'TAG': 'VB', 'DEP':'ROOT'},
]
),
(
'future',
[
{'TAG': 'MD', 'LEMMA': 'will'},
{'TAG': 'VB', 'DEP': {'IN': ['ROOT']}},
]
),
(
'present',
[
{'TAG':{'IN':['VBZ', 'VBP']}, 'DEP': {'NOT_IN': ['aux']}},
]
),
(
'past',
[
{'TAG': 'VBD', 'DEP': 'ROOT'},
]
),
(
'present perfect progressive',
[
{'TAG': {'IN': ['VBZ', 'VBP']}, 'LEMMA': 'have'},
{'TAG': 'VBN', 'LEMMA': 'be'},
{'TAG': 'VBG'},
]
),
(
'past perfect progressive',
[
{'TAG': {'IN': ['VBD']}, 'LEMMA': 'have'},
{'TAG': 'VBN', 'LEMMA': 'be'},
{'TAG': 'VBG'},
]
),
(
'present perfect',
[
{'TAG': {'IN': ['VBZ', 'VBP']}, 'LEMMA': 'have'},
{'TAG': 'VBN', 'DEP': {'NOT_IN': ['aux']}},
]
),
(
'past perfect',
[
{'TAG': {'IN': ['VBD']}, 'LEMMA': 'have'},
{'TAG': 'VBN', 'DEP': {'IN': ['ROOT']}},
]
),
(
'future perfect',
[
{'TAG': 'MD', 'LEMMA': 'will'},
{'TAG': {'IN': ['VB']}, 'LEMMA': 'have'},
{'TAG': 'VBN', 'DEP': {'IN': ['ROOT']}},
]
),
(
'future perfect progressive',
[
{'TAG': 'MD', 'LEMMA': 'will'},
{'TAG': {'IN': ['VB']}, 'LEMMA': 'have'},
{'TAG': 'VBN', 'LEMMA': 'be'},
{'TAG': 'VBG'},
]
),
(
'present progressive',
[
{'TAG': {'IN':['VBZ', 'VBP']}, 'LEMMA':'be'},
{'TAG':'VBG', 'LEMMA': {'NOT_IN':['go']}},
]
),
(
'past progressive',
[
{'TAG':'VBD', 'LEMMA':'be'},
{'TAG': 'VBG'},
]
),
(
'future-in-past', # habitual?
[
{'LOWER': 'would', 'DEP': {'IN': ['aux']}},
{'TAG':'VB'}
]
),
(
'do-support present',
[
{'TAG': {'IN': ['VBZ', 'VBP']}, 'LEMMA': 'do'},
{'TAG': 'VB'},
]
),
(
'past perfect (did)',
[
{'TAG': {'IN': ['VBD']}, 'LEMMA': 'do'},
{'TAG': 'VB'},
]
),
(
'be-going-to future',
[
{'TAG': {'IN':['VBZ', 'VBP']}, 'LEMMA':'be'},
{'TAG': 'VBG', 'LEMMA': 'go'},
{'TAG': 'TO'},
{'TAG': 'VB'},
]
),
(
'MODAL-there-be',
[
{'TAG': {'IN':['VB', 'MD']}, 'lower': {'IN':['let', 'may']}},
{'TAG': {'IN':['EX', 'PRP']}}, # EX = 'existential there'
{'TAG': 'VB', 'LOWER': 'be'},
{'TAG': 'VBN', 'OP': '?'}
]
),
# add another modal category:
# "Let him go up"
# i.e. "Let/may ... verb"
]
tam_matches = collections.defaultdict(set)
def on_match(matcher, doc, mid, matches):
for match in matches:
begin, end = match[1:]
tam_matches[(begin, end)].add(match)
getter = lambda token: token._.tam
Span.set_extension('tam', default='', force=True)
matcher = Matcher(nlp.vocab)
for tag, rules in tam_rules:
matcher.add(tag, on_match, rules)
spacy.explain('VBN')
parse = nlp(test_sentences)
matches = matcher(parse)
spans = []
# tag all spans with tam tag
for mid, start, end in matches:
span = parse[start:end]
span._.tam = nlp.vocab.strings[mid]
spans.append(span)
# filter out overlapping spans
filtered_spans = filter_spans(spans)
for span in filtered_spans:
print(span._.tam)
print('\t', span)
print()
###Output
present
writes
past
wrote
present progressive
is writing
past progressive
was writing
present perfect
has written
past perfect
had written
present perfect progressive
has been writing
past perfect progressive
had been writing
future
will write
future perfect
will have written
future perfect progressive
will have been writing
imperative
Write
future-in-past
would write
do-support present
does write
past perfect (did)
did write
be-going-to future
is going to write
MODAL-there-be
Let it be written
MODAL-there-be
Let there be
|
content/002_basics/lectures/Lecture1.ipynb | ###Markdown
Python Basics * Python Syntax* Variables and data types* Storing lots of data in memory: Lists and Tuples* Whitespace* Functions* Comments in code What does python syntax look like?
###Code
salary = 100000
tax_rate = 0.2
salary_after_tax = salary * (1-tax_rate)
print(salary_after_tax)
###Output
80000.0
###Markdown
What parts of Python did we use in that code?
###Code
print('Hello world')
###Output
Hello world
###Markdown
Variables and data types in python* Variables hold *data*. * For example, this might be a number (e.g. an integer) or a text string.* Your computer program uses the data in its operations. Let's create a really simple variable call `simple_sum` as the sum of two integers.
###Code
simple_sum = 1 + 1
print(simple_sum)
###Output
2
###Markdown
* You can conduct mathematical operations on variables| Operator | Name | Description ||--------------|----------------|--------------------------------------------------------|| ``a + b`` | Addition | Sum of ``a`` and ``b`` || ``a - b`` | Subtraction | Difference of ``a`` and ``b`` || ``a * b`` | Multiplication | Product of ``a`` and ``b`` || ``a / b`` | True division | Quotient of ``a`` and ``b`` || ``a // b`` | Floor division | Quotient of ``a`` and ``b``, removing fractional parts || ``a % b`` | Modulus | Integer remainder after division of ``a`` by ``b`` || ``a ** b`` | Exponentiation | ``a`` raised to the power of ``b`` || ``-a`` | Negation | The negative of ``a`` | Example: * the variable `z` is product of variable `x` raised to the power of `y`
###Code
x = 10
y = 2
z = x ** y
print(z)
###Output
100
###Markdown
Example: * the variable `foo` is the negation of variable `bar`
###Code
bar = 10
foo = -bar
print(foo)
###Output
-10
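A couple more of the operators from the table above, as a quick sketch:

```python
quotient = 7 // 2    # floor division -> 3
remainder = 7 % 2    # modulus -> 1
print(quotient, remainder)
```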
###Markdown
Variable Names* Variable **names** can only contain *letters*, *numbers*, and *underscores* ( _ ). * Underscores are used instead of spaces. * For example, use `student_name` instead of `student name`. * If you include a space then you will get a `SyntaxError`!
###Code
lecturer name = 'tom'
###Output
_____no_output_____
###Markdown
* Each variable has a **data type**. * Python is dynamically typed. * This means that Python does all of the work for you!
###Code
foo = 1000
bar = 'hello everyone'
print(type(foo))
print(type(bar))
foo = True
bar = False
spam = 3.142
eggs = 10000000
print(type(foo))
print(type(bar))
print(type(spam))
print(type(eggs))
###Output
<class 'bool'>
<class 'bool'>
<class 'float'>
<class 'int'>
###Markdown
Introduction to Lists and Tuples * A Python `List` is a simple and flexible way to store variables and any type of data
###Code
foo = [1, 2, 3, 4, 5]
print(foo)
###Output
[1, 2, 3, 4, 5]
###Markdown
* The elements stored within a `List` have a numbered index* Indexes start from **zero** (don't forget)
###Code
foo = [1, 2, 3, 4, 5]
print(foo[0])
print(foo[1])
foo[4] = 999
print(foo)
###Output
1
2
[1, 2, 3, 4, 999]
###Markdown
* A `List` is very flexible and can hold different types of variable
###Code
bar = ['spam', 5, 82.96, True]
bar[1] = 'eggs'
print(bar)
###Output
['spam', 'eggs', 82.96, True]
###Markdown
* A `List` has a length
###Code
length_of_list = len(bar)
print(length_of_list)
###Output
4
###Markdown
Inserting and removing items**New** list items:* can be **appended** to the end of a `List`* or **inserted** at a specified index**Existing** list items:* can be removed from a specified **index*** or by **value**
###Code
foo = []
foo.append('spam')
foo.append('eggs')
print(foo)
#foo.insert(1, 'bar')
#print(foo)
#del foo[2] # Remove a specific index
#print(foo)
#foo.remove('spam') #Remove a specific value
#print(foo)
###Output
['spam', 'eggs']
###Markdown
* A `Tuple` is similar to a `List` with one key difference* A `List` is mutable whereas a `Tuple` is **immutable**
###Code
foo = [1, 2, 3, 4, 5]
bar = (1, 2, 3, 4, 5)
print(foo)
print(bar)
foo[1] = 999 #list
bar[1] = 999 #tuple
###Output
_____no_output_____
###Markdown
Functions* We have already encountered a function: `print()` * `print()` is one of Python's built in functions* Python has lots of them!* If you are not sure how they work you can use the `help()` function!
###Code
help(print)
###Output
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
###Markdown
Importing functions from Python modules* Functions are stored within **modules*** Use the `import` statement to access the modules you need* Let's generate a random integer between 1 and 100.* We need a function from the `random` module
###Code
import random as rnd
u = rnd.randint(1, 100)
print(f'I generated a random integer {u}')
###Output
I generated a random integer 31
###Markdown
* We can also import specific functions from modules
###Code
from random import randint, gauss
u1 = randint(1, 100)
u2 = gauss(0, 1)
print(f'I sampled from a random int {u1} and a normally distributed value {u2}')
###Output
I sampled from a random int 80 and a normally distributed value 0.25118187063605696
###Markdown
Custom Functions* You can also code your own bespoke functions* A function is a reusable block of code that has a **single responsibility*** That means your function should do one thing only Motivation* You have been asked to convert a dataset of degrees celsius figures to fahrenheit
###Code
deg_celsius = 20
fahrenheit = 9.0/5.0 * deg_celsius + 32
print(fahrenheit)
deg_celsius = 10
fahrenheit = 9.0/5.0 * deg_celsius + 32
print(fahrenheit)
###Output
68.0
50.0
###Markdown
* A reusable function would come in very handy here!
###Code
def convert_celsius_to_fahrenheit(deg_celsius):
deg_fahrenheit = 9.0/5.0 * deg_celsius + 32
print(f'{deg_celsius} degrees celsius is equivalent to {deg_fahrenheit} degrees fahrenheit')
convert_celsius_to_fahrenheit(20)
convert_celsius_to_fahrenheit(10)
###Output
20 degrees celsius is equivalent to 68.0 degrees fahrenheit
10 degrees celsius is equivalent to 50.0 degrees fahrenheit
###Markdown
* An alternative way to write the same function* Instead of using `print()` we can `return` the result* And store the result in a new variable `result_fahrenheit`
###Code
def convert_celsius_to_fahrenheit(deg_celsius):
deg_fahrenheit = 9.0/5.0 * deg_celsius + 32
return deg_fahrenheit
result_fahrenheit = convert_celsius_to_fahrenheit(22)
print(result_fahrenheit)
###Output
71.6
###Markdown
* Watch out for whitespace rules!* if you use `:` then you must indent (use tab or 4 spaces) on the next line
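For contrast, a quick illustrative sketch of what goes wrong when the body is not indented (running this raises an `IndentationError`):

```python
# missing indentation after the colon -> IndentationError: expected an indented block
def convert_celsius_to_fahrenheit(deg_celsius):
return 9.0/5.0 * deg_celsius + 32
```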
###Code
def convert_celsius_to_fahrenheit(deg_celsius):
deg_fahrenheit = 9.0/5.0 * deg_celsius + 32
return deg_fahrenheit
###Output
_____no_output_____
###Markdown
* If a function returns a value you can pass it to another function
###Code
def add(num1, num2):
return num1 + num2
def square(num):
return num ** 2
result = square(add(1, 1))
print(result)
###Output
4
###Markdown
Comments in code
###Code
def convert_celsius_to_fahrenheit(deg_celsius):
'''
Converts degrees celsius to degrees fahrenheit
Returns a float representing the temperature in degrees fahrenheit.
Parameters:
-----------
deg_celsius: float
a float temperature in degrees celsius e.g. 18.5
'''
deg_fahrenheit = 9.0/5.0 * deg_celsius + 32
return deg_fahrenheit
help(convert_celsius_to_fahrenheit)
###Output
Help on function convert_celsius_to_fahrenheit in module __main__:
convert_celsius_to_fahrenheit(deg_celsius)
Converts degrees celsius to degrees fahrenheit
Returns a float representing the temperature in degrees fahrenheit.
Parameters:
-----------
deg_celsius: float
a float temperature in degrees celsius e.g. 18.5
###Markdown
* Using `#` is another way to add comments to code* Useful to clarify "complex" code and aid your memory.* Won't be picked up by `help()`
###Code
from math import pi
def area_of_circle(radius):
# pi x squared(radius)
area = pi * radius ** 2
return area
help(area_of_circle)
###Output
Help on function area_of_circle in module __main__:
area_of_circle(radius)
###Markdown
In Python Functions can return multiple values
###Code
def list_info(data):
'''
Returns Sum, Length and Mean of list @data
Parameters:
----------
data: list
a list containing numeric data
'''
list_sum = sum(data)
list_length = len(data)
list_mean = list_sum / list_length
return list_sum, list_length, list_mean
data = [1, 2, 3, 4, 5]
results = list_info(data)
print(f"The variable 'results': {results} has type {type(results)}")
data_sum, data_length, data_mean = list_info(data)
print(f'Separate variables: sum {data_sum}, length {data_length}, mean {data_mean}')
###Output
The variable 'results': (15, 5, 3.0) has type <class 'tuple'>
Separate variables: sum 15, length 5, mean 3.0
|
Tuples.ipynb | ###Markdown
Tuples
###Code
x = ('Rama', "Krishna", "Laxminarsimha")
x
print(x)
y = ['Rama', "Krishna", "Laxminarsimha" ]
y
print(y)
x[2]
y[2]
z = (1,2,3,5,8)
z
sum(z)
max(z)
print(max(z))
letters = ['a','b','c']  # avoid naming a variable "list", which shadows the built-in list()
letters.pop()
t = tuple()
dir(tuple)
(x,y) = (4, 'fred')
print(x)
print(y)
(a,b) = (99,98)
print(a)
print(b)
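# (illustrative addition) a classic use of tuple packing/unpacking: swapping two
# variables without a temporary variable
a, b = b, a
print(a)
print(b)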
# the items method in dictionaries returns a LIST of TUPLES
d = dict()
d['csev'] = 2
d['cwen'] = 4
for (k,v) in d.items():
print(k,v)
tups = d.items()
print(tups)
(0,1,2) < (0,2,2)
(0,1,200000) < (0,3,4)
('Jones', 'Sally') < ('Jones', 'Satny')
('Jones','Sally') > ('Adams','Sam')
(0,1,200000) < (0,0,3)
d = {'a':10, 'b':1, 'c': 22}
t = d.items()
t
t.sort()  # raises AttributeError: dict_items has no sort() method - use sorted() instead
t
d = {'a':10, 'c':22, 'b':1}
d.items()
t = sorted(d.items())
t
for k,v in sorted(d.items()):
print(k,v)
for k,v in t:
print(k,v)
c = {'a':10, 'b':1, 'c':22}
tmp = list()
for k,v in c.items():
tmp.append((v,k))
print(tmp)
c = {'a':10, 'b':1, 'c':22}
print(sorted((v,k) for k,v in c.items()))
fhand = open('polarity_pos.txt')
counts = dict()
for line in fhand:
words = line.split()
for word in words:
counts[word] = counts.get(word, 0) + 1
lst = list()
for key,val in counts.items():
lst.append((val,key))
lst.sort(reverse=True)
for val, key in lst[:10]:
    print(key, val)
fhand = open('polarity_pos.txt')
counts = dict()
for line in fhand:
words = line.split()
for word in words:
counts[word] = counts.get(word, 0) + 1
print(sorted((v,k) for k,v in counts.items()))
# Shorter version
c = {'a': 10, 'b':1, 'c':22}
print(sorted((v,k) for k,v in c.items()))
###Output
[(1, 'b'), (10, 'a'), (22, 'c')]
###Markdown
Tuples in Python Welcome! This notebook will teach you about the tuples in the Python Programming Language. By the end of this lab, you'll know the basic tuple operations in Python, including indexing, slicing and sorting. Table of Contents About the Dataset Tuples Indexing Slicing Sorting Quiz on Tuples Estimated time needed: 15 min About the Dataset Imagine you received album recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.

The table has one row for each album and several columns:

- **artist** - Name of the artist
- **album** - Name of the album
- **released_year** - Year the album was released
- **length_min_sec** - Length of the album (hours,minutes,seconds)
- **genre** - Genre of the album
- **music_recording_sales_millions** - Music recording sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **claimed_sales_millions** - Album's claimed sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **date_released** - Date on which the album was released
- **soundtrack** - Indicates if the album is the movie soundtrack (Y) or (N)
- **rating_of_friends** - Indicates the rating from your friends from 1 to 10

The dataset can be seen below:

| Artist | Album | Released | Length | Genre | Music recording sales (millions) | Claimed sales (millions) | Released | Soundtrack | Rating (friends) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Michael Jackson | Thriller | 1982 | 00:42:19 | Pop, rock, R&B | 46 | 65 | 30-Nov-82 | | 10.0 |
| AC/DC | Back in Black | 1980 | 00:42:11 | Hard rock | 26.1 | 50 | 25-Jul-80 | | 8.5 |
| Pink Floyd | The Dark Side of the Moon | 1973 | 00:42:49 | Progressive rock | 24.2 | 45 | 01-Mar-73 | | 9.5 |
| Whitney Houston | The Bodyguard | 1992 | 00:57:44 | Soundtrack/R&B, soul, pop | 26.1 | 50 | 25-Jul-80 | Y | 7.0 |
| Meat Loaf | Bat Out of Hell | 1977 | 00:46:33 | Hard rock, progressive rock | 20.6 | 43 | 21-Oct-77 | | 7.0 |
| Eagles | Their Greatest Hits (1971-1975) | 1976 | 00:43:08 | Rock, soft rock, folk rock | 32.2 | 42 | 17-Feb-76 | | 9.5 |
| Bee Gees | Saturday Night Fever | 1977 | 1:15:54 | Disco | 20.6 | 40 | 15-Nov-77 | Y | 9.0 |
| Fleetwood Mac | Rumours | 1977 | 00:40:01 | Soft rock | 27.9 | 40 | 04-Feb-77 | | 9.5 |

Tuples In Python, there are different data types: string, integer and float. These data types can all be contained in a tuple as follows: Now, let us create your first tuple with string, integer and float.
###Code
# Create your first tuple
tuple1 = ("disco",10,1.2 )
tuple1
###Output
_____no_output_____
###Markdown
The type of variable is a **tuple**.
###Code
# Print the type of the tuple you created
type(tuple1)
###Output
_____no_output_____
###Markdown
Indexing Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number: We can print out each value in the tuple:
###Code
# Print the variable on each index
print(tuple1[0])
print(tuple1[1])
print(tuple1[2])
###Output
disco
10
1.2
###Markdown
We can print out the **type** of each value in the tuple:
###Code
# Print the type of value on each index
print(type(tuple1[0]))
print(type(tuple1[1]))
print(type(tuple1[2]))
###Output
<class 'str'>
<class 'int'>
<class 'float'>
###Markdown
We can also use negative indexing. We use the same table above with corresponding negative values: We can obtain the last element as follows (this time we will not use the print statement to display the values):
###Code
# Use negative index to get the value of the last element
tuple1[-1]
###Output
_____no_output_____
###Markdown
We can display the next two elements as follows:
###Code
# Use negative index to get the value of the second last element
tuple1[-2]
# Use negative index to get the value of the third last element
tuple1[-3]
###Output
_____no_output_____
###Markdown
Concatenate Tuples We can concatenate or combine tuples by using the **+** sign:
###Code
# Concatenate two tuples
tuple2 = tuple1 + ("hard rock", 10)
tuple2
###Output
_____no_output_____
###Markdown
We can slice tuples obtaining multiple values as demonstrated by the figure below: Slicing We can slice tuples, obtaining new tuples with the corresponding elements:
###Code
# Slice from index 0 to index 2
tuple2[0:3]
###Output
_____no_output_____
###Markdown
We can obtain the last two elements of the tuple:
###Code
# Slice from index 3 to index 4
tuple2[3:5]
###Output
_____no_output_____
###Markdown
We can obtain the length of a tuple using the length command:
###Code
# Get the length of tuple
len(tuple2)
###Output
_____no_output_____
###Markdown
This figure shows the number of elements: Sorting Consider the following tuple:
###Code
# A sample tuple
Ratings = (0, 9, 6, 5, 10, 8, 9, 6, 2)
###Output
_____no_output_____
###Markdown
We can sort the values in a tuple and save the result (a new sorted list) to a variable:
###Code
# Sort the tuple
RatingsSorted = sorted(Ratings)
RatingsSorted
###Output
_____no_output_____
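Note that `sorted()` always returns a list. A quick sketch of getting a tuple back, if that is what you need:

```python
# wrap the sorted list in tuple() to get a tuple again
RatingsSortedTuple = tuple(sorted(Ratings))
print(type(RatingsSortedTuple))  # <class 'tuple'>
```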
###Markdown
Nested Tuple A tuple can contain another tuple as well as other more complex data types. This process is called 'nesting'. Consider the following tuple with several elements:
###Code
# Create a nest tuple
NestedT =(1, 2, ("pop", "rock") ,(3,4),("disco",(1,2)))
###Output
_____no_output_____
###Markdown
Each element in the tuple including other tuples can be obtained via an index as shown in the figure:
###Code
# Print element on each index
print("Element 0 of Tuple: ", NestedT[0])
print("Element 1 of Tuple: ", NestedT[1])
print("Element 2 of Tuple: ", NestedT[2])
print("Element 3 of Tuple: ", NestedT[3])
print("Element 4 of Tuple: ", NestedT[4])
###Output
Element 0 of Tuple: 1
Element 1 of Tuple: 2
Element 2 of Tuple: ('pop', 'rock')
Element 3 of Tuple: (3, 4)
Element 4 of Tuple: ('disco', (1, 2))
###Markdown
We can use the second index to access other tuples as demonstrated in the figure: We can access the nested tuples :
###Code
# Print element on each index, including nest indexes
print("Element 2, 0 of Tuple: ", NestedT[2][0])
print("Element 2, 1 of Tuple: ", NestedT[2][1])
print("Element 3, 0 of Tuple: ", NestedT[3][0])
print("Element 3, 1 of Tuple: ", NestedT[3][1])
print("Element 4, 0 of Tuple: ", NestedT[4][0])
print("Element 4, 1 of Tuple: ", NestedT[4][1])
###Output
Element 2, 0 of Tuple: pop
Element 2, 1 of Tuple: rock
Element 3, 0 of Tuple: 3
Element 3, 1 of Tuple: 4
Element 4, 0 of Tuple: disco
Element 4, 1 of Tuple: (1, 2)
###Markdown
We can access strings in the second nested tuples using a third index:
###Code
# Print the first element in the second nested tuples
NestedT[2][1][0]
# Print the second element in the second nested tuples
NestedT[2][1][1]
###Output
_____no_output_____
###Markdown
We can use a tree to visualise the process. Each new index corresponds to a deeper level in the tree: Similarly, we can access elements nested deeper in the tree with a fourth index:
###Code
# Print the first element in the second nested tuples
NestedT[4][1][0]
# Print the second element in the second nested tuples
NestedT[4][1][1]
###Output
_____no_output_____
###Markdown
The following figure shows the relationship of the tree and the element NestedT[4][1][1]: Quiz on Tuples Consider the following tuple:
###Code
# sample tuple
genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock", \
"R&B", "progressive rock", "disco")
genres_tuple
###Output
_____no_output_____
###Markdown
Find the length of the tuple, genres_tuple:
###Code
# Write your code below and press Shift+Enter to execute
len(genres_tuple)
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:len(genres_tuple)--> Access the element, with respect to index 3:
###Code
# Write your code below and press Shift+Enter to execute
genres_tuple[3]
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:genres_tuple[3]--> Use slicing to obtain indexes 3, 4 and 5:
###Code
# Write your code below and press Shift+Enter to execute
genres_tuple[3:6]
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:genres_tuple[3:6]--> Find the first two elements of the tuple genres_tuple:
###Code
# Write your code below and press Shift+Enter to execute
genres_tuple[0:2]
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:genres_tuple[0:2]--> Find the first index of "disco":
###Code
# Write your code below and press Shift+Enter to execute
x = genres_tuple.index('disco')
x
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:genres_tuple.index("disco")--> Generate a sorted List from the Tuple C_tuple=(-5, 1, -3):
###Code
# Write your code below and press Shift+Enter to execute
C_tuple=(-5, 1, -3)
x = sorted(C_tuple)
list(x)
###Output
_____no_output_____
###Markdown
Very similar to a list - the biggest difference is immutability. Once an element is inside a tuple it cannot be reassigned (a workaround is shown after the next cell). Tuples use parentheses: (1,2,3)
###Code
# create a tuple
t=(4,5,6)
my_list = [1,2,3]
type(t) # look at the object type
type(my_list)
len(t) # check the length
t = ('one',2) # able to hold different data types
t[0] # indexing a tuple
t[-1] # more indexing
t = ('a','a','b')
t.count('a') # count how many times an element shows up in the tuple
t.index('a') # find the index of the first occurrence of an element
# tuple is immutable
t[0] = 'New'
my_list[0] = 'New'
my_list
###Output
_____no_output_____
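###Markdown
Although a tuple itself cannot be changed, a common workaround is to convert it to a list, modify the list, and build a new tuple from it. A minimal sketch using plain Python:
###Code
# Convert the tuple to a list, change an element, and create a new tuple
t = (4,5,6)
t_as_list = list(t)   # [4, 5, 6] - lists are mutable
t_as_list[0] = 'New'  # reassign the first element
t = tuple(t_as_list)  # ('New', 5, 6) - a brand new tuple object
t
###Output
_____no_output_____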
###Markdown
Tuples Tuples can contain a collection of objects, but they are immutable, meaning the data cannot be changed. Tuples are like immutable lists. The main use case for tuples is data that should not change, such as a list of week days. This immutability helps ensure data integrity. Creating Tuples
###Code
t1 = (1,2,3)
type(t1)
# Tuples can contain mixed object types
t2 = (1,'two',['three','four','five'],{'key':'value'})
print(type(t2))
t2
t2[0]
t2[2][2]
t2[3]['key']
t3 = (1) # note: this is just the integer 1, not a tuple; a one-element tuple needs a trailing comma, e.g. (1,)
t3
t3 = (1,2,3)
t4 = ('a','b')
t3 + t4
t3 * 5
###Output
_____no_output_____
###Markdown
Immutability
###Code
# tuples are immutable
t2[0]=2 # this will create an error
t2
# although tuples are immutable if the object in the tuple is mutable then data can be changed
t2[3]['key']='no value'
t2
t2[2].append('six')
t2
###Output
_____no_output_____
###Markdown
Indexing and Slicing
###Code
t2[0:2]
t2[-1]
len(t2)
t2[::-1]
###Output
_____no_output_____
###Markdown
Tuples Methods
###Code
# index gives the position of the element
t2.index(1)
# count of elements present in the tuple
t2.count(1)
# cannot delete tuple element
print(t2)
del(t2[2])
t2
t2
del(t2) # the tuple object itself can be deleted
t2 # raises NameError because t2 no longer exists
###Output
_____no_output_____
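###Markdown
Note that `count` simply returns 0 for an element that is not present, while `index` raises a `ValueError`. A small sketch with a fresh example tuple:
###Code
sample = ('a', 'b', 'a', 'c')
print(sample.count('z'))  # 0 - 'z' does not occur in the tuple
try:
    sample.index('z')     # looking up a missing element
except ValueError as err:
    print("index() raised ValueError:", err)
###Output
_____no_output_____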
###Markdown
Iterating through a Tuple
###Code
for name in ('John','Jack'):
print ("Hello ", name)
###Output
Hello John
Hello Jack
###Markdown
Tuple Membership Test
###Code
t3 = ('a','b','c','d')
'a' in t3
'z' in t3
###Output
_____no_output_____
###Markdown
Tuples
###Code
P = (1,2,3)
P
P[0]
P[0] = 'NEW'
thistuple = ("apple", "banana", "cherry")
print(thistuple[-1])
print(thistuple[0])
print(thistuple[-2])
print(thistuple[-3])
tgyfdfxrdectrs = ("apple", "banana", "cherry")
thistuple = ("apple", "banana", "cherry", "orange", "kiwi", "melon", "mango")
print(thistuple[1:6])
thistuple = ("apple", "banana", "cherry", "orange", "kiwi", "melon", "mango")
print(thistuple[:-7])
x = ("apple", "banana", "cherry")
y = list(x)
y.insert(1,"kiwi")
x = tuple(y)
print(y)
print(x)
thistuple = ("apple", "banana", "cherry")
for x in thistuple:
print(x)
thistuple = ("apple", "banana", "cherry")
if "apple" in thistuple:
print("Yes, 'apple' is in the fruits tuple")
thistuple = ("apple", "banana", "cherry")
del thistuple
print(thistuple)
tuple1 = ("a", "b" , "c")
tuple2 = (1, 2, 3)
l=list(tuple2)
tuple3 = tuple1 + tuple2
print(tuple3)
print(l)
###Output
('a', 'b', 'c', 1, 2, 3)
[1, 2, 3]
###Markdown
Tuples
###Code
P = (1,2,3)
P
P[0]
P[0] = 'NEW'
thistuple = ("apple", "banana", "cherry")
print(thistuple[-1])
print(thistuple[-0])
print(thistuple[-2])
print(thistuple[-3])
thistuple = ("apple", "banana", "cherry")
thistuple = ("apple", "banana", "cherry", "orange", "kiwi", "melon", "mango")
print(thistuple[2:5])
thistuple = ("apple", "banana", "cherry", "orange", "kiwi", "melon", "mango")
print(thistuple[2:5])
x = ("apple", "banana", "cherry")
y = list(x)
y[1] = "kiwi"
x = tuple(y)
print(y)
print(x)
thistuple = ("apple", "banana", "cherry")
for x in thistuple:
print(x)
thistuple = ("apple", "banana", "cherry")
if "apple" in thistuple:
print("Yes, 'apple' is in the fruits tuple")
thistuple = ("apple", "banana", "cherry")
del thistuple
tuple1 = ("a", "b" , "c")
tuple2 = (1, 2, 3)
l=list(tuple2)
tuple3 = tuple1 + tuple2
print(tuple3)
print(l)
###Output
('a', 'b', 'c', 1, 2, 3)
[1, 2, 3]
###Markdown
###Code
t = (1,2,3)
t
a = ('1',t,"erica",4.5)
a
type(a)
print(a[1:-1])
print((4,5)+(2,3))
('gada', )*2
del a
a = (3, )* 8
a.count(3)
a.index(3)
print(4 in a)
f = (3,2,5,5,5,3,6,7,8,9)
print(min(f),
max(f),
sorted(f))
###Output
2 9 [2, 3, 3, 5, 5, 5, 6, 7, 8, 9]
###Markdown
[Tuple](https://www.geeksforgeeks.org/tuples-in-python/)
A Tuple is a collection of Python objects separated by commas.
In some ways a tuple is similar to a list in terms of indexing,
nested objects and repetition, but a tuple is immutable, unlike lists which are mutable.
Tuples are generally faster to access than lists since they are immutable.
Just as lists use [ ] and dictionaries use { },
tuples use ( ).
**Sample Input**
###Code
# Code for converting a list and a string into a tuple
list1 = [0, 1, 2]
print(tuple(list1))
print(tuple('python')) # string 'python'
###Output
_____no_output_____
###Markdown
**Sample Output**
```
(0, 1, 2)
('p', 'y', 't', 'h', 'o', 'n')
```
👇Program Code : Python
###Code
tuple1 = (0, 1, 2, 3)
tuple2=("Buggati","RolceRoyce","Strawberry","Buggati")
print(tuple2)
# Concatenating above two
print(tuple1 + tuple2)
# print the length of the tuple
print(len(tuple2))
# print the number of times "Buggati" appears in the tuple
print(tuple2.count("Buggati"))
# print the index of the first occurrence of the given element
print("Index of strawberry tuple data is : ",tuple2.index("Strawberry"))
#tuple repetition
tuple3 = ('python',)*3
print(tuple3)
###Output
_____no_output_____
###Markdown
Tuples in python
* Tuples are similar to lists but they are **IMMUTABLE**: once declared, a tuple cannot be manipulated
* Tuples use parentheses ()

```python
my_tuple = (1,2,3)
```
###Code
mytuple = (10,25,'string')
type(mytuple)
len(mytuple)
mytuple[1]
###Output
_____no_output_____
###Markdown
A tuple has only two methods:
1. count()
2. index()
###Code
# count() gives how many times an element is in the tuple
print(mytuple.count('string'))
# index() gives the index of the first occurrence of the element
print(mytuple.index(10))
# tuple does not support item assignment so can't do this
mytuple[1] = 1
###Output
_____no_output_____ |
09_Recurrent_Neural_Networks/02_Implementing_RNN_for_Spam_Prediction/02_implementing_rnn.ipynb | ###Markdown
Implementing an RNN in TensorFlow----------------------------------This script implements an RNN in TensorFlow to predict spam/ham from texts.We start by loading the necessary libraries and initializing a computation graph in TensorFlow.
###Code
import os
import re
import io
import requests
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from zipfile import ZipFile
from tensorflow.python.framework import ops
ops.reset_default_graph()
# Start a graph
sess = tf.Session()
###Output
_____no_output_____
###Markdown
Next we set the parameters for the RNN model.
###Code
# Set RNN parameters
epochs = 50
batch_size = 250
max_sequence_length = 25
rnn_size = 10
embedding_size = 50
min_word_frequency = 10
learning_rate = 0.0005
dropout_keep_prob = tf.placeholder(tf.float32)
###Output
_____no_output_____
###Markdown
We download and save the data next. First we check if we have saved it before and load it locally; if not, we download it from the internet (UCI machine learning data repository).
###Code
# Download or open data
data_dir = 'temp'
data_file = 'text_data.txt'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
if not os.path.isfile(os.path.join(data_dir, data_file)):
zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
r = requests.get(zip_url)
z = ZipFile(io.BytesIO(r.content))
file = z.read('SMSSpamCollection')
# Format Data
text_data = file.decode()
text_data = text_data.encode('ascii', errors='ignore')
text_data = text_data.decode().split('\n')
# Save data to text file
with open(os.path.join(data_dir, data_file), 'w') as file_conn:
for text in text_data:
file_conn.write("{}\n".format(text))
else:
# Open data from text file
text_data = []
with open(os.path.join(data_dir, data_file), 'r') as file_conn:
for row in file_conn:
text_data.append(row)
text_data = text_data[:-1]
text_data = [x.split('\t') for x in text_data if len(x) >= 1]
[text_data_target, text_data_train] = [list(x) for x in zip(*text_data)]
###Output
_____no_output_____
###Markdown
Next, we process the texts and turn them into numeric representations (words --> indices).
###Code
# Create a text cleaning function
def clean_text(text_string):
text_string = re.sub(r'([^\s\w]|_|[0-9])+', '', text_string)
text_string = " ".join(text_string.split())
text_string = text_string.lower()
return text_string
# Clean texts
text_data_train = [clean_text(x) for x in text_data_train]
# Change texts into numeric vectors
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(max_sequence_length,
min_frequency=min_word_frequency)
text_processed = np.array(list(vocab_processor.fit_transform(text_data_train)))
###Output
_____no_output_____
###Markdown
> Note: there will be a WARNING:... use tensorflow/transform or tf.data. Ignore this for now - there is an issue with getting tensorflow/transform to work. Hopefully this will be fixed soon and the code here will be updated. Now we shuffle and split the texts into train/tests (80% training, 20% testing).
###Code
# Shuffle and split data
text_processed = np.array(text_processed)
text_data_target = np.array([1 if x == 'ham' else 0 for x in text_data_target])
shuffled_ix = np.random.permutation(np.arange(len(text_data_target)))
x_shuffled = text_processed[shuffled_ix]
y_shuffled = text_data_target[shuffled_ix]
# Split train/test set
ix_cutoff = int(len(y_shuffled)*0.80)
x_train, x_test = x_shuffled[:ix_cutoff], x_shuffled[ix_cutoff:]
y_train, y_test = y_shuffled[:ix_cutoff], y_shuffled[ix_cutoff:]
vocab_size = len(vocab_processor.vocabulary_)
print("Vocabulary Size: {:d}".format(vocab_size))
print("80-20 Train Test split: {:d} -- {:d}".format(len(y_train), len(y_test)))
###Output
Vocabulary Size: 933
80-20 Train Test split: 4459 -- 1115
###Markdown
Here we can define our RNN model. We create the placeholders for the data, word embedding matrices (and embedding lookups), and define the rest of the model.The rest of the RNN model will create a dynamic RNN cell (regular RNN type), which will vary the number of RNNs needed for variable input length (different amount of words for input texts), and then output into a fully connected logistic layer to predict spam or ham as output.
###Code
# Create placeholders
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
y_output = tf.placeholder(tf.int32, [None])
# Create embedding
embedding_mat = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0))
embedding_output = tf.nn.embedding_lookup(embedding_mat, x_data)
# Define the RNN cell
# For tensorflow >= 1.0, rnn is put into the tensorflow.contrib directory. Prior versions were not tested.
if tf.__version__[0] >= '1':
cell = tf.contrib.rnn.BasicRNNCell(num_units=rnn_size)
else:
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=rnn_size)
output, state = tf.nn.dynamic_rnn(cell, embedding_output, dtype=tf.float32)
output = tf.nn.dropout(output, dropout_keep_prob)
# Get output of RNN sequence
output = tf.transpose(output, [1, 0, 2])
last = tf.gather(output, int(output.get_shape()[0]) - 1)
weight = tf.Variable(tf.truncated_normal([rnn_size, 2], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[2]))
logits_out = tf.matmul(last, weight) + bias
###Output
_____no_output_____
###Markdown
Next we declare the loss function (softmax cross entropy), an accuracy function, and optimization function (RMSProp).
###Code
# Loss function
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_out, labels=y_output)
loss = tf.reduce_mean(losses)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits_out, 1), tf.cast(y_output, tf.int64)), tf.float32))
optimizer = tf.train.RMSPropOptimizer(learning_rate)
train_step = optimizer.minimize(loss)
###Output
/home/nick/.local/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:108: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
###Markdown
> You may ignore the warning, as the texts are small and our batch size is only 250. If you increase the batch size and/or have longer sequences of texts, this model may consume too much memory. Next we initialize the variables in the computational graph.
###Code
init = tf.global_variables_initializer()
sess.run(init)
train_loss = []
test_loss = []
train_accuracy = []
test_accuracy = []
###Output
_____no_output_____
###Markdown
Now we can start our training!
###Code
# Start training
for epoch in range(epochs):
# Shuffle training data
shuffled_ix = np.random.permutation(np.arange(len(x_train)))
x_train = x_train[shuffled_ix]
y_train = y_train[shuffled_ix]
num_batches = int(len(x_train)/batch_size) + 1
# TO DO CALCULATE GENERATIONS ExACTLY
for i in range(num_batches):
# Select train data
min_ix = i * batch_size
max_ix = np.min([len(x_train), ((i+1) * batch_size)])
x_train_batch = x_train[min_ix:max_ix]
y_train_batch = y_train[min_ix:max_ix]
# Run train step
train_dict = {x_data: x_train_batch, y_output: y_train_batch, dropout_keep_prob:0.5}
sess.run(train_step, feed_dict=train_dict)
# Run loss and accuracy for training
temp_train_loss, temp_train_acc = sess.run([loss, accuracy], feed_dict=train_dict)
train_loss.append(temp_train_loss)
train_accuracy.append(temp_train_acc)
# Run Eval Step
test_dict = {x_data: x_test, y_output: y_test, dropout_keep_prob:1.0}
temp_test_loss, temp_test_acc = sess.run([loss, accuracy], feed_dict=test_dict)
test_loss.append(temp_test_loss)
test_accuracy.append(temp_test_acc)
print('Epoch: {}, Test Loss: {:.2}, Test Acc: {:.2}'.format(epoch+1, temp_test_loss, temp_test_acc))
###Output
Epoch: 1, Test Loss: 0.73, Test Acc: 0.18
Epoch: 2, Test Loss: 0.69, Test Acc: 0.18
Epoch: 3, Test Loss: 0.64, Test Acc: 0.83
Epoch: 4, Test Loss: 0.59, Test Acc: 0.84
Epoch: 5, Test Loss: 0.53, Test Acc: 0.84
Epoch: 6, Test Loss: 0.48, Test Acc: 0.84
Epoch: 7, Test Loss: 0.45, Test Acc: 0.84
Epoch: 8, Test Loss: 0.43, Test Acc: 0.85
Epoch: 9, Test Loss: 0.42, Test Acc: 0.85
Epoch: 10, Test Loss: 0.41, Test Acc: 0.85
Epoch: 11, Test Loss: 0.4, Test Acc: 0.85
Epoch: 12, Test Loss: 0.4, Test Acc: 0.85
Epoch: 13, Test Loss: 0.4, Test Acc: 0.86
Epoch: 14, Test Loss: 0.4, Test Acc: 0.86
Epoch: 15, Test Loss: 0.39, Test Acc: 0.86
Epoch: 16, Test Loss: 0.39, Test Acc: 0.86
Epoch: 17, Test Loss: 0.39, Test Acc: 0.87
Epoch: 18, Test Loss: 0.39, Test Acc: 0.87
Epoch: 19, Test Loss: 0.38, Test Acc: 0.87
Epoch: 20, Test Loss: 0.38, Test Acc: 0.87
Epoch: 21, Test Loss: 0.38, Test Acc: 0.87
Epoch: 22, Test Loss: 0.37, Test Acc: 0.87
Epoch: 23, Test Loss: 0.37, Test Acc: 0.87
Epoch: 24, Test Loss: 0.36, Test Acc: 0.87
Epoch: 25, Test Loss: 0.35, Test Acc: 0.88
Epoch: 26, Test Loss: 0.33, Test Acc: 0.88
Epoch: 27, Test Loss: 0.28, Test Acc: 0.89
Epoch: 28, Test Loss: 0.25, Test Acc: 0.91
Epoch: 29, Test Loss: 0.22, Test Acc: 0.94
Epoch: 30, Test Loss: 0.22, Test Acc: 0.93
Epoch: 31, Test Loss: 0.19, Test Acc: 0.95
Epoch: 32, Test Loss: 0.18, Test Acc: 0.95
Epoch: 33, Test Loss: 0.18, Test Acc: 0.95
Epoch: 34, Test Loss: 0.16, Test Acc: 0.96
Epoch: 35, Test Loss: 0.15, Test Acc: 0.96
Epoch: 36, Test Loss: 0.15, Test Acc: 0.96
Epoch: 37, Test Loss: 0.14, Test Acc: 0.96
Epoch: 38, Test Loss: 0.14, Test Acc: 0.97
Epoch: 39, Test Loss: 0.13, Test Acc: 0.97
Epoch: 40, Test Loss: 0.13, Test Acc: 0.96
Epoch: 41, Test Loss: 0.12, Test Acc: 0.96
Epoch: 42, Test Loss: 0.12, Test Acc: 0.97
Epoch: 43, Test Loss: 0.11, Test Acc: 0.97
Epoch: 44, Test Loss: 0.1, Test Acc: 0.97
Epoch: 45, Test Loss: 0.1, Test Acc: 0.97
Epoch: 46, Test Loss: 0.11, Test Acc: 0.96
Epoch: 47, Test Loss: 0.097, Test Acc: 0.98
Epoch: 48, Test Loss: 0.096, Test Acc: 0.98
Epoch: 49, Test Loss: 0.091, Test Acc: 0.98
Epoch: 50, Test Loss: 0.091, Test Acc: 0.97
###Markdown
Here is matplotlib code to plot the loss and accuracy over the training generations for both the train and test sets.
###Code
%matplotlib inline
# Plot loss over time
epoch_seq = np.arange(1, epochs+1)
plt.plot(epoch_seq, train_loss, 'k--', label='Train Set')
plt.plot(epoch_seq, test_loss, 'r-', label='Test Set')
plt.title('Softmax Loss')
plt.xlabel('Epochs')
plt.ylabel('Softmax Loss')
plt.legend(loc='upper left')
plt.show()
# Plot accuracy over time
plt.plot(epoch_seq, train_accuracy, 'k--', label='Train Set')
plt.plot(epoch_seq, test_accuracy, 'r-', label='Test Set')
plt.title('Test Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluating New Texts Here, we show how to use our trained model to evaluate new texts (which may or may not be spam/ham).
###Code
sample_texts = ['Hi, please respond 1111 asap to claim your change to win now!',
'Hey what are you doing for dinner tonight?',
'New offer, show this text for 50% off of our inagural sale!',
'Can you take the dog to the vet tomorrow?',
'Congratulations! You have been randomly selected to receive account credit!']
###Output
_____no_output_____
###Markdown
Now we clean our sample texts.
###Code
clean_texts = [clean_text(text) for text in sample_texts]
print(clean_texts)
###Output
['hi please respond asap to claim your change to win now', 'hey what are you doing for dinner tonight', 'new offer show this text for off of our inagural sale', 'can you take the dog to the vet tomorrow', 'congratulations you have been randomly selected to receive account credit']
###Markdown
Next, we transform each text as a sequence of words into a sequence of vocabulary indices.
###Code
processed_texts = np.array(list(vocab_processor.transform(clean_texts)))
print(processed_texts)
###Output
[[ 93 99 0 0 1 114 13 524 1 178 21 0 0 0 0 0 0 0
0 0 0 0 0 0 0]
[121 52 20 3 151 12 332 208 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0]
[ 92 376 483 39 69 12 203 15 86 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0]
[ 28 3 104 5 0 1 5 0 143 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0]
[701 3 17 98 0 420 1 318 301 738 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0]]
###Markdown
Now we can run each of the texts through our model and get the output logits.
###Code
# Remember to wrap the resulting logits in a softmax to get probabilities
eval_feed_dict = {x_data: processed_texts, dropout_keep_prob: 1.0}
model_results = sess.run(tf.nn.softmax(logits_out), feed_dict=eval_feed_dict)
print(model_results)
###Output
[[0.89795506 0.10204492]
[0.00980581 0.99019414]
[0.8838587 0.11614131]
[0.00980508 0.990195 ]
[0.88657707 0.11342289]]
###Markdown
Now print results
###Code
categories = ['spam', 'ham']
for ix, result in enumerate(model_results):
prediction = categories[np.argmax(result)]
print('Text: {}, \nPrediction: {}\n'.format(sample_texts[ix], prediction))
###Output
Text: Hi, please respond 1111 asap to claim your change to win now!,
Prediction: spam
Text: Hey what are you doing for dinner tonight?,
Prediction: ham
Text: New offer, show this text for 50% off of our inagural sale!,
Prediction: spam
Text: Can you take the dog to the vet tomorrow?,
Prediction: ham
Text: Congratulations! You have been randomly selected to receive account credit!,
Prediction: spam
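###Markdown
The notebook above does not persist the trained weights, so they are lost once the session closes. A minimal sketch for saving them with a TF 1.x checkpoint, assuming the `sess`, graph, and `data_dir` defined above are still available; the checkpoint filename is just an example:
###Code
# Save the trained variables to a checkpoint so the model can be restored later
saver = tf.train.Saver()
save_path = saver.save(sess, os.path.join(data_dir, 'rnn_spam_model.ckpt'))
print('Model checkpoint written to: {}'.format(save_path))
# Later, with the same graph definition, the weights could be restored via:
# saver.restore(sess, save_path)
###Output
_____no_output_____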
###Markdown
Implementing an RNN in TensorFlow----------------------------------This script implements an RNN in TensorFlow to predict spam/ham from texts.We start by loading the necessary libraries and initializing a computation graph in TensorFlow.
###Code
import os
import re
import io
import requests
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from zipfile import ZipFile
from tensorflow.python.framework import ops
ops.reset_default_graph()
# Start a graph
sess = tf.Session()
###Output
_____no_output_____
###Markdown
Next we set the parameters for the RNN model.
###Code
# Set RNN parameters
epochs = 20
batch_size = 250
max_sequence_length = 25
rnn_size = 10
embedding_size = 50
min_word_frequency = 10
learning_rate = 0.0005
dropout_keep_prob = tf.placeholder(tf.float32)
###Output
_____no_output_____
###Markdown
We download and save the data next. First we check if we have saved it before and load it locally; if not, we download it from the internet (UCI machine learning data repository).
###Code
# Download or open data
data_dir = 'temp'
data_file = 'text_data.txt'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
if not os.path.isfile(os.path.join(data_dir, data_file)):
zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
r = requests.get(zip_url)
z = ZipFile(io.BytesIO(r.content))
file = z.read('SMSSpamCollection')
# Format Data
text_data = file.decode()
text_data = text_data.encode('ascii', errors='ignore')
text_data = text_data.decode().split('\n')
# Save data to text file
with open(os.path.join(data_dir, data_file), 'w') as file_conn:
for text in text_data:
file_conn.write("{}\n".format(text))
else:
# Open data from text file
text_data = []
with open(os.path.join(data_dir, data_file), 'r') as file_conn:
for row in file_conn:
text_data.append(row)
text_data = text_data[:-1]
text_data = [x.split('\t') for x in text_data if len(x) >= 1]
[text_data_target, text_data_train] = [list(x) for x in zip(*text_data)]
###Output
_____no_output_____
###Markdown
Next, we process the texts and turn them into numeric representations (words --> indices).
###Code
# Create a text cleaning function
def clean_text(text_string):
text_string = re.sub(r'([^\s\w]|_|[0-9])+', '', text_string)
text_string = " ".join(text_string.split())
text_string = text_string.lower()
return text_string
# Clean texts
text_data_train = [clean_text(x) for x in text_data_train]
# Change texts into numeric vectors
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(max_sequence_length,
min_frequency=min_word_frequency)
text_processed = np.array(list(vocab_processor.fit_transform(text_data_train)))
###Output
WARNING:tensorflow:From <ipython-input-5-4e6c02d47d3d>:14: VocabularyProcessor.__init__ (from tensorflow.contrib.learn.python.learn.preprocessing.text) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tensorflow/transform or tf.data.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/text.py:154: CategoricalVocabulary.__init__ (from tensorflow.contrib.learn.python.learn.preprocessing.categorical_vocabulary) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tensorflow/transform or tf.data.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/text.py:170: tokenizer (from tensorflow.contrib.learn.python.learn.preprocessing.text) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tensorflow/transform or tf.data.
###Markdown
> Note: there will be a WARNING:... use tensorflow/transform or tf.data. Ignore this for now - there is an issue with getting tensorflow/transform to work. Hopefully this will be fixed soon and the code here will be updated. Now we shuffle and split the texts into train/tests (80% training, 20% testing).
###Code
# Shuffle and split data
text_processed = np.array(text_processed)
text_data_target = np.array([1 if x == 'ham' else 0 for x in text_data_target])
shuffled_ix = np.random.permutation(np.arange(len(text_data_target)))
x_shuffled = text_processed[shuffled_ix]
y_shuffled = text_data_target[shuffled_ix]
# Split train/test set
ix_cutoff = int(len(y_shuffled)*0.80)
x_train, x_test = x_shuffled[:ix_cutoff], x_shuffled[ix_cutoff:]
y_train, y_test = y_shuffled[:ix_cutoff], y_shuffled[ix_cutoff:]
vocab_size = len(vocab_processor.vocabulary_)
print("Vocabulary Size: {:d}".format(vocab_size))
print("80-20 Train Test split: {:d} -- {:d}".format(len(y_train), len(y_test)))
###Output
Vocabulary Size: 933
80-20 Train Test split: 4459 -- 1115
###Markdown
Here we can define our RNN model. We create the placeholders for the data, word embedding matrices (and embedding lookups), and define the rest of the model.The rest of the RNN model will create a dynamic RNN cell (regular RNN type), which will vary the number of RNNs needed for variable input length (different amount of words for input texts), and then output into a fully connected logistic layer to predict spam or ham as output.
###Code
# Create placeholders
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
y_output = tf.placeholder(tf.int32, [None])
# Create embedding
embedding_mat = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0))
embedding_output = tf.nn.embedding_lookup(embedding_mat, x_data)
# Define the RNN cell
# For tensorflow >= 1.0, rnn is put into the tensorflow.contrib directory. Prior versions were not tested.
if tf.__version__[0] >= '1':
cell = tf.contrib.rnn.BasicRNNCell(num_units=rnn_size)
else:
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=rnn_size)
output, state = tf.nn.dynamic_rnn(cell, embedding_output, dtype=tf.float32)
output = tf.nn.dropout(output, dropout_keep_prob)
# Get output of RNN sequence
output = tf.transpose(output, [1, 0, 2])
last = tf.gather(output, int(output.get_shape()[0]) - 1)
weight = tf.Variable(tf.truncated_normal([rnn_size, 2], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[2]))
logits_out = tf.matmul(last, weight) + bias
###Output
_____no_output_____
###Markdown
Next we declare the loss function (softmax cross entropy), an accuracy function, and optimization function (RMSProp).
###Code
# Loss function
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_out, labels=y_output)
loss = tf.reduce_mean(losses)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits_out, 1), tf.cast(y_output, tf.int64)), tf.float32))
optimizer = tf.train.RMSPropOptimizer(learning_rate)
train_step = optimizer.minimize(loss)
###Output
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
###Markdown
> You may ignore the warning, as the texts are small and our batch size is only 250. If you increase the batch size and/or have longer sequences of texts, this model may consume too much memory. Next we initialize the variables in the computational graph.
###Code
init = tf.global_variables_initializer()
sess.run(init)
train_loss = []
test_loss = []
train_accuracy = []
test_accuracy = []
# Start training
for epoch in range(epochs):
# Shuffle training data
shuffled_ix = np.random.permutation(np.arange(len(x_train)))
x_train = x_train[shuffled_ix]
y_train = y_train[shuffled_ix]
num_batches = int(len(x_train)/batch_size) + 1
# TO DO CALCULATE GENERATIONS ExACTLY
for i in range(num_batches):
# Select train data
min_ix = i * batch_size
max_ix = np.min([len(x_train), ((i+1) * batch_size)])
x_train_batch = x_train[min_ix:max_ix]
y_train_batch = y_train[min_ix:max_ix]
# Run train step
train_dict = {x_data: x_train_batch, y_output: y_train_batch, dropout_keep_prob:0.5}
sess.run(train_step, feed_dict=train_dict)
# Run loss and accuracy for training
temp_train_loss, temp_train_acc = sess.run([loss, accuracy], feed_dict=train_dict)
train_loss.append(temp_train_loss)
train_accuracy.append(temp_train_acc)
# Run Eval Step
test_dict = {x_data: x_test, y_output: y_test, dropout_keep_prob:1.0}
temp_test_loss, temp_test_acc = sess.run([loss, accuracy], feed_dict=test_dict)
test_loss.append(temp_test_loss)
test_accuracy.append(temp_test_acc)
print('Epoch: {}, Test Loss: {:.2}, Test Acc: {:.2}'.format(epoch+1, temp_test_loss, temp_test_acc))
###Output
Epoch: 1, Test Loss: 0.75, Test Acc: 0.17
Epoch: 2, Test Loss: 0.71, Test Acc: 0.17
Epoch: 3, Test Loss: 0.64, Test Acc: 0.82
Epoch: 4, Test Loss: 0.57, Test Acc: 0.83
Epoch: 5, Test Loss: 0.51, Test Acc: 0.83
Epoch: 6, Test Loss: 0.47, Test Acc: 0.83
Epoch: 7, Test Loss: 0.44, Test Acc: 0.84
Epoch: 8, Test Loss: 0.43, Test Acc: 0.84
Epoch: 9, Test Loss: 0.42, Test Acc: 0.84
Epoch: 10, Test Loss: 0.42, Test Acc: 0.84
Epoch: 11, Test Loss: 0.41, Test Acc: 0.85
Epoch: 12, Test Loss: 0.41, Test Acc: 0.85
Epoch: 13, Test Loss: 0.41, Test Acc: 0.85
Epoch: 14, Test Loss: 0.4, Test Acc: 0.85
Epoch: 15, Test Loss: 0.4, Test Acc: 0.86
Epoch: 16, Test Loss: 0.4, Test Acc: 0.86
Epoch: 17, Test Loss: 0.39, Test Acc: 0.86
Epoch: 18, Test Loss: 0.38, Test Acc: 0.86
Epoch: 19, Test Loss: 0.36, Test Acc: 0.87
Epoch: 20, Test Loss: 0.33, Test Acc: 0.87
###Markdown
Here is matplotlib code to plot the loss and accuracy over the training generations for both the train and test sets.
###Code
%matplotlib inline
# Plot loss over time
epoch_seq = np.arange(1, epochs+1)
plt.plot(epoch_seq, train_loss, 'k--', label='Train Set')
plt.plot(epoch_seq, test_loss, 'r-', label='Test Set')
plt.title('Softmax Loss')
plt.xlabel('Epochs')
plt.ylabel('Softmax Loss')
plt.legend(loc='upper left')
plt.show()
# Plot accuracy over time
plt.plot(epoch_seq, train_accuracy, 'k--', label='Train Set')
plt.plot(epoch_seq, test_accuracy, 'r-', label='Test Set')
plt.title('Test Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
###Output
_____no_output_____ |
exercise/chap_5/10_svm_reg_california.ipynb | ###Markdown
Important points:
* Scaled the data; no other preprocessing of the data was applied
* Used root mean squared error (RMSE) as the metric (ignore the use of the word "accuracy" in the printed output)
* Performed RandomizedSearchCV to get better $\gamma$ and C values
* Achieved an RMSE of 0.48 on the training data and 0.55 on validation
###Code
import numpy as np
import pandas as pd
import os
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
np.random.seed(42)
%matplotlib inline
# Workaround for urllib error
# https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error
# import ssl
# ssl._create_default_https_context = ssl._create_unverified_context
from sklearn.datasets import fetch_california_housing
data = fetch_california_housing()
x = data.data
y = data.target
from sklearn.model_selection import train_test_split
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.3, random_state=42)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_valid = scaler.transform(x_valid)
def score(y_true, y_pred, train=False):
accuracy = np.sqrt(mean_squared_error(y_true, y_pred))
if train:
print("Train accuracy:{}".format(accuracy))
else:
print("Val accuracy:{}".format(accuracy))
from sklearn.svm import SVR
svr = SVR(kernel='linear')
svr.fit(x_train, y_train)
# Looks like it is working well
score(y_train, svr.predict(x_train), True)
score(y_valid, svr.predict(x_valid))
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
# Taken from solutions
params = {"gamma": reciprocal(0.0001, 5), "C": uniform(1, 10),
"kernel": ['linear', 'rbf']}
rand_search = RandomizedSearchCV(SVR(), params, n_iter = 30, verbose=2,
cv = 4, random_state=42, n_jobs=3)
rand_search.fit(x_train, y_train)
rand_search.best_estimator_
# Working better than the solution :-)
y_pred = rand_search.best_estimator_.predict(x_valid)
score(y_valid, y_pred)
y_pred = rand_search.best_estimator_.predict(x_train)
score(y_train, y_pred, True)
###Output
Train accuracy:0.4821623953075446
|
docs/user_guide.ipynb | ###Markdown
Creating, setting up, and initializing a model instance The way models are created, set up, and initialized matches [PyMT](https://pymt.readthedocs.io/en/latest/quickstart.htmlrun-a-model) as much as possible. There are three steps:
- instantiate (create a python object that represents the model)
- setup (create a container with the right model, directories, and configuration files)
- initialize (start the model inside the container)

To a new user, these steps can be confusing as they seem to be related to "starting a model". However, you will see that there are some useful things that we can do in between each of these steps. As a side effect, splitting these steps also makes it easier to run a lot of models in parallel (e.g. for calibration). Experience tells us that you will quickly get used to it. When a model instance is created, we have to specify the version and pass in a suitable parameter set and forcing.
###Code
model_instance = ewatercycle.models.Wflow(
version="2020.1.1", parameter_set=parameter_set, forcing=forcing
)
###Output
WARNING:ewatercycle.models.wflow:Config file from parameter set is missing API section, adding section
WARNING:ewatercycle.models.wflow:Config file from parameter set is missing RiverRunoff option in API section, added it with value '2, m/s option'
###Markdown
In some specific cases the parameter set (e.g. for marrmot) or the forcing (e.g. when it is already included in the parameter set) is not needed. Most models have a variety of parameters that can be set. An opinionated subset of these parameters is exposed through the eWaterCycle API. We focus on those settings that are relevant from a scientific point of view and prefer to hide technical settings. These parameters and their default values can be inspected as follows:
###Code
model_instance.parameters
###Output
_____no_output_____
###Markdown
The start date and end date are automatically set based on the forcing data. Alternative values for each of these parameters can be passed on to the setup function:
###Code
cfg_file, cfg_dir = model_instance.setup(end_time="1990-12-15T00:00:00Z")
###Output
_____no_output_____
###Markdown
The `setup` function does the following:
- Creates a config directory which serves as the current working directory for the model instance
- Creates a configuration file in this directory based on the settings
- Starts a container with the requested model version and access to the forcing and parameter sets
- Input is mounted read-only, the working directory is mounted read-write (if a model cannot cope with inputs outside the working directory, the input will be copied)
- Setup will complain about incompatible model version, parameter_set, and forcing

After `setup` but before `initialize` everything is good-to-go, but nothing has been done yet. This is an opportunity to inspect the generated configuration file, and make any changes manually that could not be done through the setup method. To modify the config file: print the path, open it in an editor, and save:
###Code
print(cfg_file)
###Output
/scratch/shared/ewatercycle/user_guide/wflow_20210720_122650/wflow_ewatercycle.ini
###Markdown
Once you're happy with the setup, it is time to initialize the model. You'll have to pass in the config file, even if you've not made any changes:
###Code
model_instance.initialize(cfg_file) # for some models, this step can take some time
###Output
_____no_output_____
###Markdown
Running (and interacting with) a model A model instance can be controlled by calling functions for running a single timestep (`update`), setting variables, and getting variables. Besides the rather low-level BMI functions like `get_value` and `set_value`, we also added convenience functions such as `get_value_as_xarray`, `get_value_at_coords`, `time_as_datetime`, and `time_as_isostr`. These make it even more pleasant to interact with the model. For example, to run our model instance from start to finish, fetching the value of variable `discharge` at the location of a grdc station:
###Code
grdc_latitude = 51.756918
grdc_longitude = 6.395395
output = []
while model_instance.time < model_instance.end_time:
model_instance.update()
discharge = model_instance.get_value_at_coords(
"RiverRunoff", lon=[grdc_longitude], lat=[grdc_latitude]
)[0]
output.append(discharge)
# Here you could do whatever you like, e.g. update soil moisture values before doing the next timestep.
print(
model_instance.time_as_isostr, end="\r"
) # "\r" clears the output before printing the next timestamp
###Output
1990-12-15T00:00:00Z
###Markdown
We can also get the entire model field at a single time step. To simply plot it:
###Code
model_instance.get_value_as_xarray("RiverRunoff").plot()
###Output
_____no_output_____
###Markdown
If you want to know which variables are available, you can use
###Code
model_instance.output_var_names
###Output
_____no_output_____
###Markdown
Destroying the model A model instance running in a container can take up quite a bit of resources on the system. When you're done with an experiment, it is good practice to always finalize the model. This will make sure the model properly performs any tear-down tasks and eventually the container will be destroyed.
###Code
model_instance.finalize()
###Output
_____no_output_____
###Markdown
Observations eWaterCycle also includes utilities to easily load observations. Currently, eWaterCycle systems provide access to GRDC and USGS data, and we're hoping to expand this in the future.
###Code
import ewatercycle.observation.grdc
###Output
_____no_output_____
###Markdown
To load GRDC station data:
###Code
grdc_station_id = "6335020"
observations, metadata = ewatercycle.observation.grdc.get_grdc_data(
station_id=grdc_station_id,
start_time="1990-01-01T00:00:00Z", # or: model_instance.start_time_as_isostr
end_time="1990-12-15T00:00:00Z",
column="GRDC",
)
observations.head()
###Output
GRDC station 6335020 is selected. The river name is: RHINE RIVER.The coordinates are: (51.756918, 6.395395).The catchment area in km2 is: 159300.0. There are 0 missing values during 1990-01-01T00:00:00Z_1990-12-15T00:00:00Z at this station. See the metadata for more information.
###Markdown
Since not all GRDC stations are complete, some information is stored in metadata to inform you about the data.
###Code
print(metadata)
###Output
{'grdc_file_name': '/lustre1/0/wtrcycle/comparison/GRDC/GRDC_GCOSGTN-H_27_03_2019/6335020_Q_Day.Cmd.txt', 'id_from_grdc': 6335020, 'file_generation_date': '2019-03-27', 'river_name': 'RHINE RIVER', 'station_name': 'REES', 'country_code': 'DE', 'grdc_latitude_in_arc_degree': 51.756918, 'grdc_longitude_in_arc_degree': 6.395395, 'grdc_catchment_area_in_km2': 159300.0, 'altitude_masl': 8.0, 'dataSetContent': 'MEAN DAILY DISCHARGE (Q)', 'units': 'm³/s', 'time_series': '1814-11 - 2016-12', 'no_of_years': 203, 'last_update': '2018-05-24', 'nrMeasurements': 'NA', 'UserStartTime': '1990-01-01T00:00:00Z', 'UserEndTime': '1990-12-15T00:00:00Z', 'nrMissingData': 0}
###Markdown
Analysis To easily analyse model output, eWaterCycle also includes an `analysis` module.
###Code
import ewatercycle.analysis
###Output
_____no_output_____
###Markdown
For example, we will plot a hydrograph of the model run and GRDC observations. To this end, we combine the two timeseries in a single dataframe:
###Code
combined_discharge = observations
combined_discharge["wflow"] = output
ewatercycle.analysis.hydrograph(
discharge=combined_discharge,
reference="GRDC",
)
###Output
_____no_output_____
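###Markdown
Beyond the hydrograph plot, a simple numerical check can be made directly on the combined dataframe. A minimal sketch, assuming the `combined_discharge` dataframe with the "GRDC" and "wflow" columns created above; the Nash-Sutcliffe efficiency is computed here with plain numpy rather than an eWaterCycle function:
###Code
import numpy as np
# Nash-Sutcliffe efficiency: 1 means a perfect match with the observations,
# 0 means the model is no better than the mean of the observations
obs = combined_discharge["GRDC"].to_numpy()
sim = combined_discharge["wflow"].to_numpy()
nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
print("Nash-Sutcliffe efficiency: {:.2f}".format(nse))
###Output
_____no_output_____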
###Markdown
User guide This user manual will explain how the eWaterCycle Python package can be used to perform hydrological experiments. We will walk through the following chapters:
- parameter sets
- forcing data
- model instances
- using observations
- analysis

Each of these chapters corresponds to a so-called "subpackage" of the eWaterCycle Python package. Before we continue, however, we will briefly explain the configuration file. **Configuration** To be able to find all needed data and models, eWaterCycle comes with a configuration object. This configuration contains system settings for eWaterCycle (which container technology to use, where the data is located, etc.). In general these should not need to be changed by the user for a specific experiment, and ideally a user would never need to touch this configuration on a properly managed system. However, it is good to know that it is there. You can see the default configuration on your system like so:
###Code
from ewatercycle import CFG
CFG
###Output
_____no_output_____
###Markdown
Note: a path on the local filesystem is always denoted as "dir" (short for directory), instead of folder, path, or location. Especially location can be confusing in the context of geospatial modeling. It is also possible to store and load custom configuration files. For more information, see [system setup](https://ewatercycle.readthedocs.io/en/latest/system_setup.htmlconfigure-ewatercycle) Parameter sets Parameter sets are an essential part of many hydrological models, and for the eWaterCycle package as well.
###Code
import ewatercycle.parameter_sets
###Output
_____no_output_____
###Markdown
The default [system setup](https://ewatercycle.readthedocs.io/en/latest/system_setup.htmldownload-example-parameter-sets) includes a number of example parameter sets that can be used directly. System administrators can also add available parameter sets that are globally available to all users. In the future, we're hoping to add functionality to fetch new parameter sets using a DOI as well. To see the available parameter sets:
###Code
ewatercycle.parameter_sets.available_parameter_sets()
###Output
_____no_output_____
###Markdown
Since most parameter sets are model specific, you can filter the results as well:
###Code
ewatercycle.parameter_sets.available_parameter_sets(target_model="wflow")
###Output
_____no_output_____
###Markdown
Once you have found a suitable parameter set, you can load it and see some more details:
###Code
parameter_set = ewatercycle.parameter_sets.get_parameter_set("wflow_rhine_sbm_nc")
print(parameter_set)
###Output
Parameter set
-------------
name=wflow_rhine_sbm_nc
directory=/gpfs/work1/0/wtrcycle/parameter-sets/wflow_rhine_sbm_nc
config=/gpfs/work1/0/wtrcycle/parameter-sets/wflow_rhine_sbm_nc/wflow_sbm_NC.ini
doi=N/A
target_model=wflow
supported_model_versions={'2020.1.1', '2020.1.2'}
###Markdown
or you can access individual attributes of the parameter sets
###Code
parameter_set.supported_model_versions
###Output
_____no_output_____
###Markdown
Should you wish to configure your own parameter set (e.g. for PCRGlobWB in this case), this is also possible:
###Code
custom_parameter_set = ewatercycle.parameter_sets.ParameterSet(
name="custom_parameter_set",
directory="~/ewatercycle/docs/examples/parameter-sets/pcrglobwb_rhinemeuse_30min",
config="~/ewatercycle/docs/examples/parameter-sets/pcrglobwb_rhinemeuse_30min/setup_natural_test.ini",
target_model="pcrglobwb",
doi="https://doi.org/10.5281/zenodo.1045339",
supported_model_versions={"setters"},
)
###Output
_____no_output_____
###Markdown
As you can see, an eWaterCycle parameter set is defined fully by a directory and a configuration file. The configuration file typically informs the model about the structure of the parameter set (e.g. "what is the filename of the land use data"). It is possible to change these settings later, when [setting up the model](Models). Forcing data eWaterCycle can load or generate forcing data for a model using the `forcing` module.
###Code
import ewatercycle.forcing
###Output
_____no_output_____
###Markdown
Existing forcing from external source We first show how existing forcing data can be loaded with eWaterCycle. The wflow example parameter set already includes forcing data that was generated manually by the scientists at Deltares.
###Code
forcing = ewatercycle.forcing.load_foreign(
directory=str(parameter_set.directory),
target_model="wflow",
start_time="1991-01-01T00:00:00Z",
end_time="1991-12-31T00:00:00Z",
shape=None,
forcing_info=dict(
# Additional information about the external forcing data needed for the model configuration
netcdfinput="inmaps.nc",
Precipitation="/P",
EvapoTranspiration="/PET",
Temperature="/TEMP",
),
)
print(forcing)
###Output
Forcing data for Wflow
----------------------
Directory: /gpfs/work1/0/wtrcycle/parameter-sets/wflow_rhine_sbm_nc
Start time: 1991-01-01T00:00:00Z
End time: 1991-12-31T00:00:00Z
Shapefile: None
Additional information for model config:
- netcdfinput: inmaps.nc
- Precipitation: /P
- Temperature: /TEMP
- EvapoTranspiration: /PET
- Inflow: None
###Markdown
As you can see, the forcing consists of a generic part which is the same for all eWaterCycle models, and a model-specific part (`forcing_info`). If you're familiar with wflow, you might recognize that the model-specific settings map directly to wflow configuration settings. Generating forcing data In most cases, you will not have access to tailor-made forcing data, and manually pre-processing existing datasets can be quite a pain. eWaterCycle includes a forcing generator that can do all the required steps to go from the available datasets (ERA5, ERA-Interim, etc.) to whatever format the models require. This is done through [ESMValTool recipes](https://docs.esmvaltool.org/en/latest/recipes/recipe_hydrology.html). For some models (e.g. lisflood) additional computations are done, as some steps require data and/or code that is not available to ESMValTool. Apart from some standard parameters (start time, datasets, etc.), the forcing generator sometimes requires additional model-specific options. For our wflow example case, we need to pass the DEM file to the ESMValTool recipe as well. All model-specific options are listed in the [API documentation](https://ewatercycle.readthedocs.io/en/latest/apidocs/ewatercycle.forcing.htmlewatercycle.forcing.generate). ESMValTool configuration As eWaterCycle relies on ESMValTool for processing forcing data, configuration for forcing is mostly deferred to the ESMValTool configuration file. Which ESMValTool configuration file to use can be specified in the system setup.
###Code
forcing = ewatercycle.forcing.generate(
target_model="wflow",
dataset="ERA5",
start_time="1990-01-01T00:00:00Z",
end_time="1990-01-31T00:00:00Z",
shape="~/GitHub/ewatercycle/docs/examples/data/Rhine/Rhine.shp",
model_specific_options={
"dem_file": f"{parameter_set.directory}/staticmaps/wflow_dem.map",
},
)
print(forcing)
###Output
{'auxiliary_data_dir': PosixPath('/projects/0/wtrcycle/comparison/recipes_auxiliary_datasets'),
'compress_netcdf': False,
'config_developer_file': None,
'config_file': PosixPath('/home/fakhereh/.esmvaltool/config-user.yml'),
'drs': {'CMIP5': 'default', 'CMIP6': 'default'},
'exit_on_warning': False,
'extra_facets_dir': (),
'log_level': 'info',
'max_parallel_tasks': 1,
'output_dir': PosixPath('/scratch-shared/ewatercycle/recipe_wflow_20211129_150429'),
'output_file_type': 'png',
'plot_dir': PosixPath('/scratch-shared/ewatercycle/recipe_wflow_20211129_150429/plots'),
'preproc_dir': PosixPath('/scratch-shared/ewatercycle/recipe_wflow_20211129_150429/preproc'),
'profile_diagnostic': False,
'remove_preproc_dir': True,
'rootpath': {'CMIP5': [PosixPath('/home/fakhereh/cmip5_inputpath1'),
PosixPath('/home/fakhereh/cmip5_inputpath2')],
'CMIP6': [PosixPath('/home/fakhereh/cmip6_inputpath1'),
PosixPath('/home/fakhereh/cmip6_inputpath2')],
'OBS6': [PosixPath('/projects/0/wtrcycle/comparison/obs6')],
'RAWOBS': [PosixPath('/projects/0/wtrcycle/comparison/rawobs')],
'default': [PosixPath('/projects/0/wtrcycle/comparison')]},
'run_dir': PosixPath('/scratch-shared/ewatercycle/recipe_wflow_20211129_150429/run'),
'save_intermediary_cubes': False,
'work_dir': PosixPath('/scratch-shared/ewatercycle/recipe_wflow_20211129_150429/work'),
'write_netcdf': True,
'write_plots': True}
Shapefile /gpfs/home2/fakhereh/GitHub/ewatercycle/docs/examples/data/Rhine/Rhine.shp is not in forcing directory /gpfs/scratch1/shared/ewatercycle/recipe_wflow_20211129_150429/work/wflow_daily/script. So, it won't be saved in /gpfs/scratch1/shared/ewatercycle/recipe_wflow_20211129_150429/work/wflow_daily/script/ewatercycle_forcing.yaml.
Forcing data for Wflow
----------------------
Directory: /gpfs/scratch1/shared/ewatercycle/recipe_wflow_20211129_150429/work/wflow_daily/script
Start time: 1990-01-01T00:00:00Z
End time: 1990-01-31T00:00:00Z
Shapefile: /gpfs/home2/fakhereh/GitHub/ewatercycle/docs/examples/data/Rhine/Rhine.shp
Additional information for model config:
- netcdfinput: wflow_ERA5_Rhine_1990_1990.nc
- Precipitation: /pr
- Temperature: /tas
- EvapoTranspiration: /pet
- Inflow: None
###Markdown
Generated forcing is automatically saved to the ESMValTool output directory. A `yaml` file is stored there as well, such that you can easily reload the forcing later without having to generate it again. `ewatercycle_forcing.yaml`:

```yaml
!WflowForcing
start_time: '1990-01-01T00:00:00Z'
end_time: '1990-12-31T00:00:00Z'
shape:
netcdfinput: wflow_ERA5_Rhine_1990_1990.nc
Precipitation: /pr
EvapoTranspiration: /pet
Temperature: /tas
Inflow:
```
###Code
reloaded_forcing = ewatercycle.forcing.load(
directory="/scratch-shared/ewatercycle/recipe_wflow_20211129_103921/work/wflow_daily/script"
)
###Output
_____no_output_____
###Markdown
Models
###Code
import ewatercycle.models
###Output
_____no_output_____
###Markdown
eWaterCycle currently integrates the following models:
* [wflow](https://ewatercycle.readthedocs.io/en/latest/examples/wflow.html)
* [pcrglobwb](https://ewatercycle.readthedocs.io/en/latest/examples/pcrglobwb.html)
* [marrmot M01](https://ewatercycle.readthedocs.io/en/latest/examples/marrmotm01.html)
* [marrmot M14](https://ewatercycle.readthedocs.io/en/latest/examples/marrmotm14.html)
* [lisflood](https://ewatercycle.readthedocs.io/en/latest/examples/lisflood.html)

and we're expecting to add more models soon. The process for adding new models is documented in [Adding models](https://ewatercycle.readthedocs.io/en/latest/adding_models.html). Model versions To help with reproducibility, the version of a model must always be specified when creating a model instance. The available versions can be seen like so:
###Code
import ewatercycle.models
ewatercycle.models.Wflow.available_versions
###Output
_____no_output_____
###Markdown
Creating, setting up, and initializing a model instance The way models are created, set up, and initialized matches [PyMT](https://pymt.readthedocs.io/en/latest/quickstart.htmlrun-a-model) as much as possible. There are three steps:
- instantiate (create a python object that represents the model)
- setup (create a container with the right model, directories, and configuration files)
- initialize (start the model inside the container)

To a new user, these steps can be confusing as they seem to be related to "starting a model". However, you will see that there are some useful things that we can do in between each of these steps. As a side effect, splitting these steps also makes it easier to run a lot of models in parallel (e.g. for calibration). Experience tells us that you will quickly get used to it. When a model instance is created, we have to specify the version and pass in a suitable parameter set and forcing.
###Code
model_instance = ewatercycle.models.Wflow(
version="2020.1.2", parameter_set=parameter_set, forcing=forcing
)
###Output
Config file from parameter set is missing API section, adding section
Config file from parameter set is missing RiverRunoff option in API section, added it with value '2, m/s option'
###Markdown
In some specific cases the parameter set (e.g. for marrmot) or the forcing (e.g. when it is already included in the parameter set) is not needed. Most models have a variety of parameters that can be set. An opinionated subset of these parameters is exposed through the eWaterCycle API. We focus on those settings that are relevant from a scientific point of view and prefer to hide technical settings. These parameters and their default values can be inspected as follows:
###Code
model_instance.parameters
###Output
_____no_output_____
###Markdown
The start date and end date are automatically set based on the forcing data. Alternative values for each of these parameters can be passed on to the setup function:
###Code
cfg_file, cfg_dir = model_instance.setup(end_time="1990-12-15T00:00:00Z")
###Output
Running /projects/0/wtrcycle/singularity-images/ewatercycle-wflow-grpc4bmi_2020.1.2.sif singularity container on port 35805
###Markdown
The `setup` function does the following:
- Creates a config directory which serves as the current working directory for the model instance
- Creates a configuration file in this directory based on the settings
- Starts a container with the requested model version and access to the forcing and parameter sets
- Input is mounted read-only, the working directory is mounted read-write (if a model cannot cope with inputs outside the working directory, the input will be copied)
- Setup will complain about incompatible model version, parameter_set, and forcing

After `setup` but before `initialize` everything is good-to-go, but nothing has been done yet. This is an opportunity to inspect the generated configuration file, and make any changes manually that could not be done through the setup method. To modify the config file: print the path, open it in an editor, and save:
###Code
model_instance.work_dir
print(cfg_file)
###Output
/gpfs/scratch1/shared/ewatercycle/wflow_20211129_150535/wflow_ewatercycle.ini
###Markdown
Once you're happy with the setup, it is time to initialize the model. You'll have to pass in the config file, even if you've not made any changes:
###Code
model_instance.initialize(cfg_file) # for some models, this step can take some time
###Output
_____no_output_____
###Markdown
Running (and interacting with) a model A model instance can be controlled by calling functions for running a single timestep (`update`), setting variables, and getting variables. Besides the rather low-level BMI functions like `get_value` and `set_value`, we also added convenience functions such as `get_value_as_xarray`, `get_value_at_coords`, `time_as_datetime`, and `time_as_isostr`. These make it even more pleasant to interact with the model. For example, to run our model instance from start to finish, fetching the value of variable `discharge` at the location of a grdc station:
###Code
grdc_latitude = 51.756918
grdc_longitude = 6.395395
output = []
while model_instance.time < model_instance.end_time:
model_instance.update()
discharge = model_instance.get_value_at_coords(
"RiverRunoff", lon=[grdc_longitude], lat=[grdc_latitude]
)[0]
output.append(discharge)
# Here you could do whatever you like, e.g. update soil moisture values before doing the next timestep.
print(
model_instance.time_as_isostr, end="\r"
) # "\r" clears the output before printing the next timestamp
###Output
1990-12-15T00:00:00Z
###Markdown
We can also get the entire model field at a single time step. To simply plot it:
###Code
model_instance.get_value_as_xarray("RiverRunoff").plot()
###Output
_____no_output_____
###Markdown
If you want to know which variables are available, you can use
###Code
model_instance.output_var_names
###Output
_____no_output_____
###Markdown
Destroying the model A model instance running in a container can take up quite a bit of resources on the system. When you're done with an experiment, it is good practice to always finalize the model. This will make sure the model properly performs any tear-down tasks and eventually the container will be destroyed.
###Code
model_instance.finalize()
###Output
_____no_output_____
###Markdown
Observations eWaterCycle also includes utilities to easily load observations. Currently, eWaterCycle systems provide access to GRDC and USGS data, and we're hoping to expand this in the future.
###Code
import ewatercycle.observation.grdc
###Output
_____no_output_____
###Markdown
To load GRDC station data:
###Code
grdc_station_id = "6335020"
observations, metadata = ewatercycle.observation.grdc.get_grdc_data(
station_id=grdc_station_id,
start_time="1990-01-01T00:00:00Z", # or: model_instance.start_time_as_isostr
end_time="1990-12-15T00:00:00Z",
column="GRDC",
)
observations.head()
###Output
GRDC station 6335020 is selected. The river name is: RHINE RIVER.The coordinates are: (51.756918, 6.395395).The catchment area in km2 is: 159300.0. There are 0 missing values during 1990-01-01T00:00:00Z_1990-12-15T00:00:00Z at this station. See the metadata for more information.
###Markdown
Since not all GRDC stations are complete, some information is stored in metadata to inform you about the data.
###Code
print(metadata)
###Output
{'grdc_file_name': '/gpfs/work1/0/wtrcycle/GRDC/GRDC_GCOSGTN-H_27_03_2019/6335020_Q_Day.Cmd.txt', 'id_from_grdc': 6335020, 'file_generation_date': '2019-03-27', 'river_name': 'RHINE RIVER', 'station_name': 'REES', 'country_code': 'DE', 'grdc_latitude_in_arc_degree': 51.756918, 'grdc_longitude_in_arc_degree': 6.395395, 'grdc_catchment_area_in_km2': 159300.0, 'altitude_masl': 8.0, 'dataSetContent': 'MEAN DAILY DISCHARGE (Q)', 'units': 'm³/s', 'time_series': '1814-11 - 2016-12', 'no_of_years': 203, 'last_update': '2018-05-24', 'nrMeasurements': 73841, 'UserStartTime': '1990-01-01T00:00:00Z', 'UserEndTime': '1990-12-15T00:00:00Z', 'nrMissingData': 0}
###Markdown
Analysis
To easily analyse model output, eWaterCycle also includes an `analysis` module.
###Code
import ewatercycle.analysis
###Output
_____no_output_____
###Markdown
For example, we will plot a hydrograph of the model run and GRDC observations. To this end, we combine the two timeseries in a single dataframe
###Code
combined_discharge = observations
combined_discharge["wflow"] = output
ewatercycle.analysis.hydrograph(
discharge=combined_discharge,
reference="GRDC",
)
###Output
_____no_output_____
###Markdown
User guide
This user manual will explain how the eWaterCycle Python package can be used to perform hydrological experiments. We will walk through the following chapters:
- parameter sets
- forcing data
- model instances
- using observations
- analysis

Each of these chapters corresponds to a so-called "subpackage" of the eWaterCycle Python package. Before we continue, however, we will briefly explain the configuration file.

**Configuration**
To be able to find all needed data and models, eWaterCycle comes with a configuration object. This configuration contains system settings for eWaterCycle (which container technology to use, where the data is located, etc.). In general these should not need to be changed by the user for a specific experiment, and ideally a user would never need to touch this configuration on a properly managed system. However, it is good to know that it is there. You can see the default configuration on your system like so:
###Code
from ewatercycle import CFG
CFG
###Output
_____no_output_____
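Under the hood this configuration is stored in a YAML file. Purely as an illustration of the kind of settings it holds (the key names and paths below are assumptions, not taken from a real system -- check your own file or the system setup documentation):

```yaml
# Illustrative sketch only; key names and paths are assumed.
container_engine: singularity
output_dir: /scratch/shared/ewatercycle
grdc_location: /projects/0/wtrcycle/GRDC
parameterset_dir: /scratch/shared/ewatercycle/parameter-sets
```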
###Markdown
Note: a path on the local filesystem is always denoted as "dir" (short for directory), instead of folder, path, or location. Especially "location" can be confusing in the context of geospatial modeling. It is also possible to store and load custom configuration files. For more information, see [system setup](https://ewatercycle.readthedocs.io/en/latest/system_setup.html#configure-ewatercycle).

Parameter sets
Parameter sets are an essential part of many hydrological models, and for the eWaterCycle package as well.
###Code
import ewatercycle.parameter_sets
###Output
_____no_output_____
###Markdown
The default [system setup](https://ewatercycle.readthedocs.io/en/latest/system_setup.html#download-example-parameter-sets) includes a number of example parameter sets that can be used directly. System administrators can also add parameter sets that are globally available to all users. In the future, we're hoping to add functionality to fetch new parameter sets using a DOI as well. To see the available parameter sets:
###Code
ewatercycle.parameter_sets.available_parameter_sets()
###Output
_____no_output_____
###Markdown
Since most parameter sets are model specific, you can filter the results as well:
###Code
ewatercycle.parameter_sets.available_parameter_sets(target_model="wflow")
###Output
_____no_output_____
###Markdown
Once you have found a suitable parameter set, you can load it and see some more details:
###Code
parameter_set = ewatercycle.parameter_sets.get_parameter_set("wflow_rhine_sbm_nc")
print(parameter_set)
###Output
Parameter set
-------------
name=wflow_rhine_sbm_nc
directory=/scratch/shared/ewatercycle/user_guide/wflow_rhine_sbm_nc
config=/scratch/shared/ewatercycle/user_guide/wflow_rhine_sbm_nc/wflow_sbm_NC.ini
doi=N/A
target_model=wflow
supported_model_versions={'2020.1.1'}
###Markdown
or you can access individual attributes of the parameter sets
###Code
parameter_set.supported_model_versions
###Output
_____no_output_____
###Markdown
Should you wish to configure your own parameter set (e.g. for PCRGlobWB in this case), this is also possible:
###Code
custom_parameter_set = ewatercycle.parameter_sets.ParameterSet(
name="custom_parameter_set",
directory="~/ewatercycle/docs/examples/parameter-sets/pcrglobwb_rhinemeuse_30min",
config="~/ewatercycle/docs/examples/parameter-sets/pcrglobwb_rhinemeuse_30min/setup_natural_test.ini",
target_model="pcrglobwb",
doi="https://doi.org/10.5281/zenodo.1045339",
supported_model_versions={"setters"},
)
###Output
_____no_output_____
###Markdown
As you can see, an eWaterCycle parameter set is defined fully by a directory and a configuration file. The configuration file typically informs the model about the structure of the parameter set (e.g. "what is the filename of the land use data"). It is possible to change these settings later, when [setting up the model](#Models).

Forcing data
eWaterCycle can load or generate forcing data for a model using the `forcing` module.
###Code
import ewatercycle.forcing
###Output
_____no_output_____
###Markdown
Existing forcing from external source
We first show how existing forcing data can be loaded with eWaterCycle. The wflow example parameter set already includes forcing data that was generated manually by the scientists at Deltares.
###Code
forcing = ewatercycle.forcing.load_foreign(
directory=str(parameter_set.directory),
target_model="wflow",
start_time="1991-01-01T00:00:00Z",
end_time="1991-12-31T00:00:00Z",
shape=None,
forcing_info=dict(
# Additional information about the external forcing data needed for the model configuration
netcdfinput="inmaps.nc",
Precipitation="/P",
EvapoTranspiration="/PET",
Temperature="/TEMP",
),
)
print(forcing)
###Output
Forcing data for Wflow
----------------------
Directory: /scratch/shared/ewatercycle/user_guide/wflow_rhine_sbm_nc
Start time: 1991-01-01T00:00:00Z
End time: 1991-12-31T00:00:00Z
Shapefile: None
Additional information for model config:
- netcdfinput: inmaps.nc
- Precipitation: /P
- Temperature: /TEMP
- EvapoTranspiration: /PET
- Inflow: None
###Markdown
As you can see, the forcing consists of a generic part which is the same for all eWaterCycle models, and a model-specific part (`forcing_info`). If you're familiar with wflow, you might recognize that the model-specific settings map directly to wflow configuration settings.

Generating forcing data
In most cases, you will not have access to tailor-made forcing data, and manually pre-processing existing datasets can be quite a pain. eWaterCycle includes a forcing generator that can do all the required steps to go from the available datasets (ERA5, ERA-Interim, etc.) to whatever format the models require. This is done through [ESMValTool recipes](https://docs.esmvaltool.org/en/latest/recipes/recipe_hydrology.html). For some models (e.g. lisflood) additional computations are done, as some steps require data and/or code that is not available to ESMValTool. Apart from some standard parameters (start time, datasets, etc.), the forcing generator sometimes requires additional model-specific options. For our wflow example case, we need to pass the DEM file to the ESMValTool recipe as well. All model-specific options are listed in the [API documentation](https://ewatercycle.readthedocs.io/en/latest/apidocs/ewatercycle.forcing.html#ewatercycle.forcing.generate).

ESMValTool configuration
As eWaterCycle relies on ESMValTool for processing forcing data, configuration for forcing is mostly deferred to the ESMValTool configuration file. Which ESMValTool configuration file to use can be specified in the system setup.
###Code
forcing = ewatercycle.forcing.generate(
target_model="wflow",
dataset="ERA5",
start_time="1990-01-01T00:00:00Z",
end_time="1990-01-31T00:00:00Z",
shape="~/GitHub/ewatercycle/docs/examples/data/Rhine/Rhine.shp",
model_specific_options={
"dem_file": "/scratch-shared/ewatercycle/user_guide/wflow_rhine_sbm_nc/staticmaps/wflow_dem.map",
},
)
print(forcing)
###Output
Forcing data for Wflow
----------------------
Directory: /scratch/shared/ewatercycle/user_guide/recipe_wflow_20210720_122543/work/wflow_daily/script
Start time: 1990-01-01T00:00:00Z
End time: 1990-01-31T00:00:00Z
Shapefile: /nfs/home2/fakhereh/GitHub/ewatercycle/docs/examples/data/Rhine/Rhine.shp
Additional information for model config:
- netcdfinput: wflow_ERA5_Rhine_1990_1990.nc
- Precipitation: /pr
- Temperature: /tas
- EvapoTranspiration: /pet
- Inflow: None
###Markdown
Generated forcing is automatically saved to the ESMValTool output directory. A `yaml` file is stored there as well, such that you can easily reload the forcing later without having to generate it again. `ewatercycle_forcing.yaml`:

```yaml
!WflowForcing
start_time: '1990-01-01T00:00:00Z'
end_time: '1990-12-31T00:00:00Z'
shape:
netcdfinput: wflow_ERA5_Rhine_1990_1990.nc
Precipitation: /pr
EvapoTranspiration: /pet
Temperature: /tas
Inflow:
```
###Code
reloaded_forcing = ewatercycle.forcing.load(
directory="/scratch/shared/ewatercycle/user_guide/recipe_wflow_20210720_122543/work/wflow_daily/script"
)
###Output
_____no_output_____
###Markdown
Models
###Code
import ewatercycle.models
###Output
_____no_output_____
###Markdown
eWaterCycle currently integrates the following models:
* [wflow](https://ewatercycle.readthedocs.io/en/latest/examples/wflow.html)
* [pcrglobwb](https://ewatercycle.readthedocs.io/en/latest/examples/pcrglobwb.html)
* [marrmot M01](https://ewatercycle.readthedocs.io/en/latest/examples/marrmotm01.html)
* [marrmot M14](https://ewatercycle.readthedocs.io/en/latest/examples/marrmotm14.html)
* [lisflood](https://ewatercycle.readthedocs.io/en/latest/examples/lisflood.html)

and we're expecting to add more models soon. The process for adding new models is documented in [Adding models](https://ewatercycle.readthedocs.io/en/latest/adding_models.html).

Model versions
To help with reproducibility, the version of a model must always be specified when creating a model instance. The available versions can be seen like so:
###Code
import ewatercycle.models
ewatercycle.models.Wflow.available_versions
###Output
_____no_output_____
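Putting the pieces together, creating a versioned model instance from the parameter set and forcing loaded earlier typically looks like the sketch below (the keyword names `version`, `parameter_set` and `forcing` are assumptions here -- check the model class docstring):

```python
# Sketch: construct a wflow model instance with an explicit version.
model_instance = ewatercycle.models.Wflow(
    version="2020.1.1",            # must be one of Wflow.available_versions
    parameter_set=parameter_set,   # loaded with ewatercycle.parameter_sets above
    forcing=forcing,               # loaded or generated with ewatercycle.forcing above
)
```

After construction, the `setup` / `initialize` / `update` / `finalize` cycle shown earlier in this guide applies.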
###Markdown
Input format
============
To demonstrate the Python data types required to represent the input, we use the utility function ``make_data``, which returns randomly generated data that we will use for this example. See the function's documentation for more information regarding its parameters.
###Code
from occuspytial.utils import make_data
Q, W, X, y, true_alpha, true_beta, true_tau, true_z = make_data(random_state=0)
###Output
_____no_output_____
###Markdown
The Gibbs samplers expect (at the minimum) as input the design matrix for occupancy covariates, the design matrices for detection covariates, the detection/non-detection data and the spatial precision matrix of the ICAR spatial random effects. ``Q`` represents the spatial precision matrix and can either be a scipy sparse matrix object or a numpy array. If a numpy array is used, the sampler converts it to a sparse matrix. It is advised that ``Q`` be in sparse format since it is more memory efficient.
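If your precision matrix starts life as a dense numpy array, converting it up front is a one-liner; a small sketch (CSC is just one reasonable choice of sparse format here):

```python
# Sketch: store the spatial precision matrix in a sparse format before sampling.
from scipy import sparse

Q_sparse = sparse.csc_matrix(Q)  # converts a dense array; re-packs an already-sparse Q
```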
###Code
Q
###Output
_____no_output_____
###Markdown
Detection/non-detection observed data (``y``) should be represented using a dictionary whose key is the site number (only sites that were surveyed) and whose value is a numpy array whose elements are 1's (if the species is detected on that visit) and 0's. The length of the array should represent the number of visits for that site.
###Code
y.keys()
y[15]  # 4 visits at site 15, with no detection of the species on any of them
###Output
_____no_output_____
###Markdown
Similarly, the detection covariates (``W``) are represented by a dictionary whose key is the site number (only sites that were surveyed) and value is a 2d numpy array representing the design matrix of the detection covariates of that site.
###Code
W[15]
###Output
_____no_output_____
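To make the expected layout concrete, a toy construction (not produced by the package) for two surveyed sites with 3 and 4 visits might look like this:

```python
# Toy sketch of the y and W inputs for two surveyed sites.
import numpy as np

y_toy = {
    0: np.array([0, 1, 0]),      # site 0: detected on the 2nd of 3 visits
    5: np.array([0, 0, 0, 0]),   # site 5: never detected over 4 visits
}
W_toy = {
    0: np.column_stack([np.ones(3), np.random.rand(3)]),  # intercept + one covariate per visit
    5: np.column_stack([np.ones(4), np.random.rand(4)]),
}
```

The number of rows of each ``W`` entry matches the number of visits recorded in the corresponding ``y`` entry.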
###Markdown
Occupancy covariates (``X``) are represented by a design matrix which is a 2d numpy array.
###Code
X[:5] # occupancy design matrix for the first 5 sites
###Output
_____no_output_____
###Markdown
The rest of the output from ``make_data`` consists of the simulated true values for the spatial occupancy model parameters.
###Code
print('alpha: ', true_alpha, '\n', 'beta: ', true_beta, '\n', 'tau: ', true_tau, '\n', 'z (occupancy state): ', true_z)
###Output
_____no_output_____
###Markdown
Sampling example
================
This section contains basic examples of how to use the available samplers.

Gibbs sampling
--------------
Here we show how to use the Gibbs sampler presented in [Clark & Altwegg (2019)](https://onlinelibrary.wiley.com/doi/full/10.1002/ece3.4850) using OccuSpytial's sampler API. The name of the class implementing this sampler is ``LogitRSRGibbs``.
###Code
import numpy as np
from occuspytial import LogitRSRGibbs
from occuspytial.utils import make_data
# generate fake data with 500 sites in total
Q, W, X, y, true_alpha, true_beta, true_tau, true_z = make_data(500, tau_range=(0.1, 0.5), random_state=0)
###Output
_____no_output_____
###Markdown
Let's print out the true values of the parameters of interest.
###Code
print('alpha: ', true_alpha, '\n', 'beta: ', true_beta, '\n', 'tau: ', true_tau)
rsr = LogitRSRGibbs(Q, W, X, y, random_state=10)
rsr_samples = rsr.sample(2000, burnin=1000, chains=2)
# The progress bar is on by default, but it is written to the console,
# so it is not visible in this notebook's output cell (it would be visible when working from a console).
###Output
_____no_output_____
###Markdown
The output of the sample method is an instance of the ``PosteriorParameter`` class, and inference on the obtained samples is done via the instance stored in the variable ``rsr_samples``. We can display the summary table using the `summary` attribute.
###Code
rsr_samples.summary
###Output
_____no_output_____
###Markdown
To access the individual parameters we can use the parameter's name as the key.
###Code
rsr_samples['alpha']
###Output
_____no_output_____
###Markdown
Since we generated 2 chains, the ``alpha`` parameter array is three-dimensional: the first dimension is the chain index, the second dimension is the length of the post-burnin samples, and the third dimension is the size of the parameter (in elements).
###Code
rsr_samples['alpha'].shape
###Output
_____no_output_____
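With that layout, pooling the chains to get a posterior mean per coefficient is a one-liner; a small sketch, assuming the returned object behaves like a plain numpy array:

```python
# Sketch: average over the chain and draw dimensions, leaving one mean per alpha element.
alpha_means = rsr_samples['alpha'].mean(axis=(0, 1))
```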
###Markdown
``LogitRSRGibbs`` expects the prior distributions for ``alpha`` and ``beta`` to be normally distributed, and the prior for ``tau`` to be Gamma distributed. We can pass custom values for the hyperparameters of these priors through the `hparams` dictionary parameter when instantiating the class. More details on legal keys and values can be found in the docstring of the class.
###Code
a_size = true_alpha.shape[0]
b_size = true_beta.shape[0]
hypers = {
'a_mu': np.ones(a_size), # alpha mean is an array of 1's
'a_prec': np.eye(a_size) / 1000, # alpha precision matrix (inverse of covariance) is diagonal matrix with entries (1/1000)
'b_mu': np.ones(b_size),
'b_prec': np.eye(a_size) / 1000,
'tau_rate': 2, # tau's rate parameter for the prior gamma distribution
'tau_shape': 2, # tau's shape parameter
}
rsr_hp = LogitRSRGibbs(Q, W, X, y, hparams=hypers, random_state=10)
rsrhp_samples = rsr_hp.sample(2000, burnin=1000, chains=2)
rsrhp_samples.summary
###Output
_____no_output_____
###Markdown
As we can see, the MCMC chain did not converge in just 2000 samples with the provided hyper-parameter values. We can either improve the estimates by using a much longer chain or use better starting values in the ``sample`` method via the `start` parameter.

Visualizing posterior samples
=============================
The ``rsr_samples`` instance allows for convenient visualization of posterior samples of the parameters of interest. This functionality uses [arviz](https://arviz-devs.github.io/arviz/index.html) as a backend. We can plot the traces and densities as follows:
###Code
rsr_samples.plot_trace()
###Output
_____no_output_____
###Markdown
Parameters can be passed into the ``plot_trace`` method to configure the output plot. See arviz's [API reference](https://arviz-devs.github.io/arviz/generated/arviz.plot_trace.html#arviz.plot_trace) for valid input. Similarly, the Effective Sample Size (ESS) can be visualized as follows:
###Code
rsr_samples.plot_ess()
###Output
_____no_output_____ |
docs/_downloads/0fe749eb32c254bb601b5db8fb895022/transfer_learning_tutorial.ipynb | ###Markdown
Transfer Learning Tutorial
====================================
**Author**: `Sasank Chilamkurthy `_  **Translation**: `박정환 `_

In this tutorial, you will learn how to train a neural network using transfer learning. You can read more about transfer learning in the `CS231n notes `__.

Quoting those notes: in practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images across 1000 categories) and then use that ConvNet either as an initialization or as a fixed feature extractor for the task of interest.

The two major transfer learning scenarios are:
- **Finetuning the ConvNet**: Instead of random initialization, we initialize the network with a network pretrained on a dataset such as ImageNet 1000. The rest of the training looks as usual.
- **ConvNet as fixed feature extractor**: Here we freeze the weights of the entire network except the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.
###Code
# License: BSD
# Author: Sasank Chilamkurthy
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
plt.ion()   # interactive mode
###Output
_____no_output_____
###Markdown
Loading the data
----------------
We will use the torchvision and torch.utils.data packages to load the data.

The problem we are solving here is training a model to classify **ants** and **bees**. We have about 120 training images each for ants and bees, and 75 validation images. Usually this is a very small dataset to generalize on if training from scratch, but since we are using transfer learning we should be able to generalize reasonably well.

This dataset is a very small subset of ImageNet.

.. Note :: Download the data from `here `_ and extract it to the current directory.
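If you prefer to do the download and extraction from Python, a minimal sketch is shown below; ``DATA_URL`` is a placeholder for the dataset link from the note above:

```python
# Sketch: download and unpack the dataset archive into ./data (URL is a placeholder).
import urllib.request
import zipfile

DATA_URL = "<dataset zip URL from the note above>"  # placeholder, replace before running
urllib.request.urlretrieve(DATA_URL, "hymenoptera_data.zip")
with zipfile.ZipFile("hymenoptera_data.zip") as zf:
    zf.extractall("data/")
```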
###Code
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = 'data/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Visualizing a few images
^^^^^^^^^^^^^^^^^^^^^^^^
Let's visualize a few training images in order to understand the data augmentation.
###Code
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
    plt.pause(0.001)  # pause a bit so that the plot is updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from the batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
###Output
_____no_output_____
###Markdown
Training the model
------------------
Now let's write a general function to train a model. Here we will illustrate:
- scheduling the learning rate
- saving the best model

Below, the ``scheduler`` parameter is an LR scheduler object from ``torch.optim.lr_scheduler``.
###Code
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
        # Each epoch has a training and a validation phase
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluation mode
running_loss = 0.0
running_corrects = 0
            # Iterate over data
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
                # zero the parameter gradients
optimizer.zero_grad()
                # forward pass
                # track gradient history only in the training phase
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
                    # backward pass + optimize only in the training phase
if phase == 'train':
loss.backward()
optimizer.step()
                # statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
            # deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
    # load the best model weights
model.load_state_dict(best_model_wts)
return model
###Output
_____no_output_____
###Markdown
Visualizing the model predictions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A generic function to display predictions for a few images.
###Code
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
###Output
_____no_output_____
###Markdown
Finetuning the ConvNet
----------------------
Load a pretrained model and reset the final fully connected layer.
###Code
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay the learning rate by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
###Output
_____no_output_____
###Markdown
Train and evaluate
^^^^^^^^^^^^^^^^^^
This takes around 15-25 minutes on CPU, and less than a minute on a GPU.
###Code
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=25)
visualize_model(model_ft)
###Output
_____no_output_____
###Markdown
ConvNet as a fixed feature extractor
------------------------------------
Here we need to freeze all of the network except the final layer. We set ``requires_grad == False`` to freeze the parameters so that gradients are not computed for them in ``backward()``. You can read more about this in the documentation `here `__.
###Code
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
param.requires_grad = False
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
# Unlike before, observe that only the parameters of the final layer are being optimized
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)
# Decay the learning rate by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
###Output
_____no_output_____
###Markdown
Train and evaluate
^^^^^^^^^^^^^^^^^^
On CPU this will take about half the time of the previous scenario, because gradients do not need to be computed for most of the network. The forward pass, however, does still need to be computed.
###Code
model_conv = train_model(model_conv, criterion, optimizer_conv,
exp_lr_scheduler, num_epochs=25)
visualize_model(model_conv)
plt.ioff()
plt.show()
###Output
_____no_output_____ |
02-novice/060EarthquakesSolution.ipynb | ###Markdown
Solution: the biggest earthquake in the UK this century

Import modules
We first import the modules we will need to get and plot the data. We will use the `requests` module to query the USGS earthquake catalog, the `math` module to do some coordinate conversion and the `IPython` module to display a map image.
###Code
import requests
import math
import IPython
###Output
_____no_output_____
###Markdown
Download the data
We can reuse the code provided in the exercise notebook to query the USGS earthquake catalog API using the `requests` module.
###Code
earthquake_catalog_api_url = "http://earthquake.usgs.gov/fdsnws/event/1/query"
query_parameters = {
"format": "geojson",
"starttime": "2000-01-01",
"maxlatitude": "60.830",
"minlatitude": "49.877",
"maxlongitude": "1.767",
"minlongitude": "-8.182",
"minmagnitude": "1",
"orderby": "time-asc"
}
quakes_response = requests.get(earthquake_catalog_api_url, params=query_parameters)
###Output
_____no_output_____
###Markdown
We can check that the returned object is a `Response` as expected
###Code
type(quakes_response)
###Output
_____no_output_____
###Markdown
It is also useful to check whether the status of the returned response corresponds to there not being any client or server errors, with this being indicated by a status code of 200
###Code
assert quakes_response.status_code == 200
###Output
_____no_output_____
###Markdown
Parse the data as JSON
We saw in the exercise notebooks that the `Response` objects returned by `requests.get` have various attributes that allow accessing the response content in different formats, including `Response.content` to get the raw `bytes` content and `Response.text` to get the response as a (Unicode) `str` object. We can print out all of the attributes of an object in Python using the inbuilt `dir` function; typically these will include attributes intended for internal use only, which are conventionally indicated by prefixing with an underscore character `_`. We can display all the attributes without an initial underscore character as follows.
###Code
[attribute for attribute in dir(quakes_response) if attribute[0] != '_']
###Output
_____no_output_____
###Markdown
As well as the `content`, `ok`, `status_code` and `text` attributes we already encountered, we can see there is also a `json` attribute, which seems like it could be relevant to our task of decoding the response as JSON. We can find out more about this attribute by using a useful feature of Jupyter / IPython - by adding a question mark `?` to the end of a Python object the documentation string (`docstring`) for that object will be displayed.
###Code
quakes_response.json?
###Output
_____no_output_____
###Markdown
From this we can see that `quakes_response.json` is a method (a function bound to an object) which returns a Python object corresponding to a JSON encoded response, which is exactly what we need. There are no required arguments, so we can call the method by just adding a pair of empty parentheses.
###Code
quakes_json = quakes_response.json()
###Output
_____no_output_____
###Markdown
If we had not been aware of the `json` method, an alternative would be to use the `json` module directly as we saw previously. For example, the following would give an equivalent result to the above.

```Python
import json
quakes_json = json.loads(quakes_response.text)
```

Investigate the data to discover how it is structured.
Now that we have queried and decoded the data into a Python object, we need to do some exploration to find out how it is structured. In some cases there may be documentation we can use to help guide our exploration of the data - for example [this page on the USGS earthquake catalog website](https://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson.php) gives a summary of the GeoJSON format. However, in other cases we may not be so lucky and have to work with data in an undocumented format, so it is also useful to consider how we might explore that data structure ourselves. A potentially useful first step is to check what the type of the `quakes_json` object is.
###Code
type(quakes_json)
###Output
_____no_output_____
###Markdown
We see that `quakes_json` is a Python `dict` (dictionary) object. We might want to find out what keys are defined in this dictionary - we can do this by calling the `keys` method.
###Code
quakes_json.keys()
###Output
_____no_output_____
###Markdown
We see that the dictionary has three keys, all of which are strings. The `features` key in particular here looks potentially promising for our goal of finding the maximum magnitude event in the data (on the rationale that the magnitude is a *feature* of the event). We can check what type the value associated with the `features` key is.
###Code
type(quakes_json['features'])
###Output
_____no_output_____
###Markdown
We find out that the `features` key contains a list. We can check the length of this list.
###Code
len(quakes_json['features'])
###Output
_____no_output_____
###Markdown
We could also use a set (which we encountered previously in the lesson on dictionaries) to find out what the set of types of all of the elements in the `quakes_json['features']` list is. Similarly to the list comprehensions we encountered in a previous lesson we can use a similar syntax here to succinctly construct the set we required.
###Code
{type(feature) for feature in quakes_json['features']}
###Output
_____no_output_____
###Markdown
From this we see that all of the elements in the `quakes_json['features']` share the same Python `dict` type. We can use a similar approach to find out what keys all these dictionary objects have.
###Code
{tuple(feature.keys()) for feature in quakes_json['features']}
###Output
_____no_output_____
###Markdown
This tells us that as well as all the elements being dictionaries, all of the dictionaries have the same keys. This suggests the list corresponds to a representation of a sequence of objects of the same type, with each dictionary containing the 'features' for a particular object, the objects in question in this case being particular earthquake events. To check this idea, we can look at the value of a particular element in the `features` list - as we know all elements are dictionaries with the same keys, it's reasonable to assume the first element `quakes_json['features'][0]` will be representative of all the other elements in the list. We can start by summarising the keys and types of the values in this dictionary.
###Code
for key, value in quakes_json['features'][0].items():
print(key, type(value).__name__)
###Output
_____no_output_____
###Markdown
We can also view the dictionary directly
###Code
quakes_json['features'][0]
###Output
_____no_output_____
###Markdown
From this we can see the `properties` and `geometry` keys both themselves map to `dict` objects. Within these inner dictionaries are several keys which look relevant to our task of identifying the highest magnitude earthquake event and displaying its location on a map. Specifically the `mag` key in the `properties` dictionary seems likely to represent the magnitude of the event
###Code
quakes_json['features'][0]['properties']['mag']
###Output
_____no_output_____
###Markdown
while the `coordinates` key in the `geometry` dictionary seems to represent the location of the event.
###Code
quakes_json['features'][0]['geometry']['coordinates']
###Output
_____no_output_____
###Markdown
If we go to the URL listed as the value for the `url` key in the `properties` dictionary,
###Code
quakes_json['features'][0]['properties']['url']
###Output
_____no_output_____
###Markdown
we can confirm that this is indeed a correct interpretation of the data, as the listed magnitude corresponds to the value for the `mag` key, while the longitude (East-West axis) and latitude (North-South axis) coordinates (in degrees) of the event location correspond to the first two elements respectively in the list associated with the `coordinates` key (with the third coordinate corresponding to the depth of the event).

Find the largest quake(s)
Now that we have a handle on the structure of the data, we are ready to search through the data to identify the largest magnitude earthquake event(s). We are interested in finding the element (or elements) in a sequence which maximises some property - this operation is termed the [$\arg\max$ in mathematics and computer science](https://en.wikipedia.org/wiki/Arg_max). While there is a built-in `max` function in Python, there is no corresponding `argmax` function, though several external libraries, including the NumPy library which we encounter in a subsequent lesson, do include an `argmax` function. We will therefore loop over all of the event details in the `features` list and construct a list of the event or events for which the magnitude is currently the largest, creating a new list if the magnitude of the current event is larger than the previous largest or adding the event to the previous list if it has an equal magnitude. After iterating through all the events this list should contain the details of the event(s) with the largest magnitude. An example implementation of this approach is as follows.
###Code
largest_magnitude_events = [quakes_json['features'][0]]
for quake in quakes_json['features']:
if quake['properties']['mag'] > largest_magnitude_events[0]['properties']['mag']:
largest_magnitude_events = [quake]
elif quake['properties']['mag'] == largest_magnitude_events[0]['properties']['mag']:
largest_magnitude_events += [quake]
###Output
_____no_output_____
###Markdown
We can now check if there was a single event with the maximum magnitude or multiple
###Code
len(largest_magnitude_events)
###Output
_____no_output_____
###Markdown
It turns out there are two events with the same maximal magnitude. As a sanity check we can print the magnitude of both events to check that they match
###Code
print([quake['properties']['mag'] for quake in largest_magnitude_events])
###Output
_____no_output_____
###Markdown
Get a map at the point of the quakes
There are various different web services which can be used to get map imagery; here we will use [OpenStreetMap](https://www.openstreetmap.org/). Specifically, we will get a pre-rendered map tile containing a location specified by latitude and longitude coordinates as a *portable network graphic* (PNG) image. [This page](https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Python) on the OpenStreetMap wiki gives a Python implementation of a function `deg2num` to convert from a latitude-longitude pair in degrees, plus a [zoom level](https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Zoom_levels), to a pair of indices specifying a specific map tile.
###Code
def deg2num(lat_deg, lon_deg, zoom):
lat_rad = math.radians(lat_deg)
n = 2.0 ** zoom
xtile = int((lon_deg + 180.0) / 360.0 * n)
ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
return (xtile, ytile)
###Output
_____no_output_____
###Markdown
There are various [tile servers](https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Tile_servers) for the OpenStreetMap data with differing rendering styles and URL templates. Here we use the 'standard' style tiles, for which an appropriate URL template is `http://a.tile.openstreetmap.org/{zoom}/{x}/{y}.png`, where `{zoom}` indicates the zoom level and `{x}` and `{y}` the two components of the tile number returned by `deg2num`. We can use Python [formatted string literals](https://docs.python.org/3/tutorial/inputoutput.html#tut-f-strings) to populate the template with the appropriate values and use the `requests.get` function to request the image corresponding to the URL. Finally, we can create an [`IPython.display.Image` object](https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#IPython.display.Image) which will display the raw data from the image request as an image in the JupyterLab front end. We wrap this all into a function `get_map_tile_at` to allow us to use it to display images for each of the largest magnitude earthquake events identified earlier.
###Code
def get_map_tile_at(latitude, longitude, zoom=10, satellite=False):
x, y = deg2num(latitude, longitude, zoom)
tile_url = f"http://a.tile.openstreetmap.org/{zoom}/{x}/{y}.png"
response = requests.get(tile_url)
assert response.status_code == 200
return IPython.core.display.Image(response.content)
###Output
_____no_output_____
###Markdown
As a test we can check the map displayed for the coordinates of the [Prime meridian at the Royal Observatory Greenwich](https://geohack.toolforge.org/geohack.php?pagename=Prime_meridian_(Greenwich)¶ms=51_28_40.1_N_0_0_5.3_W_type:landmark_globe:earth_region:GB_scale:1000)
###Code
get_map_tile_at(latitude=51.477806, longitude=-0.001472, zoom=14)
###Output
_____no_output_____
###Markdown
Display the maps
We can now finally show the maps for the locations of the maximum magnitude earthquake events. As an additional check, we also print the description under the `title` key in the `properties` dictionary for each event to check it tallies with the location shown in the displayed map.
###Code
for quake in largest_magnitude_events:
longitude = quake['geometry']['coordinates'][0]
latitude = quake['geometry']['coordinates'][1]
print(quake['properties']['title'])
display(get_map_tile_at(latitude, longitude, 12))
###Output
_____no_output_____ |
collman15v2/201710/tight_annotation_analysis.ipynb | ###Markdown
Tight Annotation Analysis on the collman15v2 data
From the collman15v2 annotation data we find all the unique identifiers, and for each id we grab the pixels that have been annotated, sum them, and divide by the number of pixels. The results are saved to a CSV file.
###Code
import importlib
import numpy as np
import toolbox
import annoTightAll
import h5py
importlib.reload(toolbox)
importlib.reload(annoTightAll)
COLL_NAME = 'collman'
EXP_NAME = 'collman15v2'
ANNO_NAME = 'annotation'
COORD_FRAME = 'collman_collman15v2'
CONFIG_FILE = 'config.ini'
OUTPUT = 'test20171211.csv'
CHAN_NAMES = ['DAPI1st', 'DAPI2nd', 'DAPI3rd', 'GABA488', 'GAD647',
'gephyrin594', 'GS594', 'MBP488', 'NR1594', 'PSD95_488',
'Synapsin647', 'VGluT1_647']
cubes, loc, F0, nonzeros, ids = annoTightAll.main(COLL_NAME, EXP_NAME, COORD_FRAME,
CHAN_NAMES=CHAN_NAMES, ANNO_NAME = ANNO_NAME, num_threads = 6, CONFIG_FILE= 'config.ini')
F0w = np.divide(F0, nonzeros)
toolbox.mainOUT(F0w, CHAN_NAMES, OUTPUT)
#toolbox.toh5(EXP_NAME, OUTPUT + '.h5', CHAN_NAMES, cubes, loc)
###Output
_____no_output_____
###Markdown
Write out locations
###Code
toolbox.mainOUT(np.transpose(loc), ['z','y','x'], 'locations' + OUTPUT)
###Output
_____no_output_____ |
docs_notebooks/Quick Demo/Quick Demo.ipynb | ###Markdown
Quick Demo
This notebook shows a quick demonstration of the `eprun` package.
###Code
from eprun import eprun
epresult=eprun(ep_dir=r'C:\EnergyPlusV9-4-0',
input_filepath='1ZoneUncontrolled.idf',
epw_filepath='USA_CO_Golden-NREL.724666_TMY3.epw',
sim_dir='simulation_files')
print(type(epresult))
print(epresult.get_end().line)
print(epresult.get_eso().get_environment('RUN PERIOD 1').get_interval_summary())
%matplotlib inline
import matplotlib.pyplot as plt
ax=epresult.get_eso().get_environment('RUN PERIOD 1').get_interval_variable(75).plot()
plt.savefig('quick_demo.png',bbox_inches='tight')
###Output
_____no_output_____ |
examples/getting_started/4_Superdense_coding.ipynb | ###Markdown
Superdense Coding
In this tutorial, we construct an implementation of the superdense coding protocol via Amazon Braket's SDK. Superdense coding is a method of transmitting two classical bits by sending only one qubit. Starting with a pair of entangled qubits, the sender (aka Alice) applies a certain quantum gate to their qubit and sends the result to the receiver (aka Bob), who is then able to decode the full two-bit message.

If Alice wants to send a two-bit message to Bob using only classical channels, she would need to send two classical bits. However, with the help of quantum entanglement, Alice can do this by sending just one qubit. By ensuring that Alice and Bob initially share an entangled state of two qubits, they can devise a strategy such that Alice can transmit her two-bit message by sending her single qubit.

To implement superdense coding, Alice and Bob need to share or otherwise prepare a maximally entangled pair of qubits (i.e., a Bell pair). Alice then selects one of the four possible messages to send with two classical bits: 00, 01, 10, or 11. Depending on which two-bit string she wants to send, Alice applies a corresponding quantum gate to encode her desired message. Finally, Alice sends her own qubit to Bob, which Bob then uses to decode the message by undoing the initial entangling operation.

Note that superdense coding is closely related to quantum teleportation. In teleportation, one uses an entangled pair (an e-bit) and two uses of a classical channel to simulate a single use of a quantum channel. In superdense coding, one uses an e-bit and a single use of a quantum channel to simulate two uses of a classical channel.

Detailed Steps
1. Alice and Bob initially share a Bell pair. This can be prepared by starting with two qubits in the |0⟩ state, then applying the Hadamard gate (𝐻) to the first qubit to create an equal superposition, and finally applying a CNOT gate (𝐶𝑋) between the two qubits to produce a Bell pair. Alice holds one of these two qubits, while Bob holds the other.
2. Alice selects one of the four possible messages to send Bob. Each message corresponds to a unique set of quantum gate(s) to apply to her own qubit, illustrated in the table below. For example, if Alice wants to send the message "01", she would apply the Pauli X gate.
3. Alice sends her qubit to Bob through the quantum channel.
4. Bob decodes Alice's two-bit message by first applying a CNOT gate using Alice's qubit as the control and his own qubit as the target, and then a Hadamard gate on Alice's qubit to restore the classical message.

| Message | Alice's encoding | State Bob receives (non-normalized) | After 𝐶𝑋 gate (non-normalized) | After 𝐻 gate |
| :---: | :---: | :---: | :---: | :---: |
| 00 | 𝐼 | \|00⟩ + \|11⟩ | \|00⟩ + \|10⟩ | \|00⟩ |
| 01 | 𝑋 | \|10⟩ + \|01⟩ | \|11⟩ + \|01⟩ | \|01⟩ |
| 10 | 𝑍 | \|00⟩ - \|11⟩ | \|00⟩ - \|10⟩ | \|10⟩ |
| 11 | 𝑍𝑋 | \|01⟩ - \|10⟩ | \|01⟩ - \|11⟩ | \|11⟩ |

Circuit Diagram
Circuit used to send the message "00". To send other messages, swap out the identity (𝐼) gate.
![circuit.png](attachment:circuit.png)

Code
###Code
# Print version of SDK
!pip show amazon-braket-sdk | grep Version
# Import Braket libraries
from braket.circuits import Circuit, Gate, Moments
from braket.circuits.instruction import Instruction
from braket.aws import AwsDevice
import matplotlib.pyplot as plt
import time
###Output
Version: 1.0.0.post1
###Markdown
Typically, we recommend running circuits with fewer than 25 qubits on the local simulator to avoid latency bottlenecks. The managed, high-performance simulator SV1 is better suited for larger circuits up to 34 qubits. Nevertheless, for demonstration purposes, we are going to continue this example with SV1 but it is easy to switch over to the local simulator by replacing the last line in the cell below with ```device = LocalSimulator()``` and importing the ```LocalSimulator```.
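For reference, the switch mentioned above is just the following two lines (using the Braket SDK's local simulator, which runs on the notebook's own machine and creates no AWS task):

```python
# Run the same circuits locally instead of on the managed SV1 simulator.
from braket.devices import LocalSimulator

device = LocalSimulator()
```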
###Code
# Select device arn for the managed simulator
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
# Function to run quantum task, check the status thereof and collect results
def get_result(device, circ):
# get number of qubits
num_qubits = circ.qubit_count
# specify desired results_types
circ.probability()
# submit task: define task (asynchronous)
if device.name == 'StateVectorSimulator':
task = device.run(circ, shots=1000)
else:
task = device.run(circ, shots=1000)
# Get ID of submitted task
task_id = task.id
# print('Task ID :', task_id)
# Wait for job to complete
status_list = []
status = task.state()
status_list += [status]
print('Status:', status)
# Only notify the user when there's a status change
while status != 'COMPLETED':
status = task.state()
if status != status_list[-1]:
print('Status:', status)
status_list += [status]
# get result
result = task.result()
# get metadata
metadata = result.task_metadata
# get output probabilities
probs_values = result.values[0]
# get measurement results
measurement_counts = result.measurement_counts
# print measurement results
print('measurement_counts:', measurement_counts)
# bitstrings
format_bitstring = '{0:0' + str(num_qubits) + 'b}'
bitstring_keys = [format_bitstring.format(ii) for ii in range(2**num_qubits)]
    # plot probabilities
plt.bar(bitstring_keys, probs_values)
plt.xlabel('bitstrings')
plt.ylabel('probability')
plt.xticks(rotation=90)
plt.show()
return measurement_counts
###Output
_____no_output_____
###Markdown
Alice and Bob initially share a Bell pair. Let's create this now:
###Code
circ = Circuit()
circ.h([0])
circ.cnot(0,1)
###Output
_____no_output_____
###Markdown
Define Alice's encoding scheme according to the table above. Alice selects one of these messages to send.
###Code
# Four possible messages and their corresponding gates
message = {"00": Circuit().i(0),
"01": Circuit().x(0),
"10": Circuit().z(0),
"11": Circuit().x(0).z(0)
}
# Select message to send. Let's start with '01' for now
m = "01"
###Output
_____no_output_____
###Markdown
Alice encodes her message by applying the gates defined above
###Code
# Encode the message
circ.add_circuit(message[m])
###Output
_____no_output_____
###Markdown
Alice then sends her qubit to Bob so that Bob has both qubits in his lab. Bob decodes Alice's message by disentangling the two qubits:
###Code
circ.cnot(0,1)
circ.h([0])
###Output
_____no_output_____
###Markdown
The full circuit now looks like
###Code
print(circ)
###Output
T : |0|1|2|3|4|
q0 : -H-C-X-C-H-
| |
q1 : ---X---X---
T : |0|1|2|3|4|
###Markdown
By measuring the two qubits in the computational basis, Bob can read off Alice's two qubit message
###Code
counts = get_result(device, circ)
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'01': 1000})
###Markdown
We can check that this scheme works for the other possible messages too:
###Code
for m in message:
# Reproduce the full circuit above by concatenating all of the gates:
newcirc = Circuit().h([0]).cnot(0,1).add_circuit(message[m]).cnot(0,1).h([0])
# Run the circuit:
counts = get_result(device, newcirc)
print("Message: " + m + ". Results:")
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'00': 1000})
###Markdown
Superdense Coding
In this tutorial, we construct an implementation of the superdense coding protocol via Amazon Braket's SDK. Superdense coding is a method of transmitting two classical bits by sending only one qubit. Starting with a pair of entangled qubits, the sender (aka Alice) applies a certain quantum gate to their qubit and sends the result to the receiver (aka Bob), who is then able to decode the full two-bit message.

If Alice wants to send a two-bit message to Bob using only classical channels, she would need to send two classical bits. However, with the help of quantum entanglement, Alice can do this by sending just one qubit. By ensuring that Alice and Bob initially share an entangled state of two qubits, they can devise a strategy such that Alice can transmit her two-bit message by sending her single qubit.

To implement superdense coding, Alice and Bob need to share or otherwise prepare a maximally entangled pair of qubits (i.e., a Bell pair). Alice then selects one of the four possible messages to send with two classical bits: 00, 01, 10, or 11. Depending on which two-bit string she wants to send, Alice applies a corresponding quantum gate to encode her desired message. Finally, Alice sends her own qubit to Bob, which Bob then uses to decode the message by undoing the initial entangling operation.

Note that superdense coding is closely related to quantum teleportation. In teleportation, one uses an entangled pair (an e-bit) and two uses of a classical channel to simulate a single use of a quantum channel. In superdense coding, one uses an e-bit and a single use of a quantum channel to simulate two uses of a classical channel.

Detailed Steps
1. Alice and Bob initially share a Bell pair. This can be prepared by starting with two qubits in the |0⟩ state, then applying the Hadamard gate (𝐻) to the first qubit to create an equal superposition, and finally applying a CNOT gate (𝐶𝑋) between the two qubits to produce a Bell pair. Alice holds one of these two qubits, while Bob holds the other.
2. Alice selects one of the four possible messages to send Bob. Each message corresponds to a unique set of quantum gate(s) to apply to her own qubit, illustrated in the table below. For example, if Alice wants to send the message "01", she would apply the Pauli X gate.
3. Alice sends her qubit to Bob through the quantum channel.
4. Bob decodes Alice's two-bit message by first applying a CNOT gate using Alice's qubit as the control and his own qubit as the target, and then a Hadamard gate on Alice's qubit to restore the classical message.

| Message | Alice's encoding | State Bob receives (non-normalized) | After 𝐶𝑋 gate (non-normalized) | After 𝐻 gate |
| :---: | :---: | :---: | :---: | :---: |
| 00 | 𝐼 | \|00⟩ + \|11⟩ | \|00⟩ + \|10⟩ | \|00⟩ |
| 01 | 𝑋 | \|10⟩ + \|01⟩ | \|11⟩ + \|01⟩ | \|01⟩ |
| 10 | 𝑍 | \|00⟩ - \|11⟩ | \|00⟩ - \|10⟩ | \|10⟩ |
| 11 | 𝑍𝑋 | \|01⟩ - \|10⟩ | \|01⟩ - \|11⟩ | \|11⟩ |

Circuit Diagram
Circuit used to send the message "00". To send other messages, swap out the identity (𝐼) gate.
![circuit.png](attachment:circuit.png)

Code
###Code
# Print version of SDK
!pip show amazon-braket-sdk | grep Version
# Import Braket libraries
from braket.circuits import Circuit, Gate, Moments
from braket.circuits.instruction import Instruction
from braket.aws import AwsDevice
import matplotlib.pyplot as plt
import time
###Output
Version: 1.0.0.post1
###Markdown
Typically, we recommend running circuits with fewer than 25 qubits on the local simulator to avoid latency bottlenecks. The managed, high-performance simulator SV1 is better suited for larger circuits up to 34 qubits. Nevertheless, for demonstration purposes, we are going to continue this example with SV1 but it is easy to switch over to the local simulator by replacing the last line in the cell below with ```device = LocalSimulator()``` and importing the ```LocalSimulator```.__NOTE__: Please enter your desired device and S3 location (bucket and key) below. If you are working with the local simulator ```LocalSimulator()``` you do not need to specify any S3 location. However, if you are using the managed cloud-based device or any QPU devices you need to specify the S3 location where your results will be stored. In this case, you need to replace the API call ```device.run(circuit, ...)``` below with ```device.run(circuit, s3_folder, ...)```.
###Code
# Please enter the S3 bucket you created during onboarding in the code below
my_bucket = "amazon-braket-Your-Bucket-Name" # the name of the bucket
my_prefix = "Your-Folder-Name" # the name of the folder in the bucket
s3_folder = (my_bucket, my_prefix)
# Select device arn for the managed simulator
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
# Function to run quantum task, check the status thereof and collect results
def get_result(device, circ, s3_folder):
# get number of qubits
num_qubits = circ.qubit_count
# specify desired results_types
circ.probability()
# submit task: define task (asynchronous)
if device.name == 'StateVectorSimulator':
task = device.run(circ, shots=1000)
else:
task = device.run(circ, s3_folder, shots=1000)
# Get ID of submitted task
task_id = task.id
# print('Task ID :', task_id)
# Wait for job to complete
status_list = []
status = task.state()
status_list += [status]
print('Status:', status)
# Only notify the user when there's a status change
while status != 'COMPLETED':
status = task.state()
if status != status_list[-1]:
print('Status:', status)
status_list += [status]
# get result
result = task.result()
# get metadata
metadata = result.task_metadata
# get output probabilities
probs_values = result.values[0]
    # get measurement results
    measurement_counts = result.measurement_counts
    # print measurement results
print('measurement_counts:', measurement_counts)
# bitstrings
format_bitstring = '{0:0' + str(num_qubits) + 'b}'
bitstring_keys = [format_bitstring.format(ii) for ii in range(2**num_qubits)]
    # plot probabilities
plt.bar(bitstring_keys, probs_values);
plt.xlabel('bitstrings');
plt.ylabel('probability');
plt.xticks(rotation=90);
plt.show()
return measurement_counts
###Output
_____no_output_____
###Markdown
Alice and Bob initially share a Bell pair. Let's create this now:
###Code
circ = Circuit();
circ.h([0]);
circ.cnot(0,1);
###Output
_____no_output_____
###Markdown
Define Alice's encoding scheme according to the table above. Alice selects one of these messages to send.
###Code
# Four possible messages and their corresponding gates
message = {"00": Circuit().i(0),
"01": Circuit().x(0),
"10": Circuit().z(0),
"11": Circuit().x(0).z(0)
}
# Select message to send. Let's start with '01' for now
m = "01"
###Output
_____no_output_____
###Markdown
Alice encodes her message by applying the gates defined above
###Code
# Encode the message
circ.add_circuit(message[m]);
###Output
_____no_output_____
###Markdown
Alice then sends her qubit to Bob so that Bob has both qubits in his lab. Bob decodes Alice's message by disentangling the two qubits:
###Code
circ.cnot(0,1);
circ.h([0]);
###Output
_____no_output_____
###Markdown
The full circuit now looks like
###Code
print(circ)
###Output
T : |0|1|2|3|4|
q0 : -H-C-X-C-H-
| |
q1 : ---X---X---
T : |0|1|2|3|4|
###Markdown
By measuring the two qubits in the computational basis, Bob can read off Alice's two qubit message
###Code
counts = get_result(device, circ, s3_folder)
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'01': 1000})
###Markdown
We can check that this scheme works for the other possible messages too:
###Code
for m in message:
# Reproduce the full circuit above by concatenating all of the gates:
newcirc = Circuit().h([0]).cnot(0,1).add_circuit(message[m]).cnot(0,1).h([0]);
# Run the circuit:
counts = get_result(device, newcirc, s3_folder)
print("Message: " + m + ". Results:")
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'00': 1000})
###Markdown
Superdense Coding
In this tutorial, we construct an implementation of the superdense coding protocol via Amazon Braket's SDK. Superdense coding is a method of transmitting two classical bits by sending only one qubit. Starting with a pair of entangled qubits, the sender (aka Alice) applies a certain quantum gate to their qubit and sends the result to the receiver (aka Bob), who is then able to decode the full two-bit message.

If Alice wants to send a two-bit message to Bob using only classical channels, she would need to send two classical bits. However, with the help of quantum entanglement, Alice can do this by sending just one qubit. By ensuring that Alice and Bob initially share an entangled state of two qubits, they can devise a strategy such that Alice can transmit her two-bit message by sending her single qubit.

To implement superdense coding, Alice and Bob need to share or otherwise prepare a maximally entangled pair of qubits (i.e., a Bell pair). Alice then selects one of the four possible messages to send with two classical bits: 00, 01, 10, or 11. Depending on which two-bit string she wants to send, Alice applies a corresponding quantum gate to encode her desired message. Finally, Alice sends her own qubit to Bob, which Bob then uses to decode the message by undoing the initial entangling operation.

Note that superdense coding is closely related to quantum teleportation. In teleportation, one uses an entangled pair (an e-bit) and two uses of a classical channel to simulate a single use of a quantum channel. In superdense coding, one uses an e-bit and a single use of a quantum channel to simulate two uses of a classical channel.

Detailed Steps
1. Alice and Bob initially share a Bell pair. This can be prepared by starting with two qubits in the |0⟩ state, then applying the Hadamard gate (𝐻) to the first qubit to create an equal superposition, and finally applying a CNOT gate (𝐶𝑋) between the two qubits to produce a Bell pair. Alice holds one of these two qubits, while Bob holds the other.
2. Alice selects one of the four possible messages to send Bob. Each message corresponds to a unique set of quantum gate(s) to apply to her own qubit, illustrated in the table below. For example, if Alice wants to send the message "01", she would apply the Pauli X gate.
3. Alice sends her qubit to Bob through the quantum channel.
4. Bob decodes Alice's two-bit message by first applying a CNOT gate using Alice's qubit as the control and his own qubit as the target, and then a Hadamard gate on Alice's qubit to restore the classical message.

| Message | Alice's encoding | State Bob receives (non-normalized) | After 𝐶𝑋 gate (non-normalized) | After 𝐻 gate |
| :---: | :---: | :---: | :---: | :---: |
| 00 | 𝐼 | \|00⟩ + \|11⟩ | \|00⟩ + \|10⟩ | \|00⟩ |
| 01 | 𝑋 | \|10⟩ + \|01⟩ | \|11⟩ + \|01⟩ | \|01⟩ |
| 10 | 𝑍 | \|00⟩ - \|11⟩ | \|00⟩ - \|10⟩ | \|10⟩ |
| 11 | 𝑍𝑋 | \|01⟩ - \|10⟩ | \|01⟩ - \|11⟩ | \|11⟩ |

Circuit Diagram
Circuit used to send the message "00". To send other messages, swap out the identity (𝐼) gate.
![circuit.png](attachment:circuit.png)

Code
###Code
# Print version of SDK
!pip show amazon-braket-sdk | grep Version
# Import Braket libraries
from braket.circuits import Circuit, Gate, Moments
from braket.circuits.instruction import Instruction
from braket.aws import AwsDevice
import matplotlib.pyplot as plt
import time
###Output
Version: 1.0.0.post1
###Markdown
Typically, we recommend running circuits with fewer than 25 qubits on the local simulator to avoid latency bottlenecks. The managed, high-performance simulator SV1 is better suited for larger circuits up to 34 qubits. Nevertheless, for demonstration purposes, we are going to continue this example with SV1 but it is easy to switch over to the local simulator by replacing the last line in the cell below with ```device = LocalSimulator()``` and importing the ```LocalSimulator```.__NOTE__: Please enter your desired device and S3 location (bucket and key) below. If you are working with the local simulator ```LocalSimulator()``` you do not need to specify any S3 location. However, if you are using the managed cloud-based device or any QPU devices you need to specify the S3 location where your results will be stored. In this case, you need to replace the API call ```device.run(circuit, ...)``` below with ```device.run(circuit, s3_folder, ...)```.
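For reference, switching to the local simulator only requires the change described above; a minimal sketch (assuming the standard ```from braket.devices import LocalSimulator``` import) looks like this:
###Code
# Sketch: use the local simulator instead of the managed SV1 device (no S3 location needed)
from braket.devices import LocalSimulator

local_device = LocalSimulator()
# Tasks would then be submitted without an S3 folder, for example:
# task = local_device.run(circuit, shots=1000)
###Output
_____no_output_____
###Markdown
The ```get_result``` helper defined below already checks the device name and only passes the S3 folder to managed devices, so it works with either choice. For the rest of this example we continue with SV1.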
###Code
# Please enter the S3 bucket you created during onboarding in the code below
my_bucket = "amazon-braket-Your-Bucket-Name" # the name of the bucket
my_prefix = "Your-Folder-Name" # the name of the folder in the bucket
s3_folder = (my_bucket, my_prefix)
# Select device arn for the managed simulator
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
# Function to run quantum task, check the status thereof and collect results
def get_result(device, circ, s3_folder):
# get number of qubits
num_qubits = circ.qubit_count
# specify desired results_types
circ.probability()
# submit task: define task (asynchronous)
if device.name == 'DefaultSimulator':
task = device.run(circ, shots=1000)
else:
task = device.run(circ, s3_folder, shots=1000)
# Get ID of submitted task
task_id = task.id
# print('Task ID :', task_id)
# Wait for job to complete
status_list = []
status = task.state()
status_list += [status]
print('Status:', status)
# Only notify the user when there's a status change
while status != 'COMPLETED':
status = task.state()
if status != status_list[-1]:
print('Status:', status)
status_list += [status]
# get result
result = task.result()
# get metadata
metadata = result.task_metadata
# get output probabilities
probs_values = result.values[0]
    # get measurement results
measurement_counts = result.measurement_counts
    # print measurement results
print('measurement_counts:', measurement_counts)
# bitstrings
format_bitstring = '{0:0' + str(num_qubits) + 'b}'
bitstring_keys = [format_bitstring.format(ii) for ii in range(2**num_qubits)]
    # plot probabilities
plt.bar(bitstring_keys, probs_values);
plt.xlabel('bitstrings');
plt.ylabel('probability');
plt.xticks(rotation=90);
plt.show()
return measurement_counts
###Output
_____no_output_____
###Markdown
Alice and Bob initially share a Bell pair. Let's create this now:
###Code
circ = Circuit();
circ.h([0]);
circ.cnot(0,1);
###Output
_____no_output_____
###Markdown
Define Alice's encoding scheme according to the table above. Alice selects one of these messages to send.
###Code
# Four possible messages and their corresponding gates
message = {"00": Circuit().i(0),
"01": Circuit().x(0),
"10": Circuit().z(0),
"11": Circuit().x(0).z(0)
}
# Select message to send. Let's start with '01' for now
m = "01"
###Output
_____no_output_____
###Markdown
Alice encodes her message by applying the gates defined above
###Code
# Encode the message
circ.add_circuit(message[m]);
###Output
_____no_output_____
###Markdown
Alice then sends her qubit to Bob so that Bob has both qubits in his lab. Bob decodes Alice's message by disentangling the two qubits:
###Code
circ.cnot(0,1);
circ.h([0]);
###Output
_____no_output_____
###Markdown
The full circuit now looks like
###Code
print(circ)
###Output
T : |0|1|2|3|4|
q0 : -H-C-X-C-H-
| |
q1 : ---X---X---
T : |0|1|2|3|4|
###Markdown
By measuring the two qubits in the computational basis, Bob can read off Alice's two-bit message
###Code
counts = get_result(device, circ, s3_folder)
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'01': 1000})
###Markdown
We can check that this scheme works for the other possible messages too:
###Code
for m in message:
# Reproduce the full circuit above by concatenating all of the gates:
newcirc = Circuit().h([0]).cnot(0,1).add_circuit(message[m]).cnot(0,1).h([0]);
# Run the circuit:
counts = get_result(device, newcirc, s3_folder)
print("Message: " + m + ". Results:")
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'00': 1000})
|
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning) - Solution.ipynb | ###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
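###Markdown
For comparison, the same upload could be done with the lower-level boto3 S3 client. The sketch below is only an illustration; the bucket and key layout are assumed to mirror what `upload_data()` uses by default.
###Code
# Sketch: low-level equivalent of the upload_data() calls above using boto3
import boto3

s3 = boto3.client('s3')
bucket = session.default_bucket() # same default bucket used by the high-level call
for filename in ['train.csv', 'validation.csv', 'test.csv']:
    s3.upload_file(os.path.join(data_dir, filename), # local file
                   bucket,                           # destination bucket
                   '{}/{}'.format(prefix, filename)) # destination key
###Output
_____no_output_____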
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-03-03 20:34:24-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 44.3MB/s in 1.8s
2020-03-03 20:34:26 (44.3 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
Wrote features to cache file: bow_features.pkl
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
...........................................................................................................................................................................................................................................................................................................................!
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
2020-03-03 21:07:12 Starting - Preparing the instances for training
2020-03-03 21:07:12 Downloading - Downloading input data
2020-03-03 21:07:12 Training - Training image download completed. Training in progress.
2020-03-03 21:07:12 Uploading - Uploading generated training model
2020-03-03 21:07:12 Completed - Training job completed[34mArguments: train[0m
[34m[2020-03-03:20:51:55:INFO] Running standalone xgboost training.[0m
[34m[2020-03-03:20:51:55:INFO] Setting up HPO optimized metric to be : rmse[0m
[34m[2020-03-03:20:51:55:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8520.37mb[0m
[34m[2020-03-03:20:51:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:51:55] S3DistributionType set as FullyReplicated[0m
[34m[20:51:57] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-03-03:20:51:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:51:57] S3DistributionType set as FullyReplicated[0m
[34m[20:51:58] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[20:52:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 68 extra nodes, 22 pruned nodes, max_depth=7[0m
[34m[0]#011train-rmse:0.479854#011validation-rmse:0.481246[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[20:52:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 62 extra nodes, 36 pruned nodes, max_depth=7[0m
[34m[1]#011train-rmse:0.464804#011validation-rmse:0.467634[0m
[34m[20:52:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 74 extra nodes, 28 pruned nodes, max_depth=7[0m
[34m[2]#011train-rmse:0.452756#011validation-rmse:0.456462[0m
[34m[20:52:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 44 pruned nodes, max_depth=7[0m
[34m[3]#011train-rmse:0.443024#011validation-rmse:0.447611[0m
[34m[20:52:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 64 extra nodes, 28 pruned nodes, max_depth=7[0m
[34m[4]#011train-rmse:0.434675#011validation-rmse:0.440489[0m
[34m[20:52:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 38 pruned nodes, max_depth=7[0m
[34m[5]#011train-rmse:0.427457#011validation-rmse:0.433767[0m
[34m[20:52:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 74 extra nodes, 38 pruned nodes, max_depth=7[0m
[34m[6]#011train-rmse:0.420396#011validation-rmse:0.427953[0m
[34m[20:52:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 32 pruned nodes, max_depth=7[0m
[34m[7]#011train-rmse:0.414475#011validation-rmse:0.422806[0m
[34m[20:52:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 28 pruned nodes, max_depth=7[0m
[34m[8]#011train-rmse:0.409641#011validation-rmse:0.418611[0m
[34m[20:52:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 34 pruned nodes, max_depth=7[0m
[34m[9]#011train-rmse:0.404958#011validation-rmse:0.414465[0m
[34m[20:52:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 22 pruned nodes, max_depth=7[0m
[34m[10]#011train-rmse:0.400249#011validation-rmse:0.410507[0m
[34m[20:52:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 34 pruned nodes, max_depth=7[0m
[34m[11]#011train-rmse:0.39638#011validation-rmse:0.406891[0m
[34m[20:52:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 22 pruned nodes, max_depth=7[0m
[34m[12]#011train-rmse:0.393106#011validation-rmse:0.403904[0m
[34m[20:52:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 36 pruned nodes, max_depth=7[0m
[34m[13]#011train-rmse:0.38922#011validation-rmse:0.400297[0m
[34m[20:52:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 22 pruned nodes, max_depth=7[0m
[34m[14]#011train-rmse:0.385935#011validation-rmse:0.397622[0m
[34m[20:52:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 28 pruned nodes, max_depth=7[0m
[34m[15]#011train-rmse:0.382702#011validation-rmse:0.394827[0m
[20:52:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 30 pruned nodes, max_depth=7
[16]    train-rmse:0.379185    validation-rmse:0.392227
[17]    train-rmse:0.376253    validation-rmse:0.389792
...
[50]    train-rmse:0.321598    validation-rmse:0.349199
...
[100]    train-rmse:0.286557    validation-rmse:0.326537
...
[150]    train-rmse:0.266332    validation-rmse:0.317568
...
[200]    train-rmse:0.252525    validation-rmse:0.312054
...
[250]    train-rmse:0.244431    validation-rmse:0.310038
...
[300]    train-rmse:0.238625    validation-rmse:0.307945
...
[350]    train-rmse:0.235246    validation-rmse:0.306987
...
[365]    train-rmse:0.23398    validation-rmse:0.306632
[21:05:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 36 pruned nodes, max_depth=2
[34m[366]#011train-rmse:0.233907#011validation-rmse:0.306638[0m
[34m[21:05:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[367]#011train-rmse:0.23391#011validation-rmse:0.306642[0m
[34m[21:05:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[368]#011train-rmse:0.233919#011validation-rmse:0.306651[0m
[34m[21:05:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 32 pruned nodes, max_depth=5[0m
[34m[369]#011train-rmse:0.233745#011validation-rmse:0.3066[0m
[34m[21:05:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[370]#011train-rmse:0.23374#011validation-rmse:0.306595[0m
[34m[21:05:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[371]#011train-rmse:0.233758#011validation-rmse:0.306613[0m
[34m[21:05:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 26 pruned nodes, max_depth=2[0m
[34m[372]#011train-rmse:0.23367#011validation-rmse:0.306603[0m
[34m[21:05:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[373]#011train-rmse:0.233661#011validation-rmse:0.306595[0m
[34m[21:05:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 44 pruned nodes, max_depth=1[0m
[34m[374]#011train-rmse:0.233657#011validation-rmse:0.306568[0m
[34m[21:05:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 52 pruned nodes, max_depth=0[0m
[34m[375]#011train-rmse:0.233676#011validation-rmse:0.306586[0m
[34m[21:05:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[376]#011train-rmse:0.233668#011validation-rmse:0.306578[0m
[34m[21:05:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 28 pruned nodes, max_depth=6[0m
[34m[377]#011train-rmse:0.233334#011validation-rmse:0.306648[0m
[34m[21:05:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 38 pruned nodes, max_depth=2[0m
[34m[378]#011train-rmse:0.233276#011validation-rmse:0.306667[0m
[34m[21:05:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 52 pruned nodes, max_depth=3[0m
[34m[379]#011train-rmse:0.23316#011validation-rmse:0.306615[0m
[34m[21:05:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 36 pruned nodes, max_depth=1[0m
[34m[380]#011train-rmse:0.233131#011validation-rmse:0.306609[0m
[34m[21:05:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[381]#011train-rmse:0.233138#011validation-rmse:0.306617[0m
[34m[21:05:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 24 pruned nodes, max_depth=1[0m
[34m[382]#011train-rmse:0.233093#011validation-rmse:0.306623[0m
[34m[21:05:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 44 pruned nodes, max_depth=2[0m
[34m[383]#011train-rmse:0.23305#011validation-rmse:0.306602[0m
[34m[21:05:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 34 pruned nodes, max_depth=1[0m
[34m[384]#011train-rmse:0.233032#011validation-rmse:0.306531[0m
[34m[21:05:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 40 pruned nodes, max_depth=2[0m
[34m[385]#011train-rmse:0.233026#011validation-rmse:0.306556[0m
[34m[21:05:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 26 pruned nodes, max_depth=1[0m
[34m[386]#011train-rmse:0.233016#011validation-rmse:0.30651[0m
[34m[21:05:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[387]#011train-rmse:0.232947#011validation-rmse:0.306554[0m
[34m[21:05:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 36 pruned nodes, max_depth=1[0m
[34m[388]#011train-rmse:0.232923#011validation-rmse:0.306543[0m
[34m[21:06:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[389]#011train-rmse:0.232932#011validation-rmse:0.306551[0m
[34m[21:06:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[390]#011train-rmse:0.232944#011validation-rmse:0.306562[0m
[34m[21:06:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 26 pruned nodes, max_depth=1[0m
[34m[391]#011train-rmse:0.232952#011validation-rmse:0.30654[0m
[34m[21:06:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[392]#011train-rmse:0.232948#011validation-rmse:0.306537[0m
[34m[21:06:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0[0m
[34m[393]#011train-rmse:0.232934#011validation-rmse:0.306524[0m
[34m[21:06:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[394]#011train-rmse:0.232922#011validation-rmse:0.306512[0m
[34m[21:06:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[395]#011train-rmse:0.232805#011validation-rmse:0.30649[0m
[34m[21:06:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 44 pruned nodes, max_depth=0[0m
[34m[396]#011train-rmse:0.232805#011validation-rmse:0.30649[0m
[34m[21:06:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0[0m
[34m[397]#011train-rmse:0.232809#011validation-rmse:0.306494[0m
[34m[21:06:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 50 pruned nodes, max_depth=1[0m
[34m[398]#011train-rmse:0.232786#011validation-rmse:0.306488[0m
[34m[21:06:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 56 pruned nodes, max_depth=3[0m
[34m[399]#011train-rmse:0.232694#011validation-rmse:0.306448[0m
[34m[21:06:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 32 pruned nodes, max_depth=4[0m
[34m[400]#011train-rmse:0.232577#011validation-rmse:0.306503[0m
[34m[21:06:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 46 pruned nodes, max_depth=4[0m
[34m[401]#011train-rmse:0.232458#011validation-rmse:0.306434[0m
[34m[21:06:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=7[0m
[34m[402]#011train-rmse:0.232246#011validation-rmse:0.30637[0m
[34m[21:06:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 48 pruned nodes, max_depth=4[0m
[34m[403]#011train-rmse:0.231907#011validation-rmse:0.306456[0m
[34m[21:06:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 66 pruned nodes, max_depth=1[0m
[34m[404]#011train-rmse:0.231875#011validation-rmse:0.306542[0m
[34m[21:06:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 38 pruned nodes, max_depth=4[0m
[34m[405]#011train-rmse:0.23179#011validation-rmse:0.306511[0m
[34m[21:06:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 40 pruned nodes, max_depth=0[0m
[34m[406]#011train-rmse:0.231777#011validation-rmse:0.306499[0m
[34m[21:06:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[407]#011train-rmse:0.231779#011validation-rmse:0.3065[0m
[34m[21:06:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 22 pruned nodes, max_depth=1[0m
[34m[408]#011train-rmse:0.231764#011validation-rmse:0.306482[0m
[34m[21:06:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 54 pruned nodes, max_depth=0[0m
[34m[409]#011train-rmse:0.231766#011validation-rmse:0.306483[0m
[34m[21:06:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 36 pruned nodes, max_depth=0[0m
[34m[410]#011train-rmse:0.231767#011validation-rmse:0.306484[0m
[34m[21:06:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 38 pruned nodes, max_depth=0[0m
[34m[411]#011train-rmse:0.231764#011validation-rmse:0.306482[0m
[34m[21:06:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[412]#011train-rmse:0.231696#011validation-rmse:0.306451[0m
[34mStopping. Best iteration:[0m
[34m[402]#011train-rmse:0.232246#011validation-rmse:0.30637
[0m
Training seconds: 957
Billable seconds: 957
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
.....................[34mArguments: serve[0m
[34m[2020-03-03 21:11:50 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-03-03 21:11:50 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-03-03 21:11:50 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-03-03 21:11:50 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-03-03 21:11:50 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-03-03 21:11:50 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-03-03 21:11:50 +0000] [41] [INFO] Booting worker with pid: 41[0m
[34m[2020-03-03:21:11:50:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-03-03:21:11:50:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-03-03:21:11:50:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-03-03:21:11:50:INFO] Model loaded successfully for worker : 41[0m
[32m2020-03-03T21:12:09.940:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2020-03-03:21:12:12:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:12:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:12:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:12:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:12:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:12:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:12:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:12:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:13:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:13:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:13:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:13:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:13:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:13:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:13:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:13:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:14:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:14:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:14:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:14:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:15:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:15:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:15:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:15:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:15:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:15:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:15:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:15:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:15:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:15:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:15:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:15:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:18:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:18:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:18:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:18:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:20:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:20:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:22:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:22:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:22:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:22:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:22:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:22:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:22:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:22:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:22:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:22:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:22:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:22:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:23:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:23:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:23:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:23:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:24:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:24:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:25:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:25:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:25:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:25:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:25:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:25:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:24:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:24:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:25:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:25:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:25:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:25:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:25:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:25:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:29:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:29:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:29:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:29:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:30:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:30:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:30:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:30:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:30:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:30:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:30:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:30:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:30:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:32:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:34:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:34:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:34:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:35:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:35:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:35:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:35:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-03-03:21:12:35:INFO] Sniff delimiter as ','[0m
[34m[2020-03-03:21:12:35:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:35:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:35:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:35:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:35:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-03-03:21:12:35:INFO] Sniff delimiter as ','[0m
[35m[2020-03-03:21:12:35:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/372.9 KiB (3.8 MiB/s) with 1 file(s) remaining
Completed 372.9 KiB/372.9 KiB (5.5 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-west-2-388853279755/xgboost-200303-2042-005-a649ad62-2020-03-03-21-08-40-947/test.csv.out to ../data/xgboost/test.csv.out
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
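# Optional sketch (not part of the original notebook): a slightly richer error summary
# using scikit-learn, computed from the same `test_y` and `predictions` as above.
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(test_y, predictions))
print(classification_report(test_y, predictions, target_names=['negative', 'positive']))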
# Aside (IPython): the trailing `??` displays the source code of the PyTorchModel class,
# which is handy when exploring the SageMaker SDK; it is unrelated to the XGBoost workflow above.
from sagemaker.pytorch import PyTorchModel
PyTorchModel??
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer instance created above
return words
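# Quick illustrative check (not in the original notebook): run the preprocessing above on a
# toy review. The exact tokens depend on the installed stopword list and stemmer version,
# but the result should look roughly like ['movi', 'absolut', 'wonder', 'love'].
example_review = "This movie was <br /><br />absolutely wonderful! Loved it."
print(review_to_words(example_review))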
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
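# Sanity check (illustrative, not in the original notebook): the vocabulary maps each token
# to a column index, and the feature matrices should have shape (num_reviews, vocabulary_size).
print(len(vocabulary)) # at most 5000, the vocabulary_size used above
print(train_X.shape, test_X.shape) # e.g. (25000, 5000) each for the IMDb dataset
print(sorted(vocabulary, key=vocabulary.get)[:10]) # tokens assigned to the first few columns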
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
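# The upload_data() calls above return the S3 URIs of the uploaded objects; printing them is
# an easy way to confirm where the files ended up without opening the S3 console.
print(train_location)
print(val_location)
print(test_location)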
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
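# Optional check (sketch, not in the original notebook): the static hyperparameters set above
# can be inspected before handing the estimator to the tuner; the tuner will override any of
# them that fall inside the ranges it searches over.
print(xgb.hyperparameters())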
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
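# Optional sketch (not in the original notebook): besides best_training_job(), the tuner can
# summarize all of its trials. analytics().dataframe() returns one row per training job with
# the hyperparameters tried and the final objective value.
print(xgb_hyperparameter_tuner.best_training_job())
tuning_results = xgb_hyperparameter_tuner.analytics().dataframe()
tuning_results.sort_values('FinalObjectiveValue').head()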
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer instance created above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
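###Markdown
The `preprocessor=lambda x: x, tokenizer=lambda x: x` trick above is easy to gloss over, so here is a tiny illustration, on toy token lists rather than the IMDb data, of how it lets `CountVectorizer` consume documents that have already been tokenized.
###Code
# Toy illustration: fit CountVectorizer on pre-tokenized documents by passing
# identity functions for the preprocessor and tokenizer (so no lowercasing or
# re-tokenization happens inside the vectorizer).
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = [['great', 'movi', 'great', 'act'],
            ['bad', 'movi']]
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_docs).toarray()

# Tokens in column order, followed by one row of counts per document.
print(sorted(toy_vectorizer.vocabulary_, key=toy_vectorizer.vocabulary_.get))
print(toy_features)
###Output
_____no_output_____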
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
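###Markdown
Getting the CSV layout wrong is one of the easier mistakes to make with the built-in XGBoost algorithm, so a quick read-back of the file we just wrote can confirm the label-first, headerless format. This is only a sketch and assumes the cell above ran with the default `vocabulary_size` of 5000.
###Code
# Read back the first row of train.csv to confirm the expected layout:
# no header, no index, label in the first column, then the bag-of-words counts.
sample_row = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=1)
print("columns:", sample_row.shape[1])    # expected 5001 (1 label + 5000 features)
print("label:  ", sample_row.iloc[0, 0])  # expected 0 or 1
###Output
_____no_output_____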
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
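###Markdown
The `upload_data()` calls return the S3 URIs of the uploaded objects, so printing them is a quick way to confirm where everything landed before pointing a training job at it.
###Code
# Each *_location variable holds an S3 URI such as
# 's3://<default-bucket>/sentiment-xgboost/train.csv'.
print(train_location)
print(val_location)
print(test_location)
###Output
_____no_output_____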
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
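###Markdown
Once the tuning job has finished, the SageMaker SDK can summarize every training job it launched. The sketch below assumes the tuning job above completed successfully; `analytics().dataframe()` returns one row per trained model with its sampled hyperparameters and final objective value.
###Code
# Summarize the tuning job: one row per training job. Since the objective is
# being minimized, sorting ascending puts the best model first.
tuning_results = xgb_hyperparameter_tuner.analytics().dataframe()
tuning_results.sort_values('FinalObjectiveValue').head()
###Output
_____no_output_____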
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model and convert it to something a little more usable: in this case we want the sentiment to be either `1` (positive) or `0` (negative). We can then compare the rounded predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
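###Markdown
Accuracy alone hides whether the model is biased towards one class, so it can be worth a quick look at how the errors split between positive and negative reviews. This uses `test_y` and `predictions` from the cells above.
###Code
# Rows are the true labels (0 = negative, 1 = positive), columns the predicted ones.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, predictions))
###Output
_____no_output_____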
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-07-19 22:30:09-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 23.9MB/s in 4.3s
2020-07-19 22:30:13 (18.7 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return a unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
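###Markdown
Before doing anything heavier, it is worth a quick spot check that the shuffle kept reviews and labels aligned. The cell below prints one label next to the start of its review; it assumes the cell above has been run.
###Code
# Label 1 means a positive review, 0 a negative one; the text should read accordingly.
print("label: ", train_y[100])
print("review:", train_X[100][:200], "...")
###Output
_____no_output_____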
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
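###Markdown
One small gotcha with the caching above: if `review_to_words` is changed later, `preprocess_data` will keep returning the stale cached result. Below is a sketch of how to force a fresh run; the filename matches the default `cache_file` argument used above.
###Code
# Delete the preprocessing cache so the next call to preprocess_data()
# recomputes everything instead of reading the stale pickle.
stale_cache = os.path.join(cache_dir, "preprocessed_data.pkl")
if os.path.exists(stale_cache):
    os.remove(stale_cache)
    print("Removed", stale_cache)
###Output
_____no_output_____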
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
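###Markdown
Before launching any (billable) jobs, it can be worth confirming what the base estimator will actually use. The read-only check below simply prints the container image URI retrieved earlier and the static hyperparameters set above.
###Code
# Inspect what the base estimator will use before handing it to the tuner.
print(container)              # the XGBoost container image URI
print(xgb.hyperparameters())  # the static hyperparameters set above
###Output
_____no_output_____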
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
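###Markdown
The transformer keeps track of where its results were written, which makes the `aws s3 cp` command a little further down less mysterious; printing the path also confirms that the job produced output where we expect.
###Code
# The batch transform results live under this S3 prefix (one .out file per input file).
print(xgb_transformer.output_path)
###Output
_____no_output_____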
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model and convert it to something a little more usable: in this case we want the sentiment to be either `1` (positive) or `0` (negative). We can then compare the rounded predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---As our first example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
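###Markdown
A quick look at what the archive unpacked into can save some head-scratching later. This sketch only lists directory contents and assumes the download cell above completed.
###Code
import os

# The IMDb archive unpacks into ../data/aclImdb with separate train/ and test/
# folders, each containing pos/ and neg/ subfolders of individual review files.
print(os.listdir('../data/aclImdb'))
print(os.listdir('../data/aclImdb/train'))
###Output
_____no_output_____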
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return a unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a hyperparameter-tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 20, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
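###Markdown
After the tuning job finishes, the name of the winning training job can be pulled straight from the tuner; it is the same name that `attach()` will need in the next TODO cell.
###Code
# The training job (and therefore model) that achieved the best objective value.
print(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____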
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
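Accuracy is what the next cell reports; if you also want to see how the errors split between false positives and false negatives, a confusion matrix is a small optional extra. The sketch below assumes the next cell has already been run so that `predictions` and `test_y` exist.

```python
# Optional sketch: break the accuracy down into a confusion matrix
# (rows = true label, columns = predicted label).
from sklearn.metrics import confusion_matrix

print(confusion_matrix(test_y, predictions))
```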
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (1.72.0)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.8)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.63)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4)
Requirement already satisfied: botocore<1.20.0,>=1.19.63 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.63)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (1.26.2)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
WARNING: You are using pip version 20.3.3; however, version 21.0.1 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/python3/bin/python -m pip install --upgrade pip' command.
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2021-02-19 06:08:14-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 23.2MB/s in 4.9s
2021-02-19 06:08:19 (16.3 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
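# For illustration only (a hedged example, not executed here): after HTML stripping,
# lowercasing, stopword removal and stemming, a review such as
#   "<br />This movie was GREAT!"
# becomes the token list ['movi', 'great'].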
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
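To make the Bag-of-Words idea concrete before running the full extraction below, here is a tiny self-contained sketch on made-up, already-tokenized documents (the `toy` data is purely illustrative and not part of the original notebook):

```python
# Toy illustration of Bag-of-Words counts; the documents are pre-tokenized,
# mirroring how extract_BoW_features below passes in pre-tokenized reviews.
from sklearn.feature_extraction.text import CountVectorizer

toy = [['good', 'movi'], ['bad', 'movi', 'bad']]
vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(vec.fit_transform(toy).toarray())  # [[0 1 1], [2 0 1]]
print(vec.vocabulary_)                   # maps each term to its column: bad -> 0, good -> 1, movi -> 2
```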
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
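As a quick size check (the counts follow from the IMDb split above): the training set has 25,000 reviews, so after the slicing in the next cell the first 10,000 rows become the validation set and the remaining 25,000 - 10,000 = 15,000 rows are used for training, while the 25,000 test reviews are left untouched.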
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
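If you want to double-check the layout after the next cell has written the files, a quick optional sanity check like the sketch below can confirm that the label comes first and that there is no header row (the expected width assumes the 5000-word vocabulary used above):

```python
# Optional sanity check (run only after the next cell has written train.csv):
sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=3)
print(sample.shape)       # expected (3, 5001): one label column + 5000 BoW counts
print(sample.iloc[:, 0])  # the first column should hold the 0/1 sentiment labels
```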
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
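Once the next cell has uploaded the three files, you can also verify the result from the notebook itself. The sketch below is an optional addition that lists the objects under the prefix using boto3 (available on SageMaker notebook instances); it assumes `session` and `prefix` from the next cell.

```python
# Optional check: list what ended up under the prefix in the session's default bucket.
import boto3

s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
```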
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
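As a rough back-of-the-envelope estimate: with `max_jobs = 6` and `max_parallel_jobs = 3` (the values used in the next cell), the tuner runs the jobs in about ceil(6 / 3) = 2 waves, so the whole tuning job should take roughly twice as long as training a single model, plus some scheduling overhead.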
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background, so if we want to see the progress of our tuning job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
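Since the raw batch transform output is a probability rather than a hard 0/1 label, you can also compute a threshold-free metric before rounding. The sketch below is optional and assumes the transform output has already been copied into `data_dir` by the cell above.

```python
# Optional sketch: AUC on the raw (un-rounded) probabilities.
from sklearn.metrics import roc_auc_score

raw_scores = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None).squeeze().values
print(roc_auc_score(test_y.values.ravel(), raw_scores))
```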
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a random tree model to predict the sentiment of a movie review. You may have seen a version of this example in a pervious lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
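After the next cell runs, each `upload_data()` call returns the full S3 URI of the uploaded file, so printing one of them is an easy way to see exactly where the data landed. The bucket name shown in the comment is only an example, as it is account- and region-specific.

```python
# Optional: inspect where the training data was uploaded.
print(train_location)
# e.g. s3://sagemaker-us-east-1-123456789012/sentiment-xgboost/train.csv  (example only)
```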
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background, so if we want to see the progress of our tuning job we need to call the `wait()` method.
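Once the cell below reports that tuning has finished, you can also ask the tuner for the name of the winning training job directly; this is the same name that `attach()` uses further down. A small optional sketch:

```python
# Optional: show which training job won the tuning run (valid once tuning has completed).
print(xgb_hyperparameter_tuner.best_training_job())
```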
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer created above
return words
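# Illustrative check (not part of the original notebook): applying review_to_words
# to a short sample review should strip the HTML tag, lowercase the text, drop
# stopwords and stem what remains -- something like ['movi', 'great', 'love'].
print(review_to_words("The movie was <br />GREAT and I loved it!"))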
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
        # NOTE: The vectorizer returns a sparse matrix; .toarray() converts it to a dense NumPy array
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
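# Illustrative check (not part of the original notebook): train_X and test_X should
# now be dense NumPy arrays of shape (number of reviews, vocabulary_size), where each
# row counts how often each vocabulary word appears in one review.
print(train_X.shape, test_X.shape, len(vocabulary))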
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload them to S3. In addition, we will write the test set input to a file and upload that file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The XGBoost algorithm in SageMaker requires that the saved datasets contain no header row or index column and that, for the training and validation data, the label appears first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([pd.DataFrame(test_y), pd.DataFrame(test_X)], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([pd.DataFrame(val_y), pd.DataFrame(val_X)], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X)], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
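# Optional sanity check (illustrative, not part of the original notebook): each row of
# train.csv should start with the label (0 or 1) followed by the bag-of-words counts,
# with no header row and no index column.
with open(os.path.join(data_dir, 'train.csv')) as f:
    first_row = f.readline().strip().split(',')
print("label:", first_row[0], "- number of feature columns:", len(first_row) - 1)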
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
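# Illustrative extra (assumption: the tuning job has finished): the tuner's analytics
# object summarizes every training job it launched, including the hyperparameters used
# and the final objective value, which makes it easy to see which combination won.
tuning_results = xgb_hyperparameter_tuner.analytics().dataframe()
tuning_results.sort_values('FinalObjectiveValue').head()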
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is to read in the output from our model and convert it to something a little more usable: in this case we want the sentiment to be either `1` (positive) or `0` (negative). We then compare these predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
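# Optional extra (illustrative, not part of the original notebook): a confusion matrix
# shows how many positive and negative reviews were misclassified, which is more
# informative than the accuracy number alone.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, predictions))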
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson, although there it was done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer created above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
        # NOTE: The vectorizer returns a sparse matrix; .toarray() converts it to a dense NumPy array
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload them to S3. In addition, we will write the test set input to a file and upload that file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The XGBoost algorithm in SageMaker requires that the saved datasets contain no header row or index column and that, for the training and validation data, the label appears first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
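# Illustrative check (not part of the original notebook): upload_data returns the full
# S3 URI of each uploaded file, so printing the locations makes it easy to find the
# files in the S3 console.
print(train_location)
print(val_location)
print(test_location)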
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
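# Illustrative check (not part of the original notebook): print the name of the best
# training job found by the tuner, so we know exactly which model xgb_attached refers to.
print(xgb_hyperparameter_tuner.best_training_job())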
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is to read in the output from our model and convert it to something a little more usable: in this case we want the sentiment to be either `1` (positive) or `0` (negative). We then compare these predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
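# Illustrative peek (assumption: the batch transform output rows line up with the rows
# of test.csv, so predictions and test_y are aligned): compare a few predicted
# sentiments against the ground truth labels before computing the overall accuracy.
print("predicted:", predictions[:10])
print("actual:   ", list(test_y.values.ravel()[:10]))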
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson, although there it was done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer created above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
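# Illustrative check (not part of the original notebook): after preprocessing, each
# review is a list of stemmed tokens with HTML, punctuation and stopwords removed.
print(len(train_X), "training reviews; the first one has", len(train_X[0]), "tokens:", train_X[0][:10])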
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
        # NOTE: The vectorizer returns a sparse matrix; .toarray() converts it to a dense NumPy array
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
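# Illustrative peek (assumption: `vocabulary` maps each token to its column index in
# the bag-of-words matrices): list a few vocabulary terms and confirm the feature shapes.
print(sorted(vocabulary, key=vocabulary.get)[:10])
print("feature matrix shapes:", train_X.shape, test_X.shape)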
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload them to S3. In addition, we will write the test set input to a file and upload that file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The XGBoost algorithm in SageMaker requires that the saved datasets contain no header row or index column and that, for the training and validation data, the label appears first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our tuning job we need to call the `wait()` method.
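Once the job has finished, the results of every individual training job it launched can also be pulled into a dataframe for inspection. This is a sketch only: treat the `latest_tuning_job.job_name` attribute and the `FinalObjectiveValue` column name as assumptions to verify against your SDK version.

```python
# Inspect the tuning results after the job has completed (sketch).
tuning_job_name = xgb_hyperparameter_tuner.latest_tuning_job.job_name  # assumption: v1 SDK attribute
tuning_results = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()
tuning_results.sort_values('FinalObjectiveValue').head()  # best (lowest rmse) models first
```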
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
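If you prefer to stay in Python rather than shell out, roughly the same copy can be done with boto3 (a sketch; it assumes the transformer's output path has the usual `s3://bucket/key` form and that the output file is named `test.csv.out`):

```python
import os
import boto3

# Split 's3://bucket/some/prefix' into the bucket name and the key prefix.
bucket, _, key_prefix = xgb_transformer.output_path[len('s3://'):].partition('/')
boto3.client('s3').download_file(bucket, key_prefix + '/test.csv.out',
                                 os.path.join(data_dir, 'test.csv.out'))
```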
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable (in this case we want the sentiment to be either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
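Accuracy alone can hide which class the model struggles with; a confusion matrix is a cheap way to check. A short sketch using scikit-learn, to be run after `predictions` has been built in the cell below:

```python
from sklearn.metrics import confusion_matrix

# Rows are the true labels (0 = negative, 1 = positive), columns are the predictions.
print(confusion_matrix(test_y, predictions))
```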
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
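The same clean up can also be done in Python, which is handy if the shell commands complain about missing files (a minimal sketch):

```python
import shutil

# Remove the local data and cache directories created by this notebook.
shutil.rmtree(data_dir, ignore_errors=True)
shutil.rmtree(cache_dir, ignore_errors=True)
```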
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient-boosted tree model (via XGBoost) to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2019-07-09 22:20:22-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 23.4MB/s in 4.3s
2019-07-09 22:20:27 (18.8 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
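As a concrete illustration of the kind of normalization described here, the `review_to_words` function defined in the next cell turns a raw review fragment into a list of stemmed, stopword-free tokens (the exact output shown is approximate):

```python
review_to_words("<br />This movie was GREAT -- I loved the acting!")
# roughly: ['movi', 'great', 'love', 'act']
```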
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Keep only alphanumeric characters and convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer created above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
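The point about only using the training set is the key one: the vocabulary is fixed when the vectorizer is fit, so any word that never appears in the training documents is simply ignored at test time. A toy sketch of that behaviour, independent of the cached pipeline below:

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [['great', 'movi'], ['bad', 'act']]
toy_test = [['great', 'act', 'unseen']]

# The documents are already tokenized, so pass-through preprocessor/tokenizer are used,
# mirroring the extract_BoW_features() cell below.
vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(vec.fit_transform(toy_train).toarray())  # vocabulary comes from the training docs only
print(vec.transform(toy_test).toarray())       # the word 'unseen' is silently dropped
```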
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
Wrote features to cache file: bow_features.pkl
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our tuning job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
.........................................................................................................................................................................................................................................................................................................................................................!
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
2019-07-09 22:46:53 Starting - Preparing the instances for training
2019-07-09 22:46:53 Downloading - Downloading input data
2019-07-09 22:46:53 Training - Training image download completed. Training in progress.
2019-07-09 22:46:53 Uploading - Uploading generated training model
2019-07-09 22:46:53 Completed - Training job completed[31mArguments: train[0m
[31m[2019-07-09:22:36:36:INFO] Running standalone xgboost training.[0m
[31m[2019-07-09:22:36:36:INFO] Setting up HPO optimized metric to be : rmse[0m
[31m[2019-07-09:22:36:36:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8445.2mb[0m
[31m[2019-07-09:22:36:36:INFO] Determined delimiter of CSV input is ','[0m
[31m[22:36:36] S3DistributionType set as FullyReplicated[0m
[31m[22:36:38] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[31m[2019-07-09:22:36:38:INFO] Determined delimiter of CSV input is ','[0m
[31m[22:36:38] S3DistributionType set as FullyReplicated[0m
[31m[22:36:39] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[31m[22:36:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 104 extra nodes, 66 pruned nodes, max_depth=10[0m
[31m[0]#011train-rmse:0.454198#011validation-rmse:0.458967[0m
[31mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[31mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[31m[22:36:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 112 extra nodes, 90 pruned nodes, max_depth=10[0m
[31m[1]#011train-rmse:0.428239#011validation-rmse:0.436149[0m
[31m[22:36:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 64 extra nodes, 68 pruned nodes, max_depth=10[0m
[31m[2]#011train-rmse:0.411639#011validation-rmse:0.421366[0m
[31m[22:36:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 84 extra nodes, 80 pruned nodes, max_depth=10[0m
[31m[3]#011train-rmse:0.398364#011validation-rmse:0.410606[0m
[31m[22:36:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 70 pruned nodes, max_depth=10[0m
[31m[4]#011train-rmse:0.388268#011validation-rmse:0.402851[0m
[31m[22:36:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 74 pruned nodes, max_depth=10[0m
[31m[5]#011train-rmse:0.380218#011validation-rmse:0.396188[0m
[31m[22:37:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 82 pruned nodes, max_depth=10[0m
[31m[6]#011train-rmse:0.372806#011validation-rmse:0.390445[0m
[31m[22:37:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 66 pruned nodes, max_depth=10[0m
[31m[7]#011train-rmse:0.365424#011validation-rmse:0.385095[0m
[31m[22:37:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 68 extra nodes, 76 pruned nodes, max_depth=10[0m
[31m[8]#011train-rmse:0.358151#011validation-rmse:0.380788[0m
[31m[22:37:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 104 pruned nodes, max_depth=10[0m
[31m[9]#011train-rmse:0.353028#011validation-rmse:0.376145[0m
[31m[22:37:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 60 pruned nodes, max_depth=10[0m
[31m[10]#011train-rmse:0.348633#011validation-rmse:0.372802[0m
[31m[22:37:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 64 pruned nodes, max_depth=10[0m
[31m[11]#011train-rmse:0.344053#011validation-rmse:0.369278[0m
[31m[22:37:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 78 pruned nodes, max_depth=10[0m
[31m[12]#011train-rmse:0.340206#011validation-rmse:0.366822[0m
[31m[22:37:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 114 pruned nodes, max_depth=10[0m
[31m[13]#011train-rmse:0.335314#011validation-rmse:0.363911[0m
[31m[22:37:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 90 pruned nodes, max_depth=10[0m
[31m[14]#011train-rmse:0.331831#011validation-rmse:0.361416[0m
[31m[22:37:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 26 pruned nodes, max_depth=10[0m
[31m[15]#011train-rmse:0.328065#011validation-rmse:0.358772[0m
[31m[22:37:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 64 pruned nodes, max_depth=10[0m
[31m[16]#011train-rmse:0.324406#011validation-rmse:0.356843[0m
[31m[22:37:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 66 pruned nodes, max_depth=10[0m
[31m[17]#011train-rmse:0.320964#011validation-rmse:0.354284[0m
[31m[22:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 42 pruned nodes, max_depth=10[0m
[31m[18]#011train-rmse:0.318313#011validation-rmse:0.351877[0m
[31m[22:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 60 pruned nodes, max_depth=10[0m
[31m[19]#011train-rmse:0.316215#011validation-rmse:0.350496[0m
[31m[22:37:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 54 pruned nodes, max_depth=10[0m
[31m[20]#011train-rmse:0.314186#011validation-rmse:0.349284[0m
[31m[22:37:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 88 pruned nodes, max_depth=10[0m
[31m[21]#011train-rmse:0.31054#011validation-rmse:0.347256[0m
[31m[22:37:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 58 pruned nodes, max_depth=10[0m
[31m[22]#011train-rmse:0.308258#011validation-rmse:0.345882[0m
[31m[22:37:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 54 pruned nodes, max_depth=10[0m
[31m[23]#011train-rmse:0.306033#011validation-rmse:0.344859[0m
[31m[22:37:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 30 pruned nodes, max_depth=10[0m
[31m[24]#011train-rmse:0.304082#011validation-rmse:0.343536[0m
[31m[22:37:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 48 pruned nodes, max_depth=10[0m
[31m[25]#011train-rmse:0.301938#011validation-rmse:0.34249[0m
[31m[22:37:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 54 pruned nodes, max_depth=10[0m
[31m[26]#011train-rmse:0.300339#011validation-rmse:0.341454[0m
[31m[22:37:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 44 pruned nodes, max_depth=10[0m
[31m[27]#011train-rmse:0.298887#011validation-rmse:0.340385[0m
[31m[22:37:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 68 pruned nodes, max_depth=10[0m
[31m[28]#011train-rmse:0.297202#011validation-rmse:0.339585[0m
[31m[22:38:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 32 pruned nodes, max_depth=10[0m
[31m[29]#011train-rmse:0.295111#011validation-rmse:0.338701[0m
[31m[22:38:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 64 pruned nodes, max_depth=10[0m
[31m[30]#011train-rmse:0.293492#011validation-rmse:0.337874[0m
[31m[22:38:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 58 pruned nodes, max_depth=10[0m
[31m[31]#011train-rmse:0.291508#011validation-rmse:0.337095[0m
[31m[22:38:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 80 pruned nodes, max_depth=10[0m
[31m[32]#011train-rmse:0.29008#011validation-rmse:0.335956[0m
[31m[22:38:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 28 pruned nodes, max_depth=10[0m
[31m[33]#011train-rmse:0.288051#011validation-rmse:0.335296[0m
[31m[22:38:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 30 pruned nodes, max_depth=10[0m
[31m[34]#011train-rmse:0.286726#011validation-rmse:0.334235[0m
[31m[22:38:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 24 pruned nodes, max_depth=10[0m
[31m[35]#011train-rmse:0.285526#011validation-rmse:0.33372[0m
[31m[22:38:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 44 pruned nodes, max_depth=10[0m
[31m[36]#011train-rmse:0.28367#011validation-rmse:0.332885[0m
[31m[22:38:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 46 pruned nodes, max_depth=10[0m
[31m[37]#011train-rmse:0.282305#011validation-rmse:0.332249[0m
[31m[22:38:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 36 pruned nodes, max_depth=10[0m
[31m[38]#011train-rmse:0.280877#011validation-rmse:0.331659[0m
[31m[22:38:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 72 pruned nodes, max_depth=10[0m
[31m[39]#011train-rmse:0.279765#011validation-rmse:0.331052[0m
[31m[22:38:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 102 pruned nodes, max_depth=10[0m
[31m[40]#011train-rmse:0.278683#011validation-rmse:0.330399[0m
[31m[22:38:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 70 pruned nodes, max_depth=10[0m
[31m[41]#011train-rmse:0.276787#011validation-rmse:0.329973[0m
[31m[22:38:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 74 pruned nodes, max_depth=10[0m
[31m[42]#011train-rmse:0.275861#011validation-rmse:0.328988[0m
[31m[22:38:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 36 pruned nodes, max_depth=10[0m
[31m[43]#011train-rmse:0.274695#011validation-rmse:0.328494[0m
[31m[22:38:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 30 pruned nodes, max_depth=10[0m
[31m[44]#011train-rmse:0.273499#011validation-rmse:0.328118[0m
[31m[22:38:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 68 pruned nodes, max_depth=10[0m
[31m[45]#011train-rmse:0.272347#011validation-rmse:0.327736[0m
[31m[22:38:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 28 pruned nodes, max_depth=10[0m
[31m[46]#011train-rmse:0.271232#011validation-rmse:0.326956[0m
[31m[22:38:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 28 pruned nodes, max_depth=10[0m
[31m[47]#011train-rmse:0.270293#011validation-rmse:0.326377[0m
[31m[22:38:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 44 pruned nodes, max_depth=10[0m
[31m[48]#011train-rmse:0.269234#011validation-rmse:0.325593[0m
[31m[22:38:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=8[0m
[31m[49]#011train-rmse:0.268392#011validation-rmse:0.32535[0m
[31m[22:38:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 34 pruned nodes, max_depth=10[0m
[31m[50]#011train-rmse:0.267501#011validation-rmse:0.324789[0m
[31m[22:38:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 38 pruned nodes, max_depth=10[0m
[31m[51]#011train-rmse:0.266191#011validation-rmse:0.324404[0m
[31m[22:38:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 38 pruned nodes, max_depth=9[0m
[31m[52]#011train-rmse:0.265437#011validation-rmse:0.324614[0m
[31m[22:39:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 30 pruned nodes, max_depth=4[0m
[31m[53]#011train-rmse:0.265017#011validation-rmse:0.32448[0m
[31m[22:39:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 38 pruned nodes, max_depth=8[0m
[31m[54]#011train-rmse:0.264287#011validation-rmse:0.324015[0m
[31m[22:39:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 60 pruned nodes, max_depth=4[0m
[31m[55]#011train-rmse:0.263953#011validation-rmse:0.32367[0m
[31m[22:39:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 62 pruned nodes, max_depth=8[0m
[31m[56]#011train-rmse:0.263249#011validation-rmse:0.323243[0m
[31m[22:39:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 74 pruned nodes, max_depth=9[0m
[31m[57]#011train-rmse:0.262487#011validation-rmse:0.322712[0m
[31m[22:39:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 44 pruned nodes, max_depth=9[0m
[31m[58]#011train-rmse:0.261673#011validation-rmse:0.322623[0m
[31m[22:39:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 30 pruned nodes, max_depth=8[0m
[31m[59]#011train-rmse:0.261027#011validation-rmse:0.321966[0m
[31m[22:39:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 50 pruned nodes, max_depth=8[0m
[31m[60]#011train-rmse:0.260469#011validation-rmse:0.321791[0m
[31m[22:39:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 60 pruned nodes, max_depth=5[0m
[31m[61]#011train-rmse:0.260093#011validation-rmse:0.321422[0m
[31m[22:39:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 30 pruned nodes, max_depth=9[0m
[31m[62]#011train-rmse:0.259311#011validation-rmse:0.321375[0m
[31m[22:39:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 46 pruned nodes, max_depth=10[0m
[31m[63]#011train-rmse:0.258479#011validation-rmse:0.32096[0m
[31m[22:39:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 36 pruned nodes, max_depth=2[0m
[31m[64]#011train-rmse:0.258379#011validation-rmse:0.320737[0m
[31m[22:39:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 84 pruned nodes, max_depth=6[0m
[31m[65]#011train-rmse:0.25793#011validation-rmse:0.320864[0m
[31m[22:39:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 30 pruned nodes, max_depth=9[0m
[31m[66]#011train-rmse:0.257225#011validation-rmse:0.320436[0m
[31m[22:39:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 40 pruned nodes, max_depth=3[0m
[31m[67]#011train-rmse:0.256956#011validation-rmse:0.320184[0m
[31m[22:39:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 54 pruned nodes, max_depth=10[0m
[31m[68]#011train-rmse:0.256147#011validation-rmse:0.319881[0m
[31m[22:39:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 32 pruned nodes, max_depth=10[0m
[31m[69]#011train-rmse:0.255477#011validation-rmse:0.319519[0m
[31m[22:39:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 30 pruned nodes, max_depth=10[0m
[31m[70]#011train-rmse:0.254759#011validation-rmse:0.319638[0m
[31m[22:39:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 42 pruned nodes, max_depth=1[0m
[31m[71]#011train-rmse:0.254633#011validation-rmse:0.319632[0m
[31m[22:39:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 32 pruned nodes, max_depth=5[0m
[31m[72]#011train-rmse:0.254205#011validation-rmse:0.319479[0m
[31m[22:39:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 60 pruned nodes, max_depth=0[0m
[31m[73]#011train-rmse:0.254203#011validation-rmse:0.319477[0m
[31m[22:39:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=10[0m
[31m[74]#011train-rmse:0.253567#011validation-rmse:0.319239[0m
[31m[22:39:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 20 pruned nodes, max_depth=10[0m
[31m[75]#011train-rmse:0.252851#011validation-rmse:0.318982[0m
[31m[22:40:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 20 pruned nodes, max_depth=10[0m
[31m[76]#011train-rmse:0.252134#011validation-rmse:0.318866[0m
[31m[22:40:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 22 pruned nodes, max_depth=10[0m
[31m[77]#011train-rmse:0.251359#011validation-rmse:0.318502[0m
[31m[22:40:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 44 pruned nodes, max_depth=5[0m
[31m[78]#011train-rmse:0.251032#011validation-rmse:0.318171[0m
[31m[22:40:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 50 pruned nodes, max_depth=0[0m
[31m[79]#011train-rmse:0.251037#011validation-rmse:0.318174[0m
[31m[22:40:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 64 pruned nodes, max_depth=0[0m
[31m[80]#011train-rmse:0.251053#011validation-rmse:0.318186[0m
[31m[22:40:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0[0m
[31m[81]#011train-rmse:0.251075#011validation-rmse:0.318202[0m
[31m[22:40:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 52 pruned nodes, max_depth=0[0m
[31m[82]#011train-rmse:0.251081#011validation-rmse:0.318207[0m
[31m[22:40:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 30 pruned nodes, max_depth=6[0m
[31m[83]#011train-rmse:0.250483#011validation-rmse:0.31799[0m
[31m[22:40:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 30 pruned nodes, max_depth=9[0m
[31m[84]#011train-rmse:0.249848#011validation-rmse:0.31766[0m
[31m[22:40:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 54 pruned nodes, max_depth=0[0m
[31m[85]#011train-rmse:0.249845#011validation-rmse:0.317658[0m
[31m[22:40:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 38 pruned nodes, max_depth=0[0m
[31m[86]#011train-rmse:0.249831#011validation-rmse:0.317648[0m
[31m[22:40:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 46 pruned nodes, max_depth=0[0m
[31m[87]#011train-rmse:0.249835#011validation-rmse:0.317651[0m
[31m[22:40:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 38 pruned nodes, max_depth=0[0m
[31m[88]#011train-rmse:0.249843#011validation-rmse:0.317656[0m
[31m[22:40:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 78 pruned nodes, max_depth=4[0m
[31m[89]#011train-rmse:0.249646#011validation-rmse:0.317762[0m
[31m[22:40:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 42 pruned nodes, max_depth=4[0m
[31m[90]#011train-rmse:0.249366#011validation-rmse:0.317727[0m
[31m[22:40:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 36 pruned nodes, max_depth=1[0m
[31m[91]#011train-rmse:0.249301#011validation-rmse:0.317739[0m
[31m[22:40:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 54 pruned nodes, max_depth=0[0m
[31m[92]#011train-rmse:0.2493#011validation-rmse:0.317739[0m
[31m[22:40:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 76 pruned nodes, max_depth=0[0m
[31m[93]#011train-rmse:0.24932#011validation-rmse:0.317753[0m
[31m[22:40:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 78 pruned nodes, max_depth=4[0m
[31m[94]#011train-rmse:0.248942#011validation-rmse:0.317616[0m
[31m[22:40:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 72 pruned nodes, max_depth=0[0m
[31m[95]#011train-rmse:0.248952#011validation-rmse:0.317623[0m
[31m[22:40:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 84 pruned nodes, max_depth=2[0m
[31m[96]#011train-rmse:0.248839#011validation-rmse:0.317629[0m
[31m[22:40:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 36 pruned nodes, max_depth=0[0m
[31m[97]#011train-rmse:0.248848#011validation-rmse:0.317636[0m
[31m[22:40:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0[0m
[31m[98]#011train-rmse:0.248848#011validation-rmse:0.317636[0m
[31m[22:41:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 42 pruned nodes, max_depth=1[0m
[31m[99]#011train-rmse:0.248733#011validation-rmse:0.317598[0m
[31m[22:41:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0[0m
[31m[100]#011train-rmse:0.248728#011validation-rmse:0.317595[0m
[31m[22:41:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 40 pruned nodes, max_depth=0[0m
[31m[101]#011train-rmse:0.248733#011validation-rmse:0.317598[0m
[31m[22:41:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 26 pruned nodes, max_depth=10[0m
[31m[102]#011train-rmse:0.247936#011validation-rmse:0.317382[0m
[31m[22:41:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 50 pruned nodes, max_depth=5[0m
[31m[103]#011train-rmse:0.247656#011validation-rmse:0.317281[0m
[31m[22:41:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 40 pruned nodes, max_depth=5[0m
[31m[104]#011train-rmse:0.247204#011validation-rmse:0.317119[0m
[31m[22:41:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 30 pruned nodes, max_depth=2[0m
[31m[105]#011train-rmse:0.247037#011validation-rmse:0.31713[0m
[31m[22:41:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 40 pruned nodes, max_depth=0[0m
[31m[106]#011train-rmse:0.247045#011validation-rmse:0.317135[0m
[31m[22:41:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 52 pruned nodes, max_depth=0[0m
[31m[107]#011train-rmse:0.247046#011validation-rmse:0.317136[0m
[31m[22:41:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 60 pruned nodes, max_depth=2[0m
[31m[108]#011train-rmse:0.246907#011validation-rmse:0.31712[0m
[31m[22:41:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 38 pruned nodes, max_depth=0[0m
[31m[109]#011train-rmse:0.246909#011validation-rmse:0.317121[0m
[31m[22:41:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 58 pruned nodes, max_depth=1[0m
[31m[110]#011train-rmse:0.246802#011validation-rmse:0.317113[0m
[31m[22:41:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 44 pruned nodes, max_depth=7[0m
[31m[111]#011train-rmse:0.246298#011validation-rmse:0.317153[0m
[31m[22:41:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 88 pruned nodes, max_depth=0[0m
[31m[112]#011train-rmse:0.246309#011validation-rmse:0.317161[0m
[31m[22:41:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 58 pruned nodes, max_depth=4[0m
[31m[113]#011train-rmse:0.246022#011validation-rmse:0.317156[0m
[31m[22:41:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 56 pruned nodes, max_depth=0[0m
[31m[114]#011train-rmse:0.246022#011validation-rmse:0.317156[0m
[31m[22:41:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0[0m
[31m[115]#011train-rmse:0.246012#011validation-rmse:0.317149[0m
[31m[22:41:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 66 pruned nodes, max_depth=0[0m
[31m[116]#011train-rmse:0.24601#011validation-rmse:0.317148[0m
[31m[22:41:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 46 pruned nodes, max_depth=0[0m
[31m[117]#011train-rmse:0.246032#011validation-rmse:0.317163[0m
[31m[22:41:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 36 pruned nodes, max_depth=8[0m
[31m[118]#011train-rmse:0.245523#011validation-rmse:0.317006[0m
[31m[22:41:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 40 pruned nodes, max_depth=2[0m
[31m[119]#011train-rmse:0.245371#011validation-rmse:0.316977[0m
[31m[22:41:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 60 pruned nodes, max_depth=0[0m
[31m[120]#011train-rmse:0.245352#011validation-rmse:0.316962[0m
[31m[22:41:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 70 pruned nodes, max_depth=0[0m
[31m[121]#011train-rmse:0.245364#011validation-rmse:0.316971[0m
[122]  train-rmse:0.245343  validation-rmse:0.316955
[123]  train-rmse:0.244796  validation-rmse:0.316923
[... per-iteration tree pruning messages and iterations 124-231 omitted; validation-rmse improves slowly from about 0.3170 to 0.3154 ...]
[232]  train-rmse:0.239028  validation-rmse:0.315466
[233]  train-rmse:0.239039  validation-rmse:0.315474
Stopping. Best iteration:
[223]  train-rmse:0.239  validation-rmse:0.315447
Billable seconds: 655
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
............................................!
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/372.5 KiB (3.3 MiB/s) with 1 file(s) remaining
Completed 372.5 KiB/372.5 KiB (4.6 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-559643369662/xgboost-190709-2225-005-fe3591b2-2019-07-09-22-54-45-140/test.csv.out to ../data/xgboost/test.csv.out
###Markdown
The last step is now to read in the output from our model and convert it to something a little more usable; in this case we want the sentiment to be either `1` (positive) or `0` (negative). We can then compare these predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited, typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return the unified training data, test data, training labels and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer instance created above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
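###Markdown
As a quick, optional sanity check we can apply `review_to_words` to a short made-up review and look at what the preprocessing produces: HTML removed, text lowercased, stopwords dropped and the remaining words stemmed.
###Code
# Optional sanity check on a made-up review string (illustrative only).
sample_review = "This movie was <br />GREAT -- great acting, great story!"
print(review_to_words(sample_review))
# Expect a list of stemmed, lowercased tokens with stopwords and HTML removed,
# e.g. something like ['movi', 'great', 'great', 'act', 'great', 'stori'].
###Output
_____no_output_____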
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() so they are plain (dense) NumPy arrays
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
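###Markdown
To make the Bag-of-Words representation concrete, the optional cell below runs the same kind of `CountVectorizer` on a tiny made-up corpus of two already-tokenized "reviews". Each document becomes a vector of word counts over a shared vocabulary, which is what `extract_BoW_features` does at a much larger scale.
###Code
# Illustrative only: Bag-of-Words on a tiny, made-up corpus of pre-tokenized documents.
toy_docs = [['great', 'movi', 'great', 'act'], ['bad', 'movi', 'bad', 'plot']]
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_docs).toarray()
print(toy_vectorizer.vocabulary_)  # maps each word to a column index
print(toy_features)                # one row of word counts per document
###Output
_____no_output_____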
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
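###Markdown
Optionally, we can confirm the split before writing anything to disk: the IMDb training set has 25,000 reviews, so after setting aside the first 10,000 rows for validation we expect 15,000 training rows and 25,000 test rows, each with 5,000 bag-of-words features.
###Code
# Optional check on the shapes of the train/validation/test splits.
print(train_X.shape, val_X.shape, test_X.shape)  # expect (15000, 5000) (10000, 5000) (25000, 5000)
###Output
_____no_output_____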
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index columns and that, for the training and validation data, the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
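###Markdown
As an optional sanity check we can read a couple of rows back from the files we just wrote and confirm they match the layout SageMaker's XGBoost expects: no header row, no index column, and, for train.csv and validation.csv, the label in the first column followed by the 5,000 bag-of-words counts.
###Code
# Optional check on the layout of the saved csv files.
train_head = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=2)
test_head = pd.read_csv(os.path.join(data_dir, 'test.csv'), header=None, nrows=2)
print(train_head.shape)  # expect (2, 5001): label column plus 5000 features
print(test_head.shape)   # expect (2, 5000): features only, no label
###Output
_____no_output_____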
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
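###Markdown
For comparison, roughly the same upload could be done with the lower-level boto3 client rather than the high-level `upload_data()` call used above. The cell below is an optional, illustrative sketch; it assumes the same default bucket and key prefix and does not need to be run.
###Code
# Illustrative sketch of a low-level upload with boto3, mirroring session.upload_data().
import boto3
s3_client = boto3.client('s3')
bucket = session.default_bucket()  # the same default bucket the high-level call uses
# One call per file; validation.csv and test.csv would be uploaded the same way.
s3_client.upload_file(os.path.join(data_dir, 'train.csv'), bucket, '{}/{}'.format(prefix, 'train.csv'))
###Output
_____no_output_____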
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background, so if we want to see the progress of our tuning job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
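###Markdown
Once the tuning job has finished, it can be helpful to see how the different hyperparameter combinations performed. The optional cell below is an illustrative sketch: it assumes the tuner's `analytics()` helper, whose dataframe should contain one row per training job along with the hyperparameters tried and a `FinalObjectiveValue` column.
###Code
# Illustrative sketch: summarize the results of the hyperparameter tuning job.
tuning_results = xgb_hyperparameter_tuner.analytics().dataframe()
tuning_results.sort_values('FinalObjectiveValue').head()  # lowest validation:rmse first
###Output
_____no_output_____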
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
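###Markdown
As an optional check, we can confirm which training job the tuner selected and where the attached estimator's model artifacts were written on S3.
###Code
# Optional check: inspect the winning training job and its model artifacts.
print(xgb_hyperparameter_tuner.best_training_job())  # name of the best training job
print(xgb_attached.model_data)                       # S3 location of the trained model artifacts
###Output
_____no_output_____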
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model and convert it to something a little more usable; in this case we want the sentiment to be either `1` (positive) or `0` (negative). We can then compare these predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
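###Markdown
Beyond the single accuracy number, it can be informative to see how the errors split between the two classes. The optional cell below prints a confusion matrix for the same predictions.
###Code
# Optional: rows are the true labels (0 = negative, 1 = positive), columns are the predicted labels.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, predictions))
###Output
_____no_output_____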
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a random tree model to predict the sentiment of a movie review. You may have seen a version of this example in a pervious lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return a unified training data, test data, training labels, test labets
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background, so if we want to see the progress of the tuning job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is to read in the output from our model and convert it to something a little more usable; in this case we want the sentiment to be either `1` (positive) or `0` (negative). We then compare these predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
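###Markdown
Beyond a single accuracy number, it can be helpful to see how the errors are split between the two classes. The short sketch below is an optional addition; it simply reuses `test_y` and `predictions` from the cell above together with scikit-learn's `confusion_matrix`.
###Code
from sklearn.metrics import confusion_matrix

# Rows correspond to the true labels (0 = negative, 1 = positive),
# columns to the predicted labels.
confusion_matrix(test_y, predictions)
###Output
_____no_output_____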
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson, although there it was done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer created above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
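###Markdown
To make the effect of `review_to_words` concrete, a small illustrative call is sketched below. The sample sentence is made up and not part of the dataset; the function should strip the HTML tag, lowercase the text, remove stopwords and stem the remaining words.
###Code
# Example only: apply the preprocessing helper to a single hand-written review.
review_to_words("This movie was <br /> GREAT! I loved every minute of it.")
###Output
_____no_output_____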
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload them to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
According to the documentation for the XGBoost algorithm in SageMaker, the saved datasets should contain no headers or index and, for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, and custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low-level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high-level functionality, in which certain choices have been made on the user's behalf. The low-level approach benefits from allowing the user a great deal of flexibility while the high-level approach makes development much quicker. For our purposes we will opt to use the high-level approach, although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background, so if we want to see the progress of the tuning job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is to read in the output from our model and convert it to something a little more usable; in this case we want the sentiment to be either `1` (positive) or `0` (negative). We then compare these predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
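###Markdown
The files we uploaded to S3 earlier in the notebook also linger after we are done. If (and only if) you no longer need them, a hedged sketch for removing just the objects under our prefix is shown below. It assumes the `session` and `prefix` variables are still in scope and that boto3 is available on the notebook instance; double-check the bucket and prefix before running anything destructive.
###Code
import boto3

# Delete only the objects stored under our prefix in the session's default bucket,
# leaving everything else in the bucket untouched.
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket(session.default_bucket())
bucket.objects.filter(Prefix=prefix).delete()
###Output
_____no_output_____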
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson, although there it was done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-10-02 11:55:31-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 20.8MB/s in 6.6s
2020-10-02 11:55:38 (12.2 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer created above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
# from sklearn.externals
import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
Read features from cache file: bow_features.pkl
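###Markdown
Before moving on, it can be reassuring to check the shape of the extracted features and peek at a few vocabulary entries. This is only an optional inspection of the objects returned above: `train_X` and `test_X` are the Bag-of-Words feature matrices and `vocabulary` maps each word to its column index.
###Code
# Each row is a review and each column a vocabulary word (5000 columns by default).
print(train_X.shape, test_X.shape)

# Show a handful of (word, column index) pairs from the fitted vocabulary.
print(sorted(vocabulary.items(), key=lambda item: item[1])[:10])
###Output
_____no_output_____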
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload them to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
According to the documentation for the XGBoost algorithm in SageMaker, the saved datasets should contain no headers or index and, for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, and custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low-level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high-level functionality, in which certain choices have been made on the user's behalf. The low-level approach benefits from allowing the user a great deal of flexibility while the high-level approach makes development much quicker. For our purposes we will opt to use the high-level approach, although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
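###Markdown
If we would rather not switch over to the S3 console, an optional check like the one sketched below lists the objects that ended up under our prefix. It assumes only that the `session` and `prefix` variables from the cell above are still in scope and that boto3 is available on the notebook instance.
###Code
import boto3

# List the objects stored under our prefix in the session's default bucket.
s3_client = boto3.client('s3')
response = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____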
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background, so if we want to see the progress of the tuning job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
...............................................................................................................................................................................................................................................................!
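###Markdown
Before attaching to the best model, it can be useful to look at how the individual training jobs performed. The sketch below is an optional addition: to the best of my knowledge the SageMaker Python SDK exposes an `analytics()` helper on the tuner that returns a `HyperparameterTuningJobAnalytics` object whose `dataframe()` method summarises each training job and its objective value; if your SDK version does not provide it, the tuning job can also be inspected from the SageMaker console.
###Code
# Summarise the completed tuning job (optional). The exact columns returned
# depend on the SageMaker SDK version, so we simply display the dataframe.
print(xgb_hyperparameter_tuner.best_training_job())
xgb_hyperparameter_tuner.analytics().dataframe()
###Output
_____no_output_____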
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
............................[32m2020-10-02T12:31:29.902:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34mArguments: serve[0m
[34m[2020-10-02 12:31:29 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-10-02 12:31:29 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-10-02 12:31:29 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-10-02 12:31:29 +0000] [37] [INFO] Booting worker with pid: 37[0m
[34m[2020-10-02 12:31:29 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-10-02:12:31:29:INFO] Model loaded successfully for worker : 37[0m
[34m[2020-10-02 12:31:29 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-10-02:12:31:29:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-10-02 12:31:29 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-10-02:12:31:30:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-10-02:12:31:30:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-10-02:12:31:30:INFO] Sniff delimiter as ','[0m
[34m[2020-10-02:12:31:30:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/372.6 KiB (2.9 MiB/s) with 1 file(s) remaining
Completed 372.6 KiB/372.6 KiB (4.0 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-956613579044/xgboost-201002-1205-003-ddba5be5-2020-10-02-12-26-55-467/test.csv.out to ../data/xgboost/test.csv.out
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable (in this case we want the sentiment to be either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2019-01-22 08:26:52-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 5.96MB/s in 15s
2019-01-22 08:27:08 (5.19 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lower case and replace non-alphanumeric characters with spaces
    words = text.split() # Split string into words
    words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer instance defined above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
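###Markdown
As a quick sanity check, it can be helpful to call `review_to_words` on a single made-up sentence to see what the preprocessing actually produces. This is only an illustrative sketch (the sentence below is invented, not taken from the dataset), and it assumes the cell above has been executed so that `review_to_words` is defined:
```python
# Illustrative only: a made-up review fragment, not part of the IMDb data.
sample = "This movie was <br /><br /> absolutely wonderful!"
print(review_to_words(sample))
# Expect a short list of lower-cased, stemmed tokens with the stopwords and
# HTML removed, roughly along the lines of ['movi', 'absolut', 'wonder'].
```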
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
        # NOTE: .toarray() converts the sparse matrices to dense NumPy arrays, which is what the rest of the notebook expects
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The SageMaker documentation for the XGBoost algorithm requires that the saved datasets contain no headers or index and that, for the training and validation data, the label appears first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
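To make the contrast concrete, the sketch below shows roughly what the low-level route could look like using `boto3` directly. This is illustrative only: it assumes the bucket and key prefix match the ones used in the next cell, and the high-level `upload_data()` calls below are what this notebook actually uses.
```python
# A rough sketch of the low-level alternative; not executed as part of this notebook.
import os
import boto3
import sagemaker

bucket = sagemaker.Session().default_bucket()  # same default bucket the high-level API uses
s3 = boto3.client('s3')

for filename in ['test.csv', 'validation.csv', 'train.csv']:
    # Upload ../data/xgboost/<filename> to s3://<bucket>/sentiment-xgboost/<filename>
    s3.upload_file(os.path.join('../data/xgboost', filename),
                   bucket,
                   'sentiment-xgboost/' + filename)
```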
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
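###Markdown
Once the tuning job has finished it can be instructive to look at how the individual training jobs performed. A minimal sketch, assuming the SageMaker SDK's tuner analytics helper behaves as expected in this environment (this cell is not part of the original notebook):
```python
# Illustrative only: summarize the completed tuning job as a pandas DataFrame.
results = xgb_hyperparameter_tuner.analytics().dataframe()

# Sort by the objective metric so the best performing job appears first
# (we asked the tuner to minimize validation:rmse).
print(results.sort_values('FinalObjectiveValue').head())

# The name of the best training job, which is what we attach to below.
print(xgb_hyperparameter_tuner.best_training_job())
```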
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background; in our case we are providing our model with CSV data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once, we need to specify how the data file should be split up. Since each line is a single entry in our data set, we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable (in this case we want the sentiment to be either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lower case and replace non-alphanumeric characters with spaces
    words = text.split() # Split string into words
    words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer instance defined above
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set, so our transformer can only use the training set to construct a representation.
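To see why this matters, here is a tiny toy sketch (illustrative only, with made-up token lists) showing that a vectorizer fit on the training documents simply ignores words it has never seen when transforming the test documents:
```python
from sklearn.feature_extraction.text import CountVectorizer

# Made-up, already-tokenized documents; the real pipeline below works the same way.
toy_train = [["great", "movi"], ["bad", "movi"]]
toy_test = [["great", "unseen", "word"]]

toy_vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vec.fit_transform(toy_train).toarray())  # counts over the vocabulary learned from the training docs only
print(sorted(toy_vec.vocabulary_.items()))         # [('bad', 0), ('great', 1), ('movi', 2)]
print(toy_vec.transform(toy_test).toarray())       # 'unseen' and 'word' contribute nothing
```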
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
        # NOTE: .toarray() converts the sparse matrices to dense NumPy arrays, which is what the rest of the notebook expects
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The SageMaker documentation for the XGBoost algorithm requires that the saved datasets contain no headers or index and that, for the training and validation data, the label appears first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background; in our case we are providing our model with CSV data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once, we need to specify how the data file should be split up. Since each line is a single entry in our data set, we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable (in this case we want the sentiment to be either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
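If you want to see how much space is actually in use before and after cleaning up, a quick check from within the notebook might look like the sketch below (standard shell commands; not part of the original notebook):
```python
# Overall disk usage of the filesystem the notebook lives on, plus the size of
# the directories this notebook created (errors are ignored if they were already removed).
!df -h .
!du -sh ../data ../cache 2>/dev/null
```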
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Using XGBoost in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there may be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Keep only letters/digits and convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word with the Porter stemmer
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker specifies that the saved datasets should contain no headers or index column and that, for the training and validation data, the label should occur first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels; instead, we will use them later to compare with our model output.
# Solution:
# The test data shouldn't contain the ground truth labels as they are what the model is
# trying to predict. We will end up using them afterward to compare the predictions to.
# pd.concat([test_y, test_X], axis=1).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach, although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a tuned XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = None
# Solution:
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
# Solution:
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
(TODO) Create the hyperparameter tunerNow that the base estimator has been set up we need to construct a hyperparameter tuner object which we will use to request SageMaker construct a hyperparameter tuning job.**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model so if you don't want the hyperparameter tuning job to take too long, make sure to not set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = None
# Solution:
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tunerNow that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
_____no_output_____
###Markdown
(TODO) Testing the modelNow that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead, we can perform inference on a large number of samples. An example of this in industry might be generating an end-of-month report. This method of inference is also useful to us as it means we can perform inference on our entire test set. Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = None
# Solution:
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
_____no_output_____
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = None
# Solution:
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is to read in the output from our model, convert it to something a little more usable (in this case we want the sentiment to be either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
rm: cannot remove ‘/bin’: Is a directory
rm: cannot remove ‘/boot’: Is a directory
rm: cannot remove ‘/cgroup’: Is a directory
rm: cannot remove ‘/dev’: Is a directory
rm: cannot remove ‘/etc’: Is a directory
rm: cannot remove ‘/home’: Is a directory
rm: cannot remove ‘/include’: Is a directory
rm: cannot remove ‘/lib’: Is a directory
rm: cannot remove ‘/lib64’: Is a directory
rm: cannot remove ‘/local’: Is a directory
rm: cannot remove ‘/lost+found’: Is a directory
rm: cannot remove ‘/media’: Is a directory
rm: cannot remove ‘/mnt’: Is a directory
rm: cannot remove ‘/opt’: Is a directory
rm: cannot remove ‘/proc’: Is a directory
rm: cannot remove ‘/root’: Is a directory
rm: cannot remove ‘/run’: Is a directory
rm: cannot remove ‘/sbin’: Is a directory
rm: cannot remove ‘/selinux’: Is a directory
rm: cannot remove ‘/srv’: Is a directory
rm: cannot remove ‘/sys’: Is a directory
rm: cannot remove ‘/tmp’: Is a directory
rm: cannot remove ‘/usr’: Is a directory
rm: cannot remove ‘/var’: Is a directory
rmdir: missing operand
Try 'rmdir --help' for more information.
rm: cannot remove ‘/bin’: Is a directory
rm: cannot remove ‘/boot’: Is a directory
rm: cannot remove ‘/cgroup’: Is a directory
rm: cannot remove ‘/dev’: Is a directory
rm: cannot remove ‘/etc’: Is a directory
rm: cannot remove ‘/home’: Is a directory
rm: cannot remove ‘/include’: Is a directory
rm: cannot remove ‘/lib’: Is a directory
rm: cannot remove ‘/lib64’: Is a directory
rm: cannot remove ‘/local’: Is a directory
rm: cannot remove ‘/lost+found’: Is a directory
rm: cannot remove ‘/media’: Is a directory
rm: cannot remove ‘/mnt’: Is a directory
rm: cannot remove ‘/opt’: Is a directory
rm: cannot remove ‘/proc’: Is a directory
rm: cannot remove ‘/root’: Is a directory
rm: cannot remove ‘/run’: Is a directory
rm: cannot remove ‘/sbin’: Is a directory
rm: cannot remove ‘/selinux’: Is a directory
rm: cannot remove ‘/srv’: Is a directory
rm: cannot remove ‘/sys’: Is a directory
rm: cannot remove ‘/tmp’: Is a directory
rm: cannot remove ‘/usr’: Is a directory
rm: cannot remove ‘/var’: Is a directory
rmdir: missing operand
Try 'rmdir --help' for more information.
|
code/.ipynb_checkpoints/bck_1-checkpoint.ipynb | ###Markdown
PTSD Model Inference with IRT Features [Center for Health Statistics](http://www.healthstats.org) [The Zero Knowledge Discovery Lab](http://zed.uchicago.edu)---
###Code
import ccx as cx
import pylab as plt
plt.style.use('ggplot')
import pickle
import pandas as pd
%matplotlib inline
datafile='../data/CAD-PTSDData.csv'
def processDATA(datafile):
'''
process data file
into training data X, target labels y
'''
Df=pd.read_csv(datafile)
X=Df.drop(['record_id','PTSDDx'],axis=1).values
y=Df.drop(['record_id'],axis=1).PTSDDx.values
[nsamples,nfeatures]=X.shape
return X,y,nfeatures,nsamples
X,y,nfeatures,nsamples=processDATA(datafile)
Perf23,Dperf23,Models23,Nitems23=cx.getSystem(X,y,max_depth=2,n_estimators=3)
print(Nitems23)
cx.PLOT(Dperf23,Nitems23,dn='23')
Perf32,Dperf32,Models32,Nitems32=cx.getSystem(X,y,max_depth=3,n_estimators=2)
cx.PLOT(Dperf32,Nitems32,dn='32')
cx.pickleModel(Models23,threshold=.88,filename='../model/model_2_3.pkl')
print("--")
cx.pickleModel(Models32,threshold=.895,filename='../model/model_3_2.pkl')
drawTrees(loadModel('model_2_3.pkl'),1)
FS23=getCoverage(load('model_2_3.pkl'))
FS32=getCoverage(load('model_3_2.pkl'))
###Output
_____no_output_____ |
Model backlog/Inference/134-tweet-inference-5fold-roberta-base-lr1e5-last.ipynb | ###Markdown
Dependencies
###Code
import json, glob
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
###Output
_____no_output_____
###Markdown
Load data
###Code
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
###Output
Test samples: 3534
###Markdown
Model parameters
###Code
input_base_path = '/kaggle/input/134roberta-base-last/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
# vocab_path = input_base_path + 'vocab.json'
# merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
vocab_path = base_path + 'roberta-base-vocab.json'
merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = "\n")
###Output
Models to predict:
/kaggle/input/134roberta-base-last/last_model_fold_1.h5
/kaggle/input/134roberta-base-last/last_model_fold_2.h5
/kaggle/input/134roberta-base-last/last_model_fold_3.h5
###Markdown
Tokenizer
###Code
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
###Output
_____no_output_____
###Markdown
Pre process
###Code
test['text'].fillna('', inplace=True)
test["text"] = test["text"].apply(lambda x: x.lower())
test["text"] = test["text"].apply(lambda x: x.strip())
x_test = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
###Output
_____no_output_____
###Markdown
Model
###Code
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
x_start = layers.Dropout(.1)(last_hidden_state)
x_start = layers.Dense(1)(x_start)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.Dropout(.1)(last_hidden_state)
x_end = layers.Dense(1)(x_end)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
return model
###Output
_____no_output_____
###Markdown
Make predictions
###Code
NUM_TEST_SAMPLES = len(test)
test_start_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
for model_path in model_path_list:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(x_test)
test_start_preds += test_preds[0]
test_end_preds += test_preds[1]
###Output
/kaggle/input/134roberta-base-last/last_model_fold_1.h5
/kaggle/input/134roberta-base-last/last_model_fold_2.h5
/kaggle/input/134roberta-base-last/last_model_fold_3.h5
###Markdown
Post process
###Code
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['text_len'] = test['text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test["end"].clip(0, test["text_len"], inplace=True)
test["start"].clip(0, test["end"], inplace=True)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
test["selected_text"].fillna(test["text"], inplace=True)
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
display(test.head(10))
###Output
_____no_output_____
###Markdown
Test set predictions
###Code
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test["selected_text"]
submission.to_csv('submission.csv', index=False)
submission.head(10)
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/building_production_ml_systems/labs/4b_streaming_data_inference.ipynb | ###Markdown
Working with Streaming DataLearning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data IntroductionIt can be useful to leverage real-time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery)These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` as an extra feature. This is our proxy for real-time traffic.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from google.api_core.client_options import ClientOptions
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` featureIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. Simulate Real Time Taxi DataSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. To execute the `iot_devices.py` script, launch a terminal and navigate to the `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs` directory. Then run the following two commands. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. Create a BigQuery table to collect the processed dataIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
###Output
_____no_output_____
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
###Output
_____no_output_____
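###Markdown
Before wiring up the Dataflow pipeline in the next section, it may help to picture what the publishing side looks like. The cell below is only a minimal, hedged sketch of the kind of Pub/Sub publisher loop that `iot_devices.py` implements; the payload fields, the `RIDE_INTERVAL_SECONDS` constant, and the project placeholder are illustrative assumptions, not the script's actual contents, and in practice you would run such a loop from a terminal rather than a notebook cell.
###Code
# Hedged sketch of a Pub/Sub publisher loop (assumptions noted above; not the actual iot_devices.py).
import json
import random
import time

from google.cloud import pubsub_v1

PROJECT_ID = "your-project-id"   # assumption: replace with your own project ID
TOPIC_ID = "taxi_rides"          # the topic name used throughout this lab
RIDE_INTERVAL_SECONDS = 5        # assumption: roughly matches the cadence described above

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

while True:
    # Each message represents one synthetic taxi trip event.
    message = {"ride_id": random.randint(0, 10**6), "timestamp": time.time()}
    publisher.publish(topic_path, json.dumps(message).encode("utf-8"))
    time.sleep(RIDE_INTERVAL_SECONDS)
###Output
_____no_output_____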
###Markdown
Launch Streaming Dataflow PipelineNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery**TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")BUCKET=$PROJECT_ID CHANGE AS NECESSARY python3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --temp_location=gs://$BUCKET/dataflow_streaming``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). Explore the data in the table After a few moments, you should also see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
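###Markdown
As a reference for the windowing **TODO** mentioned above, the five transforms could be sketched in Beam roughly as shown below. This is only a hedged outline with assumed step and helper names; the authoritative pipeline lives in `./taxicab_traffic/streaming_count.py`. Once your pipeline is running, the query in the next cell lets you watch the rows it writes.
###Code
# Hedged sketch of the streaming pipeline steps (assumed names; see ./taxicab_traffic/streaming_count.py).
import datetime

import apache_beam as beam
from apache_beam.transforms import window


def build_steps(pipeline, input_topic_path, table_spec):
    return (
        pipeline
        # 1. Read messages from the Pub/Sub topic.
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=input_topic_path)
        # 2. Sliding window: 5 minutes (300 s) long, recalculated every 15 seconds.
        | "Window" >> beam.WindowInto(window.SlidingWindows(size=300, period=15))
        # 3. Count the number of messages in each window.
        | "ToOnes" >> beam.Map(lambda msg: 1)
        | "CountPerWindow" >> beam.CombineGlobally(sum).without_defaults()
        # 4. Format each windowed count as a BigQuery row.
        | "Format" >> beam.Map(lambda count: {
            "trips_last_5min": count,
            "time": datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")})
        # 5. Write the rows to the traffic_realtime table.
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            table_spec, schema="trips_last_5min:INTEGER,time:TIMESTAMP")
    )
###Output
_____no_output_____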
###Code
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new dataIn the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the `4a_streaming_data_training.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction. **Exercise.** Complete the code in the function below. Write a SQL query that will return the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
###Output
_____no_output_____
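###Markdown
If you want to sanity-check your answer to the exercise above, one possible completion is sketched below: it simply orders `traffic_realtime` by `time` and keeps the most recent row. Treat it as a hedged reference implementation rather than the official solution.
###Code
# One possible completion of add_traffic_last_5min (a sketch, not the official solution).
from google.cloud import bigquery

def add_traffic_last_5min(instance):
    bq = bigquery.Client()
    query_string = """
    SELECT
      *
    FROM
      `taxifare.traffic_realtime`
    ORDER BY
      time DESC
    LIMIT 1
    """
    trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
    instance['traffic_last_5min'] = int(trips)  # attach the latest 5-minute trip count as a feature
    return instance
###Output
_____no_output_____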
###Markdown
The `traffic_realtime` table is updated in real time using Cloud Pub/Sub and Dataflow, so if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time.
###Code
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
###Output
_____no_output_____
###Markdown
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well. **Exercise.** Complete the code below to call prediction on an instance incorporating realtime traffic info. You should- use the function `add_traffic_last_5min` to add the most recent realtime traffic data to the prediction instance- call prediction on your model for this realtime instance and save the result as a variable called `response`- parse the JSON of `response` to print the predicted taxifare cost
###Code
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
endpoint = f'https://{REGION}-ml.googleapis.com'
client_options = ClientOptions(api_endpoint=endpoint)
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False, client_options=client_options)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
###Output
_____no_output_____
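###Markdown
For reference, a hedged sketch of one way to complete the prediction cell above is shown below. It reuses the `service`, `name`, and `instance` objects defined in that cell, and assumes the response JSON carries a top-level `predictions` list; adapt the parsing if your deployed model returns its output under a different key.
###Code
# Sketch of the completed prediction call (assumed response format; not the official solution).
instance = add_traffic_last_5min(instance)  # enrich the instance with the latest traffic feature

response = service.projects().predict(
    name=name,
    body={'instances': [instance]}
).execute()

if 'error' in response:
    raise RuntimeError(response['error'])
else:
    # 'predictions' is the usual top-level field in AI Platform online prediction responses.
    print(response['predictions'])
###Output
_____no_output_____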
###Markdown
Working with Streaming DataLearning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data IntroductionIt can be useful to leverage real-time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery)These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` as an extra feature. This is our proxy for real-time traffic.
###Code
!pip install --user apache-beam[gcp]
###Output
_____no_output_____
###Markdown
**Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
###Code
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` featureIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. Simulate Real Time Taxi DataSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. To execute the `iot_devices.py` script, launch a terminal and navigate to the `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs` directory. Then run the following two commands. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. Create a BigQuery table to collect the processed dataIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
###Output
_____no_output_____
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
###Output
_____no_output_____
###Markdown
Launch Streaming Dataflow PipelineNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery**TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")BUCKET=$PROJECT_ID CHANGE AS NECESSARY python3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --temp_location=gs://$BUCKET/dataflow_streaming``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). Explore the data in the table After a few moments, you should also see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
###Code
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new dataIn the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the `train.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction. **Exercise.** Complete the code in the function below. Write a SQL query that will return the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
###Output
_____no_output_____
###Markdown
The `traffic_realtime` table is updated in real time using Cloud Pub/Sub and Dataflow, so if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time.
###Code
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
###Output
_____no_output_____
###Markdown
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well. **Exercise.** Complete the code below to call prediction on an instance incorporating realtime traffic info. You should- use the function `add_traffic_last_5min` to add the most recent realtime traffic data to the prediction instance- call prediction on your model for this realtime instance and save the result as a variable called `response`- parse the JSON of `response` to print the predicted taxifare cost
###Code
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
###Output
_____no_output_____
###Markdown
Working with Streaming DataLearning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data IntroductionIt can be useful to leverage real-time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery)These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` as an extra feature. This is our proxy for real-time traffic.
###Code
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` featureIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `train.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. Simulate Real Time Taxi DataSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. To execute the iot_devices.py script, launch a terminal and navigate to the `training-data-analyst/courses/machine_learning/production_ml` directory. Then run the following two commands. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. Create a BigQuery table to collect the processed dataIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
###Output
_____no_output_____
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
    bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
###Output
_____no_output_____
###Markdown
Launch Streaming Dataflow PipelineNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery**TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")BUCKET=$PROJECT_ID CHANGE AS NECESSARY python3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --temp_location=gs://$BUCKET/dataflow_streaming``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). Explore the data in the table After a few moments, you should also see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
###Code
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new dataIn the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the `train.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction. **Exercise.** Complete the code in the function below. Write a SQL query that will return the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
###Output
_____no_output_____
###Markdown
The `traffic_realtime` table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time.
###Code
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
###Output
_____no_output_____
###Markdown
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well. **Exercise.** Complete the code below to call prediction on an instance incorporating realtime traffic info. You should- use the function `add_traffic_last_5min` to add the most recent realtime traffic data to the prediction instance- call prediction on your model for this realtime instance and save the result as a variable called `response`- parse the json of `response` to print the predicted taxifare cost
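For reference, a call to the AI Platform online prediction API with the Google API Python client generally has the shape sketched below. Treat it as a hint rather than the official solution; it assumes `service`, `name` and `instance` are defined as in the following code cell, and that `add_traffic_last_5min` has been completed above.

```python
# Sketch of an online prediction request (assumes `service`, `name`, `instance`
# and a completed `add_traffic_last_5min` from the cells around this one).
instance_with_traffic = add_traffic_last_5min(instance)
response = service.projects().predict(
    name=name,
    body={'instances': [instance_with_traffic]}
).execute()

if 'error' in response:
    raise RuntimeError(response['error'])
print(response['predictions'])
```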
###Code
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
###Output
_____no_output_____
###Markdown
Working with Streaming DataLearning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data IntroductionIt can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery)These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` data as an additional feature. This is our proxy for real-time traffic.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
!pip install --user apache-beam[gcp]
###Output
_____no_output_____
###Markdown
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. **Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
###Code
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` featureIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. Simulate Real Time Taxi DataSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. To execute the `iot_devices.py` script, launch a terminal and navigate to the `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs` directory. Then run the following two commands. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. Create a BigQuery table to collect the processed dataIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
###Output
_____no_output_____
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
###Output
_____no_output_____
###Markdown
Launch Streaming Dataflow PipelineNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery**TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")BUCKET=$PROJECT_ID CHANGE AS NECESSARY python3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --temp_location=gs://$BUCKET/dataflow_streaming``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). Explore the data in the table After a few moments, you should also see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
###Code
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new dataIn the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the `4a_streaming_data_training.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction. **Exercise.** Complete the code in the function below. Write a SQL query that will return the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
###Output
_____no_output_____
###Markdown
The `traffic_realtime` table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time.
###Code
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
###Output
_____no_output_____
###Markdown
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well. **Exercise.** Complete the code below to call prediction on an instance incorporating realtime traffic info. You should- use the function `add_traffic_last_5min` to add the most recent realtime traffic data to the prediction instance- call prediction on your model for this realtime instance and save the result as a variable called `response`- parse the json of `response` to print the predicted taxifare cost
###Code
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
###Output
_____no_output_____
###Markdown
Working with Streaming DataLearning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data IntroductionIt can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery)These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` data as an additional feature. This is our proxy for real-time traffic.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from google.api_core.client_options import ClientOptions
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` featureIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. Simulate Real Time Taxi DataSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. To execute the `iot_devices.py` script, launch a terminal and navigate to the `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs` directory. Then run the following two commands. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. Create a BigQuery table to collect the processed dataIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
###Output
_____no_output_____
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
###Output
_____no_output_____
###Markdown
Launch Streaming Dataflow PipelineNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery**TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bashPROJECT_ID=$(gcloud config list project --format "value(core.project)")BUCKET=$PROJECT_ID CHANGE AS NECESSARY python3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --temp_location=gs://$BUCKET/dataflow_streaming``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). Explore the data in the table After a few moments, you should also see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
###Code
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new dataIn the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the `4a_streaming_data_training.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction. **Exercise.** Complete the code in the function below. Write a SQL query that will return the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
###Output
_____no_output_____
###Markdown
The `traffic_realtime` table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time.
###Code
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
###Output
_____no_output_____
###Markdown
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well. **Exercise.** Complete the code below to call prediction on an instance incorporating realtime traffic info. You should- use the function `add_traffic_last_5min` to add the most recent realtime traffic data to the prediction instance- call prediction on your model for this realtime instance and save the result as a variable called `response`- parse the json of `response` to print the predicted taxifare cost
###Code
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
endpoint = f'https://{REGION}-ml.googleapis.com'
client_options = ClientOptions(api_endpoint=endpoint)
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False, client_options=client_options)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
###Output
_____no_output_____ |
Fourier example & Nyquist/Fourier_basics and Nyquist.ipynb | ###Markdown
Fourier Transform This lesson is a brief introduction to the Fourier Transform. The Fourier Transform is an extremely deep mathematical concept that ties into many different disciplines. Still, these next couple of lessons will teach you how to use it as a tool to accomplish many practical tasks. We will not be going into any theory at all, and in fact, you won't see a single equation in these lessons. The goal here is to convey some intuition about this concept, and while a strong theoretical understanding is important, it's outside the scope of this class. Let's start with our traditional imports.
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
#import mpld3
import scipy as sp
from scipy import io
%matplotlib inline
###Output
_____no_output_____
###Markdown
Adding Sinusoids Let's make some sinusoids.
###Code
fs = 125
ts = np.arange(0, 10, 1/fs)
s2 = np.sin(2 * np.pi * 2 * ts)
s3 = np.sin(2 * np.pi * 3 * ts)
###Output
_____no_output_____
###Markdown
Let's plot the 2 sinusoids and their sum.
###Code
fig=plt.figure(figsize=(12, 8))
plt.subplot(3, 1, 1)
plt.plot(ts, s2)
plt.title('sinusoid 1')
plt.grid()
plt.subplot(3, 1, 2)
plt.plot(ts, s3)
plt.grid()
plt.title('sinusoid 2')
plt.subplot(3, 1, 3)
plt.plot(ts, s2 + s3)
plt.grid()
plt.title('Sum of sinusoid 1+2');
fig.tight_layout()
###Output
_____no_output_____
###Markdown
The sum of two sinusoids is simply the elementwise sum at each time point. However, this means that they must be sampled synchronously. If they are not, you need to interpolate one onto the other. Signal Reconstruction Let's now look at a real-world accelerometer signal collected at the wrist during walking.
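Before loading that signal, here is a tiny, self-contained illustration of the interpolation point above: `np.interp` resamples one sinusoid onto the other's time grid so the two can be summed elementwise (the sampling rates here are toy values chosen for the example).

```python
# Toy example: signals sampled at different rates must be brought onto a common
# time grid (here by linear interpolation) before they can be summed.
fs_a, fs_b = 125, 100
ts_a = np.arange(0, 1, 1 / fs_a)
ts_b = np.arange(0, 1, 1 / fs_b)

sig_a = np.sin(2 * np.pi * 2 * ts_a)                              # already on the 125 Hz grid
sig_b_on_a = np.interp(ts_a, ts_b, np.sin(2 * np.pi * 3 * ts_b))  # resampled onto ts_a
combined = sig_a + sig_b_on_a
```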
###Code
sig = sp.io.loadmat('DATA_11_TYPE02.mat')['sig']
pd.DataFrame({'signal':list(range(len(sig))), 'values':list(sig)}).style.hide_index()
fs = 125
seg = sig[3][9000:10000] #we will take a segment of signal 3
seg -= np.mean(seg)
plt.figure(figsize=(12, 8))
plt.title('Real-world accelerometer signal from wrist during walking')
plt.plot(seg);
###Output
_____no_output_____
###Markdown
Fourier Transform DemoThe Fourier Transform tells us that any signal can be reconstructed by summing sinusoids of various frequencies, amplitudes, and phase shifts together.Let's see this in action.
###Code
# compute the frequency bin centers
freqs = np.fft.rfftfreq(len(seg), 1/fs)#rfft stands for real FFT and means we only compute positive frequencies
# compute the Fourier coefficients for the positive frequencies
rfft = np.fft.rfft(seg)
# reorder the frequency bin centers from max to min amplitudes (most important frequencies)
order = np.argsort(np.abs(rfft))[::-1]
most_imp_freqs = list(zip(freqs[order], rfft[order]))
np.abs(5+1j*5)
"""
To run the code below you need to set the matplotlib backend for your OS
Windows: QT, tkinter
Mac: osx
Linux: tkinter, QT
"""
%matplotlib QT
plot=True
seg_rec = np.zeros(len(seg), dtype='complex128')
ts = np.arange(len(seg)) / len(seg)
n = 0
plt.clf()
fig = plt.gcf()
ax10 = fig.add_subplot(3, 1, 1)
ax11 = fig.add_subplot(3, 1, 2)
ax12 = fig.add_subplot(3, 1, 3)
ax10.plot(seg)
ax11.plot(seg_rec, 'g')
ax12.plot(seg)
ax12.plot(seg_rec, 'g')
fig.suptitle('0 sinusoids')
plt.pause(10)
for f, a in most_imp_freqs:
seg_rec += a / len(seg) * np.exp(2j * np.pi * f / (fs / len(seg)) * ts)
seg_rec += np.conj(a) / len(seg) * np.exp(2j * np.pi * -f / (fs / len(seg)) * ts)
n += 1
if plot:
ax11.clear()
ax11.plot(seg_rec, 'g')
ax12.lines.pop()
ax12.plot(seg_rec, 'g')
fig.suptitle('{} sinusoids'.format(n))
if n == 1:
plt.pause(2)
elif n < 5:
plt.pause(1)
elif n < 15:
plt.pause(0.5)
elif n < 120:
plt.pause(0.005)
else:
break
###Output
C:\Users\MRgarciaE\Anaconda3\envs\py36\lib\site-packages\numpy\core\_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
###Markdown
This is basically a demonstration that Fourier is not lying to us when he says that any signal can be recreated by a sum of sinusoids. The frequency of the specific sinusoids that make up a signal can tell us important information that we can use to build algorithms to process that signal. Nyquist Frequency Now that we know that signals are made up of different frequency components, we can learn about a new property of sampling theory -- the **Nyquist frequency**. The Nyquist frequency tells us that when we sample an analog signal, based on the frequency components it's made up of, there are restrictions on how fast we need to sample that signal.
###Code
"""
To run the code below you need to set the matplotlib backend for your OS
Windows: QT, tkinter
Mac: osx
Linux: tkinter, QT
"""
%matplotlib QT
###Output
_____no_output_____
###Markdown
Let's see a graphical explanation of this.
###Code
def PlotSinSample(ax, fsin, cfs, fs, drop_bg=False):
cts = np.arange(0, 5, 1/cfs)
cs0 = np.cos(2 * np.pi * fsin * cts)
ts = np.arange(0, 5, 1/fs)
s0 = np.cos(2 * np.pi * fsin * ts)
ax.clear()
if not drop_bg:
ax.plot(cts, cs0)
ax.plot(ts, s0, 'b.', ms=10)
ax.grid()
ax.set_title('{:0.2f} Hz'.format(fsin))
###Output
_____no_output_____
###Markdown
In this demo, you can see what a digitized version of the analog signal would look like at various sampling rates. As we decrease the sampling rate, there will come a point where we only have two samples per period of the sine wave. If we sample any slower than this, the sine wave will look the same as a lower frequency wave and we won't be able to know the true frequency of the wave when all we have is the digitized signal.The Nyquist frequency tells us the maximum frequency analog signal we can sample is half of our sampling rate. If we try to sample a signal that has higher frequency components than this, we will see **aliasing**, which means those high-frequency components will show up at mirrored lower frequencies.
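A quick numeric check of that claim, using the demo's 10 Hz sampling rate: a 6 Hz cosine sampled at 10 Hz produces exactly the same samples as a 4 Hz cosine, which is why the demo pairs those two frequencies.

```python
# 6 Hz aliases to 10 - 6 = 4 Hz when sampled at 10 Hz: the digitized samples match.
fs = 10
ts = np.arange(0, 1, 1 / fs)
s_6hz = np.cos(2 * np.pi * 6 * ts)
s_4hz = np.cos(2 * np.pi * 4 * ts)
print(np.allclose(s_6hz, s_4hz))  # True
```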
###Code
plt.clf()
fig = plt.gcf()
ax = fig.add_subplot(1, 1, 1)
fsins = np.arange(1, 5.1, 0.2)
for fsin in fsins:
PlotSinSample(ax, fsin, 150, 10)
plt.draw()
while not plt.waitforbuttonpress():
pass
fig.clf()
ax0 = fig.add_subplot(2, 1, 1)
ax1 = fig.add_subplot(2, 1, 2)
while True:
PlotSinSample(ax0, 6, 150, 10)
PlotSinSample(ax1, 4, 150, 10)
plt.draw()
if not plt.waitforbuttonpress():
break
PlotSinSample(ax0, 6, 150, 10, drop_bg=True)
PlotSinSample(ax1, 4, 150, 10, drop_bg=True)
plt.draw()
if not plt.waitforbuttonpress():
break
fig.clf()
ax0 = fig.add_subplot(2, 1, 1)
ax1 = fig.add_subplot(2, 1, 2)
fsins = np.arange(5, 10.1, 0.2)
for fsin in fsins:
PlotSinSample(ax0, fsin, 150, 10)
PlotSinSample(ax1, 10 - fsin, 150, 10)
plt.draw()
while not plt.waitforbuttonpress():
pass
###Output
_____no_output_____ |
magnolia/sandbox/notebooks/source-separation/deep_clustering/DeepClustering_training.ipynb | ###Markdown
Training the deep clustering monaural source separation modelThis notebook contains a detailed example of how to train the deep clustering source separation model. Filepaths to load training data must be filled in to run this notebook.
###Code
# Generic imports
import sys
import time
import numpy as np
import tensorflow as tf
# Plotting imports
import IPython
from IPython.display import Audio
from matplotlib import pyplot as plt
fig_size = [0,0]
fig_size[0] = 8
fig_size[1] = 4
plt.rcParams["figure.figsize"] = fig_size
# Import the deep clustering separation model
from magnolia.dnnseparate.deep_clustering_model import DeepClusteringModel
# Import utilities for using the model
from magnolia.utils.clustering_utils import clustering_separate, get_cluster_masks, process_signal
from magnolia.iterate.supervised_iterator import SupervisedIterator, SupervisedMixer
from magnolia.iterate.hdf5_iterator import SplitsIterator
###Output
_____no_output_____
###Markdown
Hyperparameters numsources : Number of sources used in training mixes batchsize : Number of examples per batch used in training datashape : (Time, Frequency) shape of the examples within each batch
###Code
numsources = 2
batchsize = 256
fft_size = 512
datashape = (40, fft_size//2 + 1)
###Output
_____no_output_____
###Markdown
Set up data I/OFor training, only the training dataset is needed. The other two datasets can be used for evaluation. The (training set, or in set) speaker keys have been separated according to speaker gender.
###Code
libritrain = "Path to training dataset"
with open('Magnolia/data/librispeech/authors/train-clean-100-F.txt','r') as speakers:
keys = speakers.read().splitlines()
speaker_keys = keys[:]
in_set_F = keys[:]
with open('Magnolia/data/librispeech/authors/train-clean-100-M.txt','r') as speakers:
keys = speakers.read().splitlines()
speaker_keys += keys
in_set_M = keys[:]
###Output
_____no_output_____
###Markdown
Create a mixer that iterates over examples from the training set. SplitsIterator handles (deterministically) splitting the training set into three partitions. 80% of the training data is used to train the model, 10% is used to evaluate the training progress on unseen examples, and the last 10% is reserved to evaluate the performance of the model on unseen examples from speakers in the training set.SupervisedMixer handles the mixing of training examples. It outputs the model input (X), the output labels (Y) and the speakerIDs (I) of the speakers who are loudest in each time-frequency bin. Y must be reshaped and transposed so that it has shape (batchsize,time,frequency,numspeakers).Scaling of the mixtures to create input batches for the model is done here as well.
###Code
# Create the splits iterator
siterator = SplitsIterator([0.8,0.1,0.1], libritrain, speaker_keys=speaker_keys, shape=datashape, return_key=True)
siterator.set_split(0)
# Create the data mixer
mixer = SupervisedMixer([siterator,siterator], shape=datashape,
mix_method='add', diffseed=True)
###Output
_____no_output_____
###Markdown
Since the model expects a different form of label than Lab41's model, some helper functions can be used to convert the output of the mixer into what the model expects
###Code
def gen_train_batch(mixer, batch_size):
"""
Get a batch from the mixer
"""
batch = mixer.get_batch(batch_size, out_TF=None)
return batch
def gen_batch(mixer, batch_size):
    """
    Create a batch from the mixer of the specified size
    """
    # Get a batch of mixed examples
    batch = gen_train_batch(mixer, batch_size)
    # Scale the input spectrograms
    X = np.sqrt(np.abs(batch[0]))
    X = (X - X.min())/(X.max() - X.min())
    # Keep the phases of the mixed spectrograms (not used for training here, but
    # returned so callers that need them for signal reconstruction can use them)
    phases = np.angle(batch[0])
    # Convert the labels given by the mixer to the form the deep clustering model expects
    T, F = datashape
    y = 1/2*(batch[1] + 1)
    y = y.reshape(batch_size, 2, T, F)
    y = y.transpose(0,2,3,1)
    return X, y, phases
###Output
_____no_output_____
###Markdown
Generate some validation dataTo generate a batch from the validation split of the training dataset, the splits iterator can have the split set to the validation split and the mixer can be used as before.
###Code
# Set the current split to the validation split
siterator.set_split(1)
# Generate a batch of validation data
X_vala, y_vala, phases = gen_batch(mixer, batchsize)
###Output
_____no_output_____
###Markdown
Create an instance of the deep clustering modelHere an untrained model instance is created, and its variables are initialized
###Code
model = DeepClusteringModel()
model.initialize()
###Output
_____no_output_____
###Markdown
Variables needed to track the training progress of the modelDuring training, the number of iterations (number of processed batches) is tracked, along with the mean cost on examples from the training data and from the validation data. The last iteration that the model was saved on can also be tracked.
###Code
iterations = []
costs = []
t_costs = []
v_costs = []
last_saved = 0
###Output
_____no_output_____
###Markdown
Training loopHere the model is iteratively trained on batches generated by the mixer. The model is saved every time the validation cost reaches a new minimum value. The training can be configured to stop if the model has not been saved after a specified number of iterations have elapsed since the previous save. Plots of the training cost and the validation set are created as well.
###Code
# Number of iterations to train for (should be large)
num_iterations = 1000000
# Threshold for stopping if the model hasn't improved for this many consecutive iterations
stop_threshold = 10000
# Find the number of iterations already elapsed (Useful for resuming training)
if len(iterations) == 0:
start = 0
else:
start = iterations[-1]
# Ensure that the iterator is set to iterate over the training split
siterator.set_split(0)
# Iterate over training batches
for i in range(num_iterations):
# Generate a batch of training data
Xdata, Ydata, _ = gen_batch(mixer, batchsize)
# Train the model on one batch and get the cost
    c = model.train_on_batch(Xdata, Ydata)
# Store the training cost
costs.append(c)
# Every 10 batches, evaluate the model on the validation data and plot the cost curves
if (i+1) % 10 == 0:
IPython.display.clear_output(wait=True)
# Get the cost on the validation batch
c_v = model.get_cost(X_vala, y_vala)
# Check if the validation cost is below the minimum validation cost, and if so, save it.
if len(v_costs)> 0 and c_v < min(v_costs) and len(iterations) > 0:
print("Saving the model because c_v is", min(v_costs) - c_v, "below the old min.")
# Save the model to the specified path
model.save("Path to saved model")
# Record the iteraion that the model was last saved on
last_saved = iterations[-1]
# Store the training cost and the validation cost
t_costs.append(np.mean(costs))
v_costs.append(c_v)
# Store the current iteration number
iterations.append(i + 1 + start)
# Compute scale quantities for plotting
length = len(iterations)
cutoff = int(0.5*length)
lowline = [min(v_costs)]*len(iterations)
# Generate the plots and show them
f, (ax1, ax2) = plt.subplots(2,1)
ax1.plot(iterations,t_costs)
ax1.plot(iterations,v_costs)
ax1.plot(iterations,lowline)
y_u = max(max(t_costs[cutoff:]),max(v_costs[cutoff:]))
y_l = min(min(t_costs[cutoff:]),min(v_costs[cutoff:]))
ax2.set_ylim(y_l,y_u)
ax2.plot(iterations[cutoff:], t_costs[cutoff:])
ax2.plot(iterations[cutoff:], v_costs[cutoff:])
ax2.plot(iterations[cutoff:], lowline[cutoff:])
plt.show()
print("Cost on iteration", iterations[-1], "is", c_v)
print("Last saved",iterations[-1]-last_saved,"iterations ago.")
# Reset the cost over the last 10 iterations
costs = []
# Stop training if the number of iterations since the last save point exceeds the threshold
if iterations[-1]-last_saved > stop_threshold:
print("Done!")
break
###Output
_____no_output_____ |
dev/14_callback_schedule.ipynb | ###Markdown
Hyperparam schedule> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into one taking `(start, end)` and returning a function that depends on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and that has the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
if isinstance(start, Iterable):
return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
fs = combined_cos(0.25, [0.25,0.5], [0.5,1.], [0.,0.])
test_eq(len(fs), 2)
f,g = fs
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.25, 0.33638, 0.5, 0.375, 0.])
test_close([g(0.), g(0.1), g(0.25), g(0.5), g(1.)], [0.5, 0.67275, 1., 0.75, 0.])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for pname,fs in self.scheds.items():
fs = L(fs)
if len(fs)==1: fs = fs*len(self.opt.param_groups)
for f,h in zip(fs,self.opt.hypers): h[pname] = f(pct)
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
lr_max = self.lr if lr_max is None else lr_max
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). This schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max`, then to `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates), and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner()
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=np.array([1e-3,1e-2]))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
lr_max = lr_max or self.lr
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
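As a quick sanity check of the epoch count used in the test below: with `cycle_len=1`, `cycle_mult=2` and 3 cycles, the cycles last 1, 2 and 4 epochs, so training runs for 7 epochs in total, matching the formula used inside `fit_sgdr`.

```python
cycle_len, cycle_mult, n_cycles = 1, 2, 3
n_epoch = cycle_len * (cycle_mult**n_cycles - 1) // (cycle_mult - 1)
assert n_epoch == 1 + 2 + 4 == 7
```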
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self):
raise CancelValidException()
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one",
"begin_validate": "Skip the validation part of training"}
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses vs the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
learn = synth_learner()
learn.lr_find()
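#Sketch: trim the last, usually diverging, points before reading off a learning rate
#(this just reuses the `skip_end` argument of `plot_lr_find` defined above).
learn.recorder.plot_lr_find(skip_end=5)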
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from local.test_utils import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` to something taking `(start, end)` and returning a function depending on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
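#Sketch: the same decorator can define a custom annealing with the (start, end) -> f(pos) interface
#(the name `SchedSqrt` is only for illustration).
@annealer
def SchedSqrt(start, end, pos): return start + (end-start) * pos**0.5
f = SchedSqrt(0., 1.)
test_close([f(0.), f(0.25), f(1.)], [0., 0.5, 1.])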
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
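#Sketch: with `power=1`, SchedPoly reduces to a linear schedule.
test_close([SchedPoly(0, 2, 1)(o) for o in [0., 0.5, 1.]], [0., 1., 2.])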
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
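#Sketch: the 50/50 split used by `combined_cos` below, here with a linear warmup then a cosine decay
#(values chosen only for illustration).
g = combine_scheds([0.5,0.5], [SchedLin(0., 1.), SchedCos(1., 0.)])
test_close([g(0.), g(0.25), g(0.5), g(0.75), g(1.)], [0., 0.5, 1., 0.5, 0.])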
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or a collection of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
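#Sketch: several hyper-parameters can be scheduled at once, one key per hyper
#(this assumes the synthetic optimizer exposes a `mom` hyper, as the 1cycle tests below rely on).
learn = synth_learner()
learn.fit(1, cbs=ParamScheduler({'lr': SchedCos(1e-3, 1e-5), 'mom': SchedLin(0.95, 0.85)}))
test_close(learn.recorder.hps['mom'][0], 0.95)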
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_flat_cos(self:Learner, n_epoch, lr=None, div_final=1e5, pct_start=0.75, wd=defaults.wd,
cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` at flat `lr` before a cosine annealing."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr is None else lr)
lr = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr, lr, lr/div_final)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
learn = synth_learner()
learn.fit_flat_cos(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
tmp_f = self.path/self.model_dir/'_tmp.pth'
if tmp_f.exists():
self.learn.load('_tmp')
os.remove(tmp_f)
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=5):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True, show_plot=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
if show_plot: self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses vs the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` to something taking `(start, end)` and returning a function depending on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or a collection of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
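#Sketch: `moms` gives the (start, middle, end) of the momentum schedule, so a flatter dip can be
#requested directly (the values here are only for illustration).
learn = synth_learner(lr=1e-2)
learn.fit_one_cycle(1, moms=(0.9, 0.85, 0.9))
test_close(learn.recorder.hps['mom'][0], 0.9)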
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
learn = synth_learner()
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses vs the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
learn = synth_learner()
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 02_script.ipynb.
Converted 03_dataloader.ipynb.
Converted 04_transform.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` to something taking `(start, end)` and returning a function depending on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
if is_listy(start):
return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or a collection of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
fs = combined_cos(0.25, [0.25,0.5], [0.5,1.], [0.,0.])
test_eq(len(fs), 2)
f,g = fs
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.25, 0.33638, 0.5, 0.375, 0.])
test_close([g(0.), g(0.1), g(0.25), g(0.5), g(1.)], [0.5, 0.67275, 1., 0.75, 0.])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for pname,fs in self.scheds.items():
fs = L(fs)
if len(fs)==1: fs = fs*len(self.opt.param_groups)
for f,h in zip(fs,self.opt.hypers): h[pname] = f(pct)
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
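#Sketch of the list-of-schedulers case mentioned above, with one scheduler per parameter group
#(the splitter and the values are only for illustration).
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit(1, cbs=ParamScheduler({'lr': [SchedLin(1e-4,1e-3), SchedLin(1e-3,1e-2)]}))
test_close(learn.recorder.hps['lr'][0], 1e-3)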
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
lr_max = lr_max or self.lr
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner()
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
lr_max = lr_max or self.lr
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self):
raise CancelValidException()
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one",
"begin_validate": "Skip the validation part of training"}
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses vs the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
learn = synth_learner()
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from local.test_utils import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` to something taking `(start, end)` and returning a function depending on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or a collection of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` for each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
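# (Sketch, added for illustration) The epoch count follows a geometric series,
# cycle_len * (cycle_mult**n_cycles - 1)//(cycle_mult - 1): for 3 cycles of base
# length 1 with cycle_mult=2 that is 1 + 2 + 4 = 7, matching the test above.
test_eq(1 * (2**3 - 1)//(2 - 1), 7)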
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates on a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
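# (Sketch, not part of the library API) One way to apply the "one tenth of the minimum"
# heuristic described above, assuming `recorder.lrs` and `recorder.losses` were filled
# in by a previous `lr_find` run:
def suggested_lr(recorder):
    losses = [float(o) for o in recorder.losses]
    idx = losses.index(min(losses))
    return recorder.lrs[idx] / 10
# e.g. lr_guess = suggested_lr(learn.recorder)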
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into something taking `(start, end)` and returning a function that depends on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
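# (Illustrative sketch, not in the original notebook) The decorator works for any
# f(start, end, pos); e.g. a hypothetical quadratic ease-in schedule:
@annealer
def SchedQuad(start, end, pos): return start + (end-start) * pos**2
test_close([SchedQuad(0, 2)(o) for o in [0., 0.5, 1.]], [0., 0.5, 2.])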
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and has the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
if is_listy(start):
return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, and `1-pct` for the `middle` to `end` part. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
fs = combined_cos(0.25, [0.25,0.5], [0.5,1.], [0.,0.])
test_eq(len(fs), 2)
f,g = fs
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.25, 0.33638, 0.5, 0.375, 0.])
test_close([g(0.), g(0.1), g(0.25), g(0.5), g(1.)], [0.5, 0.67275, 1., 0.75, 0.])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for pname,fs in self.scheds.items():
fs = L(fs)
if len(fs)==1: fs = fs*len(self.opt.param_groups)
for f,h in zip(fs,self.opt.hypers): h[pname] = f(pct)
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.data.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
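# (Sketch, added for illustration) A one-element list behaves like a bare scheduler:
# it is broadcast to every parameter group of the optimizer.
learn = synth_learner()
learn.fit(1, cbs=ParamScheduler({'lr': [SchedLin(1e-3, 1e-2)]}))
n = len(learn.data.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])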
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
lr_max = lr_max or self.lr
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner()
xb,yb = learn.data.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
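# (Sketch, added for illustration) The schedules fit_one_cycle builds, sampled at the
# phase boundary pct_start=0.25: lr peaks at lr_max while momentum bottoms out.
lr_sched  = combined_cos(0.25, 1e-2/25, 1e-2, 1e-2/1e5)
mom_sched = combined_cos(0.25, 0.95, 0.85, 0.95)
test_close([lr_sched(0.25), mom_sched(0.25)], [1e-2, 0.85])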
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
lr_max = lr_max or self.lr
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` for each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.data.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self):
raise CancelValidException()
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one",
"begin_validate": "Skip the validation part of training"}
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.data.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates on a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
learn = synth_learner()
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_script.ipynb.
Converted 02_transforms.ipynb.
Converted 03_pipeline.ipynb.
Converted 04_data_external.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_rect_augment.ipynb.
Converted 11_layers.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 20_metrics.ipynb.
Converted 30_text_core.ipynb.
Converted 60_vision_models_xresnet.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_synth_learner.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from fastai2.test_utils import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into something taking `(start, end)` and returning a function that depends on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, and `1-pct` for the `middle` to `end` part. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_flat_cos(self:Learner, n_epoch, lr=None, div_final=1e5, pct_start=0.75, wd=defaults.wd,
cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` at flat `lr` before a cosine annealing."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr is None else lr)
lr = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr, lr, lr/div_final)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
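# (Note, added for illustration) fit_flat_cos keeps the lr flat for `pct_start` of
# training, then cosine-anneals it down to lr/div_final, so the schedule below plots
# as a plateau followed by a cosine tail.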
learn = synth_learner()
learn.fit_flat_cos(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` for each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
tmp_f = self.path/self.model_dir/'_tmp.pth'
if tmp_f.exists():
self.learn.load('_tmp')
os.remove(tmp_f)
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=5):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True, show_plot=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
if show_plot: self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates on a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
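# (Sketch, not part of the library API) A rough take on the "steepest slope" heuristic
# mentioned above: pick the lr where the smoothed loss decreases fastest. Assumes
# `recorder.lrs`/`recorder.losses` were populated by a previous `lr_find` run.
def steepest_lr(recorder, skip_end=5):
    lrs    = list(recorder.lrs)[:-skip_end]
    losses = [float(o) for o in recorder.losses][:-skip_end]
    grads  = np.gradient(losses, np.log10(lrs))
    return lrs[int(np.argmin(grads))]
# e.g. lr_guess = steepest_lr(learn.recorder)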
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into something taking `(start, end)` and returning a function that depends on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
if is_listy(start):
return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, and `1-pct` for the `middle` to `end` part. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
fs = combined_cos(0.25, [0.25,0.5], [0.5,1.], [0.,0.])
test_eq(len(fs), 2)
f,g = fs
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.25, 0.33638, 0.5, 0.375, 0.])
test_close([g(0.), g(0.1), g(0.25), g(0.5), g(1.)], [0.5, 0.67275, 1., 0.75, 0.])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for pname,fs in self.scheds.items():
fs = L(fs)
if len(fs)==1: fs = fs*len(self.opt.param_groups)
for f,h in zip(fs,self.opt.hypers): h[pname] = f(pct)
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.data.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
lr_max = lr_max or self.lr
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner()
xb,yb = learn.data.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
lr_max = lr_max or self.lr
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` for each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
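# (Sketch, added for illustration) With cycle_len=1 and cycle_mult=2 the three cycles
# span 1/7, 2/7 and 4/7 of training, so the phase fractions passed to combine_scheds sum to 1.
n_ep = 1 * (2**3 - 1)//(2 - 1)
test_close([1 * 2**i / n_ep for i in range(3)], [1/7, 2/7, 4/7])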
iters = [k * len(learn.data.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self):
raise CancelValidException()
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one",
"begin_validate": "Skip the validation part of training"}
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.data.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates on a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
learn = synth_learner()
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_script.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_test_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_synth_learner.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule
> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into something taking `(start, end)` and returning a function that depends on `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
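#A minimal illustrative sketch (not part of the library): `annealer` turns a function of
#`(start, end, pos)` into a factory taking `(start, end)` that returns a schedule of `pos`.
#`SchedQuad` below is a hypothetical custom schedule defined the same way as the ones above.
@annealer
def SchedQuad(start, end, pos): return start + pos**2 * (end-start)
sq = SchedQuad(0, 2)
test_eq(L(map(sq, [0., 0.5, 1.])), [0., 0.5, 2.])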
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and has the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
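#Worked example (illustrative only): with `pcts=[0.3,0.2,0.5]` the cumulative boundaries are
#[0, 0.3, 0.5, 1.], so a global position of 0.4 falls in the second segment at relative
#position (0.4-0.3)/(0.5-0.3) = 0.5, i.e. the combined schedule evaluates `scheds[1](0.5)`.
g = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedLin(1.,2.), SchedLin(2.,3.)])
test_close(g(0.4), SchedLin(1.,2.)(0.5))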
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
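#Illustrative sketch of the shape of the schedules (not a library test): with the default
#moms=(0.95,0.85,0.95), momentum dips to 0.85 after `pct_start` of training and comes back
#to 0.95 at the end, while the learning rate peaks at `lr_max` at the same point.
m = combined_cos(0.25, 0.95, 0.85, 0.95)
test_close([m(0.), m(0.25), m(1.)], [0.95, 0.85, 0.95])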
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cycles that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` at each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
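#Worked example of the SGDR length formula (illustrative): with n_cycles=3, cycle_len=1 and
#cycle_mult=2 the cycles last 1, 2 and 4 epochs, so n_epoch = 1*(2**3-1)//(2-1) = 7 and the
#relative lengths passed to `combine_scheds` are pcts = [1/7, 2/7, 4/7].
test_eq(1 * (2**3-1) // (2-1), 7)
test_close([1 * 2**i / 7 for i in range(3)], [1/7, 2/7, 4/7])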
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates on a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 02_script.ipynb.
Converted 03_dataloader.ipynb.
Converted 04_transform.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule> Callback and helper functions to schedule any hyper-parameter
###Code
from local.test_utils import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into one taking `(start, end)` that returns a function of `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and has the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
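#Illustrative sketch (not a library test): at pos=0, pos=pct and pos=1 the combined schedule
#returns exactly `start`, `middle` and `end`.
g = combined_cos(0.25, 0.1, 1., 0.)
test_close([g(0.), g(0.25), g(1.)], [0.1, 1., 0.])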
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_flat_cos(self:Learner, n_epoch, lr=None, div_final=1e5, pct_start=0.75, wd=defaults.wd,
cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` at flat `lr` before a cosine annealing."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr is None else lr)
lr = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr, lr, lr/div_final)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
learn = synth_learner()
learn.fit_flat_cos(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cycles that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` at each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=5):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True, show_plot=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
if show_plot: self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates on a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
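#Illustrative sketch: the LR Finder schedule is exponential, so over `num_it` iterations it
#sweeps the whole range from `start_lr` to `end_lr` (1e-7 to 10 by default).
s = SchedExp(1e-7, 10)
test_close([s(0.), s(1.)], [1e-7, 10.])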
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Hyperparam schedule> Callback and helper functions to schedule any hyper-parameter
###Code
from local.test_utils import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into one taking `(start, end)` that returns a function of `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and has the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
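#Illustrative sketch of the learning rate shape (not a library test): the 1cycle schedule
#starts at lr_max/div, peaks at lr_max after `pct_start` of training and ends at
#lr_max/div_final; the values below assume lr_max=1e-2 with the default div and div_final.
h = combined_cos(0.25, 1e-2/25, 1e-2, 1e-2/1e5)
test_close([h(0.), h(0.25), h(1.)], [1e-2/25, 1e-2, 1e-7])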
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cycles that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` at each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates on a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 02_script.ipynb.
Converted 03_dataloader.ipynb.
Converted 04_transform.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into one taking `(start, end)` that returns a function of `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and has the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
if isinstance(start, Iterable):
return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. It handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
fs = combined_cos(0.25, [0.25,0.5], [0.5,1.], [0.,0.])
test_eq(len(fs), 2)
f,g = fs
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.25, 0.33638, 0.5, 0.375, 0.])
test_close([g(0.), g(0.1), g(0.25), g(0.5), g(1.)], [0.5, 0.67275, 1., 0.75, 0.])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for pname,fs in self.scheds.items():
fs = L(fs)
if len(fs)==1: fs = fs*len(self.opt.param_groups)
for f,h in zip(fs,self.opt.hypers): h[pname] = f(pct)
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
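#Hedged sketch (illustrative, reusing the two-group `_splitter` above): a plain list of
#schedulers, one per parameter group, can also be passed as the value for a hyper-parameter.
learn = synth_learner(splitter=_splitter)
sched = {'lr': [SchedLin(1e-4, 1e-3), SchedLin(1e-3, 1e-2)]}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])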
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
lr_max = self.lr if lr_max is None else lr_max
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner()
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=np.array([1e-3,1e-2]))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
lr_max = lr_max or self.lr
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cycles that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then the length is multiplied by `cycle_mult` at each subsequent cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
learn = synth_learner()
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
learn = synth_learner()
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule> Callback and helper functions to schedule any hyper-parameter
###Code
from local.test_utils import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into one that takes `(start, end)` and returns a function of `pos`.
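For instance, here is a minimal sketch of defining a custom schedule with it (`SchedSqrt` is a hypothetical name, not part of the library; it assumes the `annealer` decorator from the cell above is in scope):

```python
@annealer
def SchedSqrt(start, end, pos): return start + (end-start) * pos**0.5  # hypothetical square-root schedule

f = SchedSqrt(0., 1.)              # partially applied: only `pos` is left to supply
[f(p) for p in (0., 0.25, 1.)]     # -> [0.0, 0.5, 1.0]
```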
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. Handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
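For example, here is a minimal sketch scheduling two hyper-parameters at once (`synth_learner` comes from the test utilities imported at the top of this notebook; the values are arbitrary):

```python
learn = synth_learner()
scheds = {'lr':  SchedCos(1e-3, 1e-5),   # a single scheduler applied to every parameter group
          'mom': SchedLin(0.95, 0.85)}   # a second hyper-parameter handled by the same callback
learn.fit(1, cbs=ParamScheduler(scheds))
learn.recorder.hps.keys()                # -> dict_keys(['lr', 'mom'])
```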
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
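To see concretely what the policy does to the learning rate, here is a minimal sketch using `combined_cos` directly with the default `div=25.`, `div_final=1e5` and `pct_start=0.25` (the `lr_max` value is arbitrary):

```python
lr_max = 1e-2
f = combined_cos(0.25, lr_max/25., lr_max, lr_max/1e5)
[f(t) for t in (0., 0.25, 1.)]   # warm-up start (lr_max/25), peak (lr_max), final value (lr_max/div_final)
```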
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
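For example, with `n_cycles=3`, `cycle_len=1` and `cycle_mult=2` (the values used in the test below), the cycle lengths and the `pcts` passed to `combine_scheds` work out as in this small sketch:

```python
n_cycles, cycle_len, cycle_mult = 3, 1, 2
lengths = [cycle_len * cycle_mult**i for i in range(n_cycles)]  # [1, 2, 4] epochs per cycle
n_epoch = sum(lengths)                                          # 7, i.e. cycle_len*(cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [l/n_epoch for l in lengths]                             # fraction of training given to each cycle
lengths, n_epoch, pcts
```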
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
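You can also narrow the search range; a minimal sketch (the values below are arbitrary, and the temporary directory just keeps the `_tmp` checkpoint out of the way, as in the test above):

```python
with tempfile.TemporaryDirectory() as d:
    learn = synth_learner(path=Path(d))
    learn.lr_find(start_lr=1e-6, end_lr=1, num_it=80)
```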
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule> Callback and helper functions to schedule any hyper-parameter
###Code
from local.utils.test import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into one that takes `(start, end)` and returns a function of `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
if is_listy(start):
return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. Handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
fs = combined_cos(0.25, [0.25,0.5], [0.5,1.], [0.,0.])
test_eq(len(fs), 2)
f,g = fs
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.25, 0.33638, 0.5, 0.375, 0.])
test_close([g(0.), g(0.1), g(0.25), g(0.5), g(1.)], [0.5, 0.67275, 1., 0.75, 0.])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for pname,fs in self.scheds.items():
fs = L(fs)
if len(fs)==1: fs = fs*len(self.opt.param_groups)
for f,h in zip(fs,self.opt.hypers): h[pname] = f(pct)
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
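As a sketch of the list-of-schedulers case (the splitter below is a hypothetical two-group splitter, and it assumes `synth_learner` accepts a `splitter` argument; the learning-rate values are arbitrary):

```python
def _splitter(m): return [[m.a], [m.b]]                       # two parameter groups
learn = synth_learner(splitter=_splitter)
sched = {'lr': [SchedLin(1e-4, 1e-3), SchedLin(1e-3, 1e-2)]}  # one scheduler per parameter group
learn.fit(1, cbs=ParamScheduler(sched))
```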
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.data.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
lr_max = lr_max or self.lr
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner()
xb,yb = learn.data.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
lr_max = lr_max or self.lr
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.data.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self):
raise CancelValidException()
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one",
"begin_validate": "Skip the validation part of training"}
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=0):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
return fig
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.data.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
learn = synth_learner()
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_dataloader.ipynb.
Converted 01a_script.ipynb.
Converted 02_transforms.ipynb.
Converted 03_pipeline.ipynb.
Converted 04_data_external.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_test_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 60_vision_models_xresnet.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_synth_learner.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Hyperparam schedule> Callback and helper functions to schedule any hyper-parameter
###Code
from local.test_utils import *
###Output
_____no_output_____
###Markdown
Annealing
###Code
#export
def annealer(f):
"Decorator to make `f` return itself partially applied."
@functools.wraps(f)
def _inner(start, end): return partial(f, start, end)
return _inner
###Output
_____no_output_____
###Markdown
This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into one that takes `(start, end)` and returns a function of `pos`.
###Code
#export
@annealer
def SchedLin(start, end, pos): return start + pos*(end-start)
@annealer
def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def SchedNo (start, end, pos): return start
@annealer
def SchedExp(start, end, pos): return start * (end/start) ** pos
SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
SchedNo .__doc__ = "Constant schedule function with `start` value"
SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
#export
def SchedPoly(start, end, power):
"Polynomial schedule (of `power`) function from `start` to `end`"
def _inner(pos): return start + (end - start) * pos ** power
return _inner
annealings = "NO LINEAR COS EXP".split()
p = torch.linspace(0.,1,100)
fns = [SchedNo, SchedLin, SchedCos, SchedExp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(p, [f(o) for o in p], label=t)
f = SchedPoly(2,1e-2,0.5)
plt.plot(p, [f(o) for o in p], label="POLY(0.5)")
plt.legend();
show_doc(SchedLin)
sched = SchedLin(0, 2)
test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.])
show_doc(SchedCos)
sched = SchedCos(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.])
show_doc(SchedNo)
sched = SchedNo(0, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.])
show_doc(SchedExp)
sched = SchedExp(1, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.])
show_doc(SchedPoly)
sched = SchedPoly(0, 2, 2)
test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.])
p = torch.linspace(0.,1,100)
pows = [0.5,1.,2.]
for e in pows:
f = SchedPoly(2, 0, e)
plt.plot(p, [f(o) for o in p], label=f'power {e}')
plt.legend();
#export
def combine_scheds(pcts, scheds):
"Combine `scheds` according to `pcts` in one function"
assert sum(pcts) == 1.
pcts = tensor([0] + L(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
if pos == 1.: return scheds[-1](1.)
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
###Output
_____no_output_____
###Markdown
`pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth.
###Code
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)])
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)],
[0., 0.5, 1., 1., 1., 0.65451, 0.])
#export
def combined_cos(pct, start, middle, end):
"Return a combined scheduler with cosine annealing from `start` to `middle` then `middle` to `end`"
#if isinstance(start, Iterable):
# return [combine_scheds([pct,1-pct], [SchedCos(s, m), SchedCos(m, e)])
# for s,m,e in zip(start,middle,end)]
return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
###Output
_____no_output_____
###Markdown
This is a useful helper function for the 1cycle policy. `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. Handles floats or collections of floats.
###Code
p = torch.linspace(0.,1,100)
f = combined_cos(0.25,0.5,1.,0.)
plt.plot(p, [f(o) for o in p]);
#hide
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.])
f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.]))
test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)],
[[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]])
###Output
_____no_output_____
###Markdown
ParamScheduler -
###Code
#export
@docs
class ParamScheduler(Callback):
"Schedule hyper-parameters according to `scheds`"
run_after=TrainEvalCallback
def __init__(self, scheds): self.scheds = scheds
def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
def _update_val(self, pct):
for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
def begin_batch(self):
if not self.training: return
self._update_val(self.pct_train)
def after_batch(self):
if self.training:
for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
def after_fit(self):
if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps
_docs = {"begin_fit": "Initialize container for hyper-parameters",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch",
"after_fit": "Save the hyper-parameters in the recorder if there is one"}
###Output
_____no_output_____
###Markdown
`scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
###Code
learn = synth_learner()
sched = {'lr': SchedLin(1e-3, 1e-2)}
learn.fit(1, cbs=ParamScheduler(sched))
n = len(learn.dbunch.train_dl)
test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))}
learn.fit(1, cbs=ParamScheduler(sched))
show_doc(ParamScheduler.begin_fit)
show_doc(ParamScheduler.begin_batch)
show_doc(ParamScheduler.after_batch)
show_doc(ParamScheduler.after_fit)
#export
@patch
def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=defaults.wd,
moms=(0.95,0.85,0.95), cbs=None, reset_opt=False):
"Fit `self.model` for `n_epoch` using the 1cycle policy."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
'mom': combined_cos(pct_start, *moms)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`.
###Code
#Integration test: training a few epochs should make the model better
learn = synth_learner(lr=1e-2)
xb,yb = learn.dbunch.one_batch()
init_loss = learn.loss_func(learn.model(xb), yb)
learn.fit_one_cycle(2)
assert learn.loss < init_loss
#Scheduler test
lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom']
test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)])
test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)])
#export
@patch
def plot_sched(self:Recorder, figsize=None):
rows,cols = (len(self.hps)+1)//2, min(2, len(self.hps))
figsize = figsize or (6*cols,4*rows)
_, axs = plt.subplots(rows, cols, figsize=figsize)
axs = axs.flatten() if len(self.hps) > 1 else L(axs)
for p,ax in zip(self.hps.keys(), axs):
ax.plot(self.hps[p])
ax.set_ylabel(p)
#hide
#test discriminative lrs
def _splitter(m): return [[m.a], [m.b]]
learn = synth_learner(splitter=_splitter)
learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2))
#n = len(learn.dbunch.train_dl)
#est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)])
learn = synth_learner()
learn.fit_one_cycle(2)
learn.recorder.plot_sched()
#export
@patch
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=defaults.wd):
"Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
if self.opt is None: self.create_opt()
self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
lr_max = np.array([h['lr'] for h in self.opt.hypers])
n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
scheds = {'lr': combine_scheds(pcts, scheds)}
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
###Output
_____no_output_____
###Markdown
This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` that are cosine annealings from `lr_max` (defaults to the `Learner` lr) to 0, with a length of `cycle_len * cycle_mult**i` for the `i`-th cycle (the first one is `cycle_len`-long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
###Code
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dbunch.train_dl) for k in [0,1,3,7]]
for i in range(3):
n = iters[i+1]-iters[i]
#The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])
learn.recorder.plot_sched()
###Output
_____no_output_____
###Markdown
LRFind -
###Code
#export
@docs
class LRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
run_after=Recorder
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
def begin_fit(self):
super().begin_fit()
self.learn.save('_tmp')
self.best_loss = float('inf')
def begin_batch(self):
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
super().after_batch()
if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
if self.train_iter >= self.num_it: raise CancelFitException()
def begin_validate(self): raise CancelValidException()
def after_fit(self):
self.learn.load('_tmp')
os.remove(self.path/self.model_dir/'_tmp.pth')
_docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
"begin_batch": "Set the proper hyper-parameters in the optimizer",
"after_batch": "Record hyper-parameters of this batch and potentially stop training",
"after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
"begin_validate": "Skip the validation part of training"}
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
init_a,init_b = learn.model.a,learn.model.b
with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))
assert len(learn.recorder.lrs) <= 100
test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
#Check stop if diverge
if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
#Test schedule
test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
#No validation data
test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
#Model loaded back properly
test_eq(learn.model.a, init_a)
test_eq(learn.model.b, init_b)
test_eq(learn.opt.state_dict()['state'], [{}, {}])
show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)
#export
@patch
def plot_lr_find(self:Recorder, skip_end=5):
"Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
lrs = self.lrs if skip_end==0 else self.lrs [:-skip_end]
losses = self.losses if skip_end==0 else self.losses[:-skip_end]
fig, ax = plt.subplots(1,1)
ax.plot(lrs, losses)
ax.set_ylabel("Loss")
ax.set_xlabel("Learning Rate")
ax.set_xscale('log')
#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True, show_plot=True):
"Launch a mock training to find a good learning rate"
n_epoch = num_it//len(self.dbunch.train_dl) + 1
cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
with self.no_logging(): self.fit(n_epoch, cbs=cb)
if show_plot: self.recorder.plot_lr_find()
###Output
_____no_output_____
###Markdown
First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses versus the learning rates with a log scale. A good value for the learning rate is then either the point where the slope is steepest, or one tenth of the minimum before the divergence.
###Code
#slow
with tempfile.TemporaryDirectory() as d:
learn = synth_learner(path=Path(d))
learn.lr_find()
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
|
Phase_4/ds-singular_value_decomposition-kvo32-main/svd.ipynb | ###Markdown
On the Meaning and Use of Singular Value Decomposition
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Agenda

SWBAT:

- explain the notion of singular value decomposition;
- describe the relationship between SVD and eigendecomposition;
- describe the relationship between SVD and PCA.

Eigendecomposition

Let's start with eigendecomposition. Remember PCA?

Any *non-defective* and *square* matrix $A$ has an eigendecomposition:

$\large A = Q\Lambda Q^{-1}$,

where:

- the columns of $Q$ are the eigenvectors of $A$, and
- $\Lambda$ is a diagonal matrix whose non-zero entries are the eigenvalues of $A$.

Eigendecompositions have *many* practical applications. But since not all matrices are square, not all matrices have eigendecompositions.

Singular Value Decomposition

However, given a non-square matrix $R$, we can construct a square matrix by simply calculating $RR^T$ or $R^TR$.

(The 'T' superscript indicates that the relevant matrix is *transposed*, i.e. its rows and columns are switched. So for example the transpose of $\begin{bmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \end{bmatrix}$ is $\begin{bmatrix} 0 & 3 & 6 \\ 1 & 4 & 7 \\ 2 & 5 & 8 \end{bmatrix}$.)

Moreover, _any_ matrix $A$ has a ***singular value decomposition***, i.e. a factorization in the form:

$\large A = U\Sigma V^T$,

where

- $\Sigma$ is diagonal if square and otherwise "pseudo-diagonal" (a diagonal matrix sitting on top of zeroes) with real, non-negative entries; and
- $U$ and $V$ are orthogonal.

A matrix $Q$ is orthogonal if its columns are mutually orthogonal and normalized to lengths of 1. This guarantees that, for orthogonal $Q$, $Q^TQ = I$. Thus, $Q^T = Q^T(QQ^{-1}) = (Q^TQ)Q^{-1}$, so $Q^T = Q^{-1}$.

Note also that, if $V$ is orthogonal, then so is $V^{-1}$:

$(V^{-1})^T = (V^T)^T$
$(V^{-1})^T = V$
$(V^{-1})^T = (V^{-1})^{-1}$

Now (this text adapted from [Inderjit Dhillon](http://www.cs.utexas.edu/users/inderjit/courses/dm2009/LinearAlgebraBackground.pdf)), using the singular value decomposition of $A$, we have:

$\large AA^T = U\Sigma V^T\times(U\Sigma V^T)^T$
$\large AA^T = U\Sigma V^T\times V\Sigma^TU^T$
$\large AA^T = U\Sigma I\Sigma U^T$
$\large AA^T = U\Sigma^2U^T$

Here we have an eigendecomposition of $AA^T$ in terms of the SVD of $A$! In particular:

- the eigenvectors of $AA^T$ are the columns of $U$; and
- the eigenvalues of $AA^T$ are the squares of the singular values of $A$.

Similarly, $\large A^TA = V\Sigma^2V^T$. And so:

- the eigenvectors of $A^TA$ are the columns of $V$; and
- the eigenvalues of $A^TA$ are the squares of the singular values of $A$.

Put another way: the singular values of $A$ are the non-negative square roots of the eigenvalues of $AA^T$ or $A^TA$.

Again (see [this page](https://math.stackexchange.com/questions/2152751/why-does-the-eigenvalues-of-aat-are-the-squares-of-the-singular-values-of-a)): since $\large AA^T = U\Sigma^2U^T$, we have that $\large AA^TU = U\Sigma^2$ (since $U$ is orthogonal), which says that $AA^T$ multiplied by a vector (let's choose $U_i$, a column of $U$) yields that same vector multiplied by a scalar, namely $\sigma^2_i$. I.e. the squares of the singular values of $A$ are the (non-zero) eigenvalues of $A^TA$ (or $AA^T$).

Eigenvalues of $AA^T$ and $A^TA$

Why do $AA^T$ and $A^TA$ have the same eigenvalues? Let $\lambda$ be a (non-zero) eigenvalue of $A^TA$. Then:

$A^TAx = \lambda x$.

Now let $Ax = y$. So we have $A^Ty = \lambda x$. If we left-multiply by $A$, we have:

$AA^Ty = A\lambda x = \lambda Ax = \lambda y$,

which is to say that $\lambda$ is an eigenvalue of $AA^T$. (To show the reverse, just let $B = A^T$.)

For more, see [this post](https://math.stackexchange.com/questions/1087064/non-zero-eigenvalues-of-aat-and-ata).


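As a quick numerical sanity check of the eigendecomposition above (a minimal sketch; the matrix `M` and the seed are arbitrary choices, and `numpy` is imported at the top of this notebook):

```python
np.random.seed(1)
M = np.random.rand(4, 4)                      # a square (almost surely non-defective) matrix
evals, Q = np.linalg.eig(M)                   # columns of Q are the eigenvectors
Lam = np.diag(evals)                          # Lambda: eigenvalues on the diagonal
np.allclose(M, Q @ Lam @ np.linalg.inv(Q))    # expected: True (up to floating-point error)
```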
Diagonalization vs. SVD

This shows that the eigen- and the singular value decompositions are intimately related!

The SVD has many uses! Check out [this book chapter](https://www.cs.cmu.edu/~venkatg/teaching/CStheory-infoage/book-chapter-4.pdf) for details.

SVD and Dimensionality Reduction

The SVD, much like PCA, can be used to reduce the dimensionality of your data. Recall how PCA works: We start with an eigendecomposition of our covariance matrix. We then order our eigenvectors by the size of their corresponding eigenvalues. Eigenvectors with large eigenvalues explain more of the variance in our dataset, and so we define our principal components accordingly.

The situation is much the same with the SVD. The singular vectors that correspond to larger singular values capture more of the variance in our data. And, just as with PCA, we can often capture a large percentage of the variance by taking a relatively small number of singular vectors, throwing out the ones that correspond to small singular values. [Here](https://rpubs.com/Tanmay007/svd) is a good example of this process.

SVD in Python

Let's show that the squares of the singular values of a non-square matrix $A$ are equal to the eigenvalues of $A^TA$ (or $AA^T$).
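(As a brief aside on the dimensionality-reduction point above, here is a minimal sketch of rank-$k$ truncation; the matrix, the seed, and $k$ are arbitrary choices for illustration. The demonstration announced in the last sentence follows in the next cell.)

```python
np.random.seed(0)
X = np.random.rand(6, 4)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2                                          # keep only the two largest singular values
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # best rank-k approximation of X
np.linalg.matrix_rank(X_k), np.linalg.norm(X - X_k)
```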
###Code
np.random.seed(42)
A = np.random.rand(5, 3)
A
# Using np.linalg.svd()
np.linalg.svd(A)
###Output
_____no_output_____
###Markdown
`np.linalg.svd()` returns a triple, unsurprisingly: the left singular vectors ("U"), the singular values, and the (transpose of the) right singular vectors ("V"). We can re-create A by multiplying these components together. But we'll first have to turn the array of singular values into $\Sigma$, so we'll use `np.diag()` together with `np.vstack()`.
###Code
u, s, vT = np.linalg.svd(A)
sigma = np.vstack([np.diag(s), [[0, 0, 0], [0, 0, 0]]])
sigma
# This should reproduce the matrix A
u.dot(sigma).dot(vT)
###Output
_____no_output_____
###Markdown
Now let's look at the eigendecomposition of $AA^T$.
###Code
np.linalg.eig(A.dot(A.T))
###Output
_____no_output_____
###Markdown
Let's extract the eigenvalues.
###Code
np.linalg.eig(A.dot(A.T))[0]
###Output
_____no_output_____
###Markdown
The last two eigenvalues are really equal to *zero*, which of course can often confuse computers. Squaring the singular values of $A$ should yield the same list (without the zeroes):
###Code
np.linalg.svd(A)[1]**2
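# Extra check (not in the original notebook): the same numbers computed as the
# eigenvalues of A^T A; eigvalsh returns them in ascending order, so reverse them:
np.allclose(np.linalg.eigvalsh(A.T.dot(A))[::-1], np.linalg.svd(A)[1]**2)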
###Output
_____no_output_____
###Markdown
Relation to PCA We'll start by **centering** the matrix, i.e. subtracting the mean of the relevant column from each entry:
###Code
centered_A = np.vstack([A[:, col] - A.mean(axis=0)[col] for col in range(3)]).T
centered_A
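# Extra sanity check (not in the original notebook): the same centering in one step
np.allclose(centered_A, A - A.mean(axis=0))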
###Output
_____no_output_____
###Markdown
The column sums should now be 0:
###Code
centered_A.sum(axis=0)
###Output
_____no_output_____
###Markdown
The covariance matrix of a centered matrix $A$ is equal to $A^TA/(n-1)$, where $n$ is the number of rows of $A$. See [this helpful post](https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca).
###Code
np.allclose(np.cov(centered_A.T), centered_A.T.dot(centered_A) / 4)
###Output
_____no_output_____
###Markdown
SVD of centered matrix:
###Code
u_c, s_c, vT_c = np.linalg.svd(centered_A)
###Output
_____no_output_____
###Markdown
PCA begins by diagonalizing the covariance matrix. And with these matrices generated by the SVD we can reproduce the covariance matrix of $A_{centered}$:
###Code
vT_c.T.dot(np.diag(s_c)**2).dot(vT_c) / 4
np.cov(centered_A.T)
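# Extra sketch (not in the original notebook; assumes scikit-learn's PCA is available).
# PCA's principal axes are the right singular vectors of the centered matrix (up to sign),
# and its explained variances are s_c**2 / (n - 1):
from sklearn.decomposition import PCA
pca = PCA(n_components=3).fit(centered_A)
print(pca.components_)            # rows match the rows of vT_c up to sign flips
print(vT_c)
print(pca.explained_variance_)    # equals s_c**2 / 4
print(s_c**2 / 4)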
###Output
_____no_output_____
###Markdown
Least-Squares ProblemThe singular value decomposition can be used to solve a least-squares problem quickly. Let's create such a problem. Comparison to the matrix-vector equation, $M\vec{x} = \vec{b}$Suppose we have an exact equation, $M\vec{x} = \vec{b}$.In that case $M$ is square, and the solution to the equation is $x = M^{-1}b$.
###Code
np.random.seed(43)
M = np.random.rand(5, 5)
b = np.random.rand(5, 1)
b
x = np.linalg.inv(M).dot(b)
# Reproducing the vector b
M.dot(x)
###Output
_____no_output_____
###Markdown
Optimization ProblemBut of course the typical DS situation is that we have not an exact equation to solve but rather an optimization to perform. So let's now imagine that $A$ has more rows than columns.If we need some warm and fuzzy familiarity, we could throw this all into a DataFrame:
###Code
pd.DataFrame(np.hstack([A, b]),
columns=['pred1', 'pred2', 'pred3', 'target'])
###Output
_____no_output_____
###Markdown
Treating the columns of $A$ as our predictors and $b$ as our target, the answer to this least-squares problem turns out to be $A^+\vec{b}$, where $A^+$ is the *pseudo-inverse* of $A$. The formula for the pseudo-inverse is $(A^TA)^{-1}A^T$, and the idea behind it is to generalize the notion of an inverse to non-square matrices. The pseudo-inverse reduces to the inverse in the case of square matrices.
###Code
mat = np.random.rand(100, 100)
np.allclose(np.linalg.inv(mat), np.linalg.pinv(mat))
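# Extra check (not in the original notebook): for the tall matrix A above, the
# pseudo-inverse matches the formula (A^T A)^{-1} A^T from the text:
np.allclose(np.linalg.pinv(A), np.linalg.inv(A.T.dot(A)).dot(A.T))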
###Output
_____no_output_____
###Markdown
If we have $A = U\Sigma V^T$, then $A^+ = V\Sigma^+U^T$. Numpy has a pseudo-inverse function, `np.linalg.pinv()`, which we could use directly, rather than first constructing the SVD. But because the decomposed equation involves only the pseudo-inverse of a (pseudo-) diagonal matrix, the SVD route can be much *faster*. **This is the real point of using the SVD in calculating least-squares solutions.** See [this site](https://math.stackexchange.com/questions/974193/why-does-svd-provide-the-least-squares-and-least-norm-solution-to-a-x-b) for a proof of the least-squares solution. For more on the pseudo-inverse, see [here](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse).
###Code
# Let's calculate the least-squares solution using our SVD components:
vT.T.dot(np.linalg.pinv(sigma)).dot(u.T).dot(b)
# Checking against sklearn's LinearRegression():
LinearRegression(fit_intercept=False).fit(A, b).coef_
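# Extra check (not in the original notebook): numpy's dedicated least-squares
# solver should agree with both of the above:
np.linalg.lstsq(A, b, rcond=None)[0]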
###Output
_____no_output_____
###Markdown
In fact, `LinearRegression()` uses SVD under the hood! Timings
###Code
%timeit vT.T.dot(np.linalg.pinv(sigma)).dot(u.T).dot(b)
%timeit LinearRegression(fit_intercept=False).fit(A, b).coef_
%timeit np.linalg.pinv(A).dot(b)
###Output
37.5 µs ± 511 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
For our small sample matrix, `sklearn`'s version actually takes longer. But let's try a much larger matrix!
###Code
np.random.seed(42)
X = np.random.rand(10000, 100)
target = np.random.rand(10000, 1)
%timeit np.linalg.pinv(X).dot(target)
%timeit LinearRegression(fit_intercept=False).fit(X, target).coef_
###Output
37.3 ms ± 1.06 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
On the Meaning and Use of Singular Value Decomposition
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Agenda SWBAT:- explain the notion of singular value decomposition;- describe the relationship between SVD and eigendecomposition;- describe the relationship between SVD and PCA. EigendecompositionLet's start with eigendecomposition. Remember PCA?Any *non-defective* and *square* matrix $A$ has an eigendecomposition:$\large A = Q\Lambda Q^{-1}$,where:- the columns of $Q$ are the eigenvectors of $A$, and- $\Lambda$ is a diagonal matrix whose non-zero entries are the eigenvalues of $A$.Eigendecompositions have *many* practical applications.But since not all matrices are square, not all matrices have eigendecompositions. Singular Value DecompositionHowever, given a non-square matrix $R$, we can construct a square matrix by simply calculating $RR^T$ or $R^TR$.(The 'T' superscript indicates that the relevant matrix is *transposed*, i.e. its rows and columns are switched. So for example the transpose of $\begin{bmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \end{bmatrix}$ is $\begin{bmatrix} 0 & 3 & 6 \\ 1 & 4 & 7 \\ 2 & 5 & 8 \end{bmatrix}$.)Moreover, _any_ matrix $A$ has a ***singular value decomposition***, i.e. a factorization in the form:$\large A = U\Sigma V^T$,where- $\Sigma$ is diagonal if square and otherwise "pseudo-diagonal" (a diagonal matrix sitting on top of zeroes) with real, non-negative entries; and- $U$ and $V$ are orthogonal.A matrix $Q$ is orthogonal if its columns are mutually orthogonal and normalized to lengths of 1. This guarantees that, for orthogonal $Q$, $Q^TQ = I$. Thus, $Q^T = Q^T(QQ^{-1}) = (Q^TQ)Q^{-1}$, so $Q^T = Q^{-1}$.Note also that, if $V$ is orthogonal, then so is $V^{-1}$:$(V^{-1})^T = (V^T)^T$ $(V^{-1})^T = V$ $(V^{-1})^T = (V^{-1})^{-1}$ Now (this text adapted from [Inderjit Dhillon](http://www.cs.utexas.edu/users/inderjit/courses/dm2009/LinearAlgebraBackground.pdf)):Using the singular value decomposition of $A$, we have:$\large AA^T = U\Sigma V^T\times(U\Sigma V^T)^T$ $\large AA^T = U\Sigma V^T\times V\Sigma^TU^T$ $\large AA^T = U\Sigma I\Sigma U^T$ $\large AA^T = U\Sigma^2U^T$ Here we have an eigendecomposition of $AA^T$ in terms of the SVD of $A$!In particular:the eigenvectors of $AA^T$ are the columns of $U$; and the eigenvalues of $AA^T$ are the squares of the singular values of $A$. Similarly, $\large A^TA = V\Sigma^2V^T$.And so:the eigenvectors of $A^TA$ are the columns of $V$; and the eigenvalues of $A^TA$ are the squares of the singular values of $A$.Put another way: The singular values of A are the non-negative square roots of the eigenvalues of $AA^T$ or $A^TA$. Again (see [this page](https://math.stackexchange.com/questions/2152751/why-does-the-eigenvalues-of-aat-are-the-squares-of-the-singular-values-of-a)):Since: $\large AA^T = U\Sigma^2U^T$, we have that $\large AA^TU = U\Sigma^2$ (since $U$ is orthogonal), which says that $AA^T$ multiplied by a vector (let's choose $U_i$, a column of $U$) yields $U$ multiplied by a scalar, namely $\sigma^2_i$. I.e. the squares of the singular values of $A$ are the (non-zero) eigenvalues of $A^TA$ (or $AA^T$). Eigenvalues of $AA^T$ and $A^TA$Why do $AA^T$ and $A^TA$ have the same eigenvalues?Let $\lambda$ be a (non-zero) eigenvalue of $A^TA$.Then:$A^TAx = \lambda x$.Now let $Ax = y$. So we have:$A^Ty = \lambda x$. If we left-multiply by $A$, we have:$AA^Ty = A\lambda x = \lambda Ax = \lambda y$, which is to say that $\lambda$ is an eigenvalue of $AA^T$.(To show the reverse, just let $B = A^T$.)For more, see [this post](https://math.stackexchange.com/questions/1087064/non-zero-eigenvalues-of-aat-and-ata). 
Diagonalization vs. SVDThis shows that the eigen- and the singular value decompositions are intimately related!The SVD has many uses! Check out [this book chapter](https://www.cs.cmu.edu/~venkatg/teaching/CStheory-infoage/book-chapter-4.pdf) for details. SVD and Dimensionality ReductionThe SVD, much like PCA, can be used to reduce the dimensionality of your data.Recall how PCA works: We start with an eigendecomposition of our covariance matrix. We then order our eigenvectors by the size of their corresponding eigenvalues. Eigenvectors with large eigenvalues explain more of the variance in our dataset, and so we define our principal components accordingly.The situation is much the same with the SVD. The singular vectors that correspond to larger singular values capture more of the variance in our data. And, just as with PCA, we can often capture a large percentage of the variance by taking a relatively small number of singular vectors, throwing out the ones that correspond to small singular values. [Here](https://rpubs.com/Tanmay007/svd) is a good example of this process. SVD in PythonLet's show that the squares of the singular values of a non-square matrix $A$ are equal to the eigenvalues of $A^TA$ (or $AA^T$).
###Code
np.random.seed(42)
A = np.random.rand(5, 3)
A
# Using np.linalg.svd()
np.linalg.svd(A)
###Output
_____no_output_____
###Markdown
`np.linalg.svd()` returns a triple, unsurprisingly: The left singular vectors ("U"), the singular values, and the (transpose of the) right singular vectors ("V").We can re-create A by multiplying these components together. But we'll first have to turn the array of singular values into $\Sigma$, so we'll use `np.diag()` together with `np.vstack()`.
###Code
u, s, vT = np.linalg.svd(A)
sigma = np.vstack([np.diag(s), [[0, 0, 0], [0, 0, 0]]])
sigma
# This should reproduce the matrix A
u.dot(sigma).dot(vT)
###Output
_____no_output_____
###Markdown
Now let's look at the eigendecomposition of $AA^T$.
###Code
np.linalg.eig(A.dot(A.T))
###Output
_____no_output_____
###Markdown
Let's extract the eigenvalues.
###Code
np.linalg.eig(A.dot(A.T))[0]
###Output
_____no_output_____
###Markdown
The last two eigenvalues are really equal to *zero*, which of course can often confuse computers. Squaring the singular values of $A$ should yield the same list (without the zeroes):
###Code
np.linalg.svd(A)[1]**2
###Output
_____no_output_____
###Markdown
Relation to PCA We'll start by **centering** the matrix, i.e. subtracting the mean of the relevant column from each entry:
###Code
centered_A = np.vstack([A[:, col] - A.mean(axis=0)[col] for col in range(3)]).T
centered_A
###Output
_____no_output_____
###Markdown
The column sums should now be 0:
###Code
centered_A.sum(axis=0)
###Output
_____no_output_____
###Markdown
The covariance matrix of a centered matrix $A$ is equal to $A^TA/(n-1)$, where $n$ is the number of rows of $A$. See [this helpful post](https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca).
###Code
np.allclose(np.cov(centered_A.T), centered_A.T.dot(centered_A) / 4)
###Output
_____no_output_____
###Markdown
SVD of centered matrix:
###Code
u_c, s_c, vT_c = np.linalg.svd(centered_A)
###Output
_____no_output_____
###Markdown
PCA begins by diagonalizing the covariance matrix. And with these matrices generated by the SVD we can reproduce the covariance matrix of $A_{centered}$:
###Code
vT_c.T.dot(np.diag(s_c)**2).dot(vT_c) / 4
np.cov(centered_A.T)
###Output
_____no_output_____
###Markdown
Least-Squares ProblemThe singular value decomposition can be used to solve a least-squares problem quickly. Let's create such a problem. Comparison to the matrix-vector equation, $M\vec{x} = \vec{b}$Suppose we have an exact equation, $M\vec{x} = \vec{b}$.In that case $M$ is square, and the solution to the equation is $x = M^{-1}b$.
###Code
np.random.seed(43)
M = np.random.rand(5, 5)
b = np.random.rand(5, 1)
b
x = np.linalg.inv(M).dot(b)
# Reproducing the vector b
M.dot(x)
###Output
_____no_output_____
###Markdown
Optimization ProblemBut of course the typical DS situation is that we have not an exact equation to solve but rather an optimization to perform. So let's now imagine that $A$ has more rows than columns.If we need some warm and fuzzy familiarity, we could throw this all into a DataFrame:
###Code
pd.DataFrame(np.hstack([A, b]),
columns=['pred1', 'pred2', 'pred3', 'target'])
###Output
_____no_output_____
###Markdown
Treating the columns of $A$ as our predictors and $b$ as our target, the answer to this least-squares problem turns out to be $A^+\vec{b}$, where $A^+$ is the *pseudo-inverse* of $A$. The formula for the pseudo-inverse is $(A^TA)^{-1}A^T$, and the idea behind it is to generalize the notion of an inverse to non-square matrices. The pseudo-inverse reduces to the inverse in the case of square matrices.
###Code
mat = np.random.rand(100, 100)
np.allclose(np.linalg.inv(mat), np.linalg.pinv(mat))
###Output
_____no_output_____
###Markdown
If we have $A = U\Sigma V^T$, then $A^+ = V\Sigma^+U^T$. Numpy has a pseudo-inverse function, `np.linalg.pinv()`, which we could use directly, rather than first constructing the SVD. But because the decomposed equation involves only the pseudo-inverse of a (pseudo-) diagonal matrix, the SVD route can be much *faster*. **This is the real point of using the SVD in calculating least-squares solutions.** See [this site](https://math.stackexchange.com/questions/974193/why-does-svd-provide-the-least-squares-and-least-norm-solution-to-a-x-b) for a proof of the least-squares solution. For more on the pseudo-inverse, see [here](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse).
###Code
# Let's calculate the least-squares solution using our SVD components:
vT.T.dot(np.linalg.pinv(sigma)).dot(u.T).dot(b)
# Checking against sklearn's LinearRegression():
LinearRegression(fit_intercept=False).fit(A, b).coef_
###Output
_____no_output_____
###Markdown
In fact, `LinearRegression()` uses SVD under the hood! Timings
###Code
%timeit vT.T.dot(np.linalg.pinv(sigma)).dot(u.T).dot(b)
%timeit LinearRegression(fit_intercept=False).fit(A, b).coef_
%timeit np.linalg.pinv(A).dot(b)
###Output
_____no_output_____
###Markdown
For our small sample matrix, `sklearn`'s version actually takes longer. But let's try a much larger matrix!
###Code
np.random.seed(42)
X = np.random.rand(10000, 100)
target = np.random.rand(10000, 1)
%timeit np.linalg.pinv(X).dot(target)
%timeit LinearRegression(fit_intercept=False).fit(X, target).coef_
###Output
_____no_output_____ |
.ipynb_checkpoints/Lab #1-checkpoint.ipynb | ###Markdown
Problem 1: Gaussian Distribution
###Code
import numpy as np                 # imports this lab relies on (assumed; not shown in the checkpoint)
from scipy import stats
import matplotlib.pyplot as plt

mu = 0
sigma = 1
s = np.linspace(-4,4,1000)
t = stats.norm.pdf(s,mu,sigma)
plt.plot(s,t);
plt.xlabel('Values')
plt.ylabel('Probability')
plt.title('Normal distribution');
val = np.array([-1, 0, 1]) # points to test
calc_areas = stats.norm.cdf([-1, 0, 1]) # probability at each point
print(calc_areas)
###Output
[0.15865525 0.5 0.84134475]
###Markdown
1B
###Code
Z = (val - mu)/sigma # z vals do match, need all the tables
print(Z)
###Output
[-1. 0. 1.]
###Markdown
1C
###Code
reverse = stats.norm.ppf(calc_areas) # finding point by plugging in probabilities
reverse # probabilities match up with given values
stats.norm.ppf(calc_areas[1]) # gives z score corresponding to probabilities
stats.norm.ppf(calc_areas[2])
## 1D - z score will be negative if prob is less than .5 due to the integration limits
###Output
_____no_output_____
###Markdown
Problem 2: Chi Squared Distribution
###Code
#PDF
fig, ax = plt.subplots(1, 1)
df = 6
x = np.linspace(-1,35,1000)
ax.plot(x, stats.chi2.pdf(x, df),linewidth=3);
ax.set_xlabel('Values')
ax.set_ylabel('Probability')
ax.set_title(r'$\chi^2$ Distribution');
fig, ax = plt.subplots(1, 1)
r = stats.chi2.rvs(loc=0,scale=1,size=100000,df=6)
ax.hist(r,100,alpha=1,density=True)
x = np.linspace(-1,35,1000)
ax.plot(x,stats.chi2.pdf(x, df),linewidth=5,alpha=.7); ## realization
ax.set_xlabel('Values')
ax.set_ylabel('Probability')
ax.set_title(r'$\chi^2$ Realization');
###Output
_____no_output_____
###Markdown
Problem 3: Hypothetical Measurements
###Code
meas_val = 7
# given the signal free data of z, what is the probability that my measurement of 7 or lower
# is legitimate data and not an outlier from the data?
##integral = int(-inf,7) of chi2 pdf
prob = stats.chi2.cdf([7],df)
print(prob)
print(stats.chi2.ppf(prob,df))
# corresponding z score to a probability of .679 is approximately .47
zscore = .47
mean = np.mean(z)  # z is assumed to be the signal-free (chi-squared) sample referred to above, e.g. the realization from Problem 2
sigma = (meas_val - mean)/zscore
sigma
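# Added note (not in the original lab): the z-score read off a table above (~.47)
# can also be computed directly from the probability:
stats.norm.ppf(prob)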
# different values, measured = 8,
new_meas = 8
prob2 = stats.chi2.cdf([8],df)
zscore2 = .71
sigma2 = (new_meas-mean)/zscore2
print(prob2,sigma2)
# diff vals, measured = 2
last_meas = 2
prob3 = stats.chi2.cdf([2],df)
zscore3 = 1.4
sigma3 = abs((last_meas-mean)/zscore3)
print(prob3,sigma3)
## patterns noticed: the further away from the mean, the larger the sigma attributed to the measurement
###Output
[0.76189669] 2.8071329838127066
[0.0803014] 5.643571241539407
###Markdown
Non-Continuous Distributions
###Code
# 1A - Poisson
plt.subplots_adjust(bottom=.2, top=1,
left=.01, right=1.5,
hspace=.35, wspace=.35)
plt.suptitle('Poisson Distributions following different mu and k values',x=.85)
k = np.zeros(3)
mu = np.zeros(3)
#samples = np.zeros((9,1000))
for i in range(0,3):
k[i] = (2**(i+1))*10
mu[i] = (3**(i+1))*.1
plt.subplot(3,3,1)
x1 = np.arange(stats.poisson.ppf(.01,mu[0]),stats.poisson.ppf(.99,mu[0]),1/k[0])
plt.plot(x1,stats.poisson.pmf(x1,mu[0]))
plt.ylabel('k = 20')
plt.subplot(3,3,4)
x2 = np.arange(stats.poisson.ppf(.01,mu[0]),stats.poisson.ppf(.99,mu[0]),1/k[1])
plt.plot(x2,stats.poisson.pmf(x2,mu[0]))
plt.ylabel('k = 40')
plt.subplot(3,3,5)
x3 = np.arange(stats.poisson.ppf(.01,mu[1]),stats.poisson.ppf(.99,mu[1]),1/k[1])
plt.plot(x3,stats.poisson.pmf(x3,mu[1]))
plt.subplot(3,3,7)
x4 = np.arange(stats.poisson.ppf(.01,mu[0]),stats.poisson.ppf(.99,mu[0]),1/k[2])
plt.plot(x4,stats.poisson.pmf(x4,mu[0]))
plt.xlabel('mu = .2')
plt.ylabel('k = 80')
plt.subplot(3,3,8)
x5 = np.arange(stats.poisson.ppf(.01,mu[1]),stats.poisson.ppf(.99,mu[1]),1/k[2])
plt.plot(x5,stats.poisson.pmf(x5,mu[1]))
plt.xlabel('mu = .4')
plt.subplot(3,3,9);
x6 = np.arange(stats.poisson.ppf(.01,mu[2]),stats.poisson.ppf(.99,mu[2]),1/k[2])
plt.plot(x6,stats.poisson.pmf(x6,mu[2]))
plt.xlabel('mu = .8');
# 1 B
## the peaks represent the number of events that could statistically happen given the rate of occurrence; mu
## the peaks only fall on integers on the x axis due to ".5 events" not being possible
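# Added illustration (not part of the original answer): the pmf is zero at non-integers,
# i.e. "2.5 events" carries no probability mass:
print(stats.poisson.pmf(2.5, 4), stats.poisson.pmf(2, 4))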
# 1 C
# what is the probability of 2 events happening given an average occurrence of 4 events per time interval
#
# looking for probability associated with peak at x=2 and mu = 4
prob_pois = stats.poisson.pmf(2,4)
prob_pois
###Output
_____no_output_____ |
tutorials/Certification_Trainings/Public/databricks_notebooks/1. Quickstart Tutorial on Spark NLP.ipynb | ###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) 1.Quickstart Tutorial on Spark NLP - 1 hrThis is the 1 hr workshop version of the entire training notebooks : https://github.com/JohnSnowLabs/spark-nlp-workshop/tree/master/tutorials/Certification_Trainings/Public an intro article for Spark NLP:https://towardsdatascience.com/introduction-to-spark-nlp-foundations-and-basic-components-part-i-c83b7629ed59How to start Spark NLP in 2 weeks:https://towardsdatascience.com/how-to-get-started-with-sparknlp-in-2-weeks-cb47b2ba994dhttps://towardsdatascience.com/how-to-wrap-your-head-around-spark-nlp-a6f6a968b7e8Article for NER and text classification in Spark NLPhttps://towardsdatascience.com/named-entity-recognition-ner-with-bert-in-spark-nlp-874df20d1d77https://medium.com/spark-nlp/named-entity-recognition-for-healthcare-with-sparknlp-nerdl-and-nercrf-a7751b6ad571https://towardsdatascience.com/text-classification-in-spark-nlp-with-bert-and-universal-sentence-encoders-e644d618ca32a webinar to show how to train a NER model from scratch (90 min)https://www.youtube.com/watch?v=djWX0MR2Oooworkshop repo that you can start playing with Spark NLP in Colab(you will also see Databricks notebook under each folder)https://github.com/JohnSnowLabs/spark-nlp-workshop/tree/master/tutorials/Certification_Trainings Coding ...
###Code
import sparknlp
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline
print("Spark NLP version", sparknlp.version())
spark
###Output
_____no_output_____
###Markdown
Using Pretrained Pipelinesfor a more detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/1.SparkNLP_Basics.ipynb
###Code
from sparknlp.pretrained import PretrainedPipeline
pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
###Output
_____no_output_____
###Markdown
**Stages**- DocumentAssembler- SentenceDetector- Tokenizer- NER (NER with GloVe 100D embeddings, CoNLL2003 dataset)- Lemmatizer- Stemmer- Part of Speech- SpellChecker (Norvig)
###Code
testDoc = '''
Peter Parker is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Mercedes Benz is also working on a driverless car.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = pipeline_dl.annotate(testDoc)
result.keys()
result['entities']
import pandas as pd
df = pd.DataFrame({'token':result['token'], 'ner_label':result['ner'],
'spell_corrected':result['checked'], 'POS':result['pos'],
'lemmas':result['lemma'], 'stems':result['stem']})
df
###Output
_____no_output_____
###Markdown
Using fullAnnotate to get more details
###Code
detailed_result = pipeline_dl.fullAnnotate(testDoc)
detailed_result[0]['entities']
chunks=[]
entities=[]
for n in detailed_result[0]['entities']:
chunks.append(n.result)
entities.append(n.metadata['entity'])
df = pd.DataFrame({'chunks':chunks, 'entities':entities})
df
tuples = []
for x,y,z in zip(detailed_result[0]["token"], detailed_result[0]["pos"], detailed_result[0]["ner"]):
tuples.append((int(x.metadata['sentence']), x.result, x.begin, x.end, y.result, z.result))
df = pd.DataFrame(tuples, columns=['sent_id','token','start','end','pos', 'ner'])
df
###Output
_____no_output_____
###Markdown
Sentiment Analysis
###Code
sentiment = PretrainedPipeline('analyze_sentiment', lang='en')
result = sentiment.annotate("The movie I watched today was not a good one")
result['sentiment']
sentiment_imdb_glove = PretrainedPipeline('analyze_sentimentdl_glove_imdb', lang='en')
comment = '''
It's a very scary film but what impressed me was how true the film sticks to the original's tricks; it isn't filled with loud in-your-face jump scares, in fact, a lot of what makes this film scary is the slick cinematography and intricate shadow play. The use of lighting and creation of atmosphere is what makes this film so tense, which is why it's perfectly suited for those who like Horror movies but without the obnoxious gore.
'''
result = sentiment_imdb_glove.annotate(comment)
result['sentiment']
###Output
_____no_output_____
###Markdown
Using the modules in a pipeline for custom tasksfor a more detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/2.Text_Preprocessing_with_SparkNLP_Annotators_Transformers.ipynb
###Code
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/annotation/english/spark-nlp-basics/sample-sentences-en.txt
dbutils.fs.cp("file:/databricks/driver/sample-sentences-en.txt", "dbfs:/")
with open('sample-sentences-en.txt') as f:
print (f.read())
spark_df = spark.read.text('/sample-sentences-en.txt').toDF('text')
spark_df.show(truncate=False)
textFiles = spark.sparkContext.wholeTextFiles("/sample-sentences-en.txt",4) # or/*.txt
spark_df_folder = textFiles.toDF(schema=['path','text'])
spark_df_folder.show(truncate=30)
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer() \
.setInputCols(["sentences"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.show(truncate=20)
result.printSchema()
result.select('sentences.result').take(3)
###Output
_____no_output_____
###Markdown
StopWords Cleaner
###Code
stopwords_cleaner = StopWordsCleaner()\
.setInputCols("token")\
.setOutputCol("cleanTokens")\
.setCaseSensitive(False)
stopwords_cleaner.getStopWords()[:10]
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
stopwords_cleaner
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.show()
result.select('cleanTokens.result').take(1)
###Output
_____no_output_____
###Markdown
Text Matcher
###Code
! wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Public/data/news_category_train.csv
dbutils.fs.cp("file:/databricks/driver/news_category_train.csv", "dbfs:/")
news_df = spark.read \
.option("header", True) \
.csv("/news_category_train.csv")
news_df.show(5, truncate=50)
entities = ['Wall Street', 'USD', 'stock', 'NYSE']
with open ('financial_entities.txt', 'w') as f:
for i in entities:
f.write(i+'\n')
entities = ['soccer', 'world cup', 'Messi', 'FC Barcelona']
with open ('sport_entities.txt', 'w') as f:
for i in entities:
f.write(i+'\n')
dbutils.fs.cp("file:/databricks/driver/financial_entities.txt", "dbfs:/")
dbutils.fs.cp("file:/databricks/driver/sport_entities.txt", "dbfs:/")
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
financial_entity_extractor = TextMatcher() \
.setInputCols(["document",'token'])\
.setOutputCol("financial_entities")\
.setEntities("file:/databricks/driver/financial_entities.txt")\
.setCaseSensitive(False)\
.setEntityValue('financial_entity')
sport_entity_extractor = TextMatcher() \
.setInputCols(["document",'token'])\
.setOutputCol("sport_entities")\
.setEntities("file:/databricks/driver/sport_entities.txt")\
.setCaseSensitive(False)\
.setEntityValue('sport_entity')
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
financial_entity_extractor,
sport_entity_extractor
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df)
result.select('financial_entities.result','sport_entities.result').take(2)
###Output
_____no_output_____
###Markdown
This means there are no financial and sport entities in the first two lines.
###Code
from pyspark.sql import functions as F
result.select('description','financial_entities.result','sport_entities.result')\
.toDF('text','financial_matches','sport_matches').filter((F.size('financial_matches')>1) | (F.size('sport_matches')>1))\
.show(truncate=70)
###Output
_____no_output_____
###Markdown
Using the pipeline in a LightPipeline
###Code
light_model = LightPipeline(pipelineModel)
light_result = light_model.fullAnnotate("Google, Inc. significantly cut the expected share price for its stock at Wall Street")
light_result[0]['financial_entities']
###Output
_____no_output_____
###Markdown
Pretrained Models Spark NLP offers the following pre-trained models in around **40 languages** and all you need to do is to load the pre-trained model into your disk by specifying the model name and then configuring the model parameters as per your use case and dataset. Then you will not need to worry about training a new model from scratch and will be able to enjoy the pre-trained SOTA algorithms directly applied to your own data with transform().In the official documentation, you can find detailed information regarding how these models are trained by using which algorithms and datasets.https://github.com/JohnSnowLabs/spark-nlp-models for a more detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/3.SparkNLP_Pretrained_Models.ipynb LemmatizerModel and ContextSpellCheckerModel
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
spellModel = ContextSpellCheckerModel\
.pretrained('spellcheck_dl')\
.setInputCols("token")\
.setOutputCol("checked")
lemmatizer = LemmatizerModel.pretrained('lemma_antbnc', 'en') \
.setInputCols(["checked"]) \
.setOutputCol("lemma")
pipeline = Pipeline(stages = [
documentAssembler,
tokenizer,
spellModel,
lemmatizer
])
empty_ds = spark.createDataFrame([[""]]).toDF("text")
sc_model = pipeline.fit(empty_ds)
lp = LightPipeline(sc_model)
result = lp.annotate("Plaese alliow me tao introdduce myhelf, I am a man of waelth und tiaste and he just knows that")
list(zip(result['token'],result['checked'],result['lemma']))
###Output
_____no_output_____
###Markdown
Word and Sentence Embeddings Word Embeddings
###Code
glove_embeddings = WordEmbeddingsModel.pretrained('glove_100d')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
glove_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(1))
output = result.select('token.result','embeddings.embeddings').limit(1).rdd.flatMap(lambda x: x).collect()
pd.DataFrame({'token':output[0],'embeddings':output[1]})
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip(result.token.result, result.embeddings.embeddings)).alias("cols")) \
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("embeddings"))
result_df.show(10, truncate=100)
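# Added sketch (not in the original notebook): word embeddings can be pooled into a
# single vector per document with the SentenceEmbeddings annotator, e.g. by averaging.
sentence_embedder = SentenceEmbeddings()\
    .setInputCols(["document", "embeddings"])\
    .setOutputCol("sentence_embeddings")\
    .setPoolingStrategy("AVERAGE")
avg_pipeline = Pipeline(stages=[documentAssembler, tokenizer, glove_embeddings, sentence_embedder])
avg_result = avg_pipeline.fit(empty_df).transform(news_df.limit(10))
avg_result.select('sentence_embeddings.embeddings').show(3, truncate=80)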
###Output
_____no_output_____
###Markdown
Bert Embeddings
###Code
bert_embeddings = BertEmbeddings.pretrained('bert_base_cased')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
bert_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip(result.token.result, result.embeddings.embeddings)).alias("cols")) \
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("bert_embeddings"))
result_df.show(truncate=100)
###Output
_____no_output_____
###Markdown
Bert Sentence Embeddings
###Code
bert_sentence_embeddings = BertSentenceEmbeddings.pretrained('sent_small_bert_L6_128')\
.setInputCols(["document"])\
.setOutputCol("bert_sent_embeddings")
nlpPipeline = Pipeline(stages=[
documentAssembler,
bert_sentence_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip(result.document.result, result.bert_sent_embeddings.embeddings)).alias("cols"))\
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("bert_sent_embeddings"))
result_df.show(truncate=100)
###Output
_____no_output_____
###Markdown
Universal Sentence Encoder
###Code
# no need for token columns
use_embeddings = UniversalSentenceEncoder.pretrained('tfhub_use')\
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
from pyspark.sql import functions as F
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip(result.document.result, result.sentence_embeddings.embeddings)).alias("cols"))\
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("USE_embeddings"))
result_df.show(truncate=100)
###Output
_____no_output_____
###Markdown
Named Entity Recognition (NER) Models for a detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/4.NERDL_Training.ipynb
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
glove_embeddings = WordEmbeddingsModel.pretrained('glove_100d')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
onto_ner = NerDLModel.pretrained("onto_100", 'en') \
.setInputCols(["document", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["document", "token", "ner"]) \
.setOutputCol("ner_chunk")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
glove_embeddings,
onto_ner,
ner_converter
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result.select(F.explode(F.arrays_zip(result.ner_chunk.result, result.ner_chunk.metadata)).alias("cols")) \
.select(F.expr("cols['0']").alias("chunk"),
F.expr("cols['1']['entity']").alias("ner_label")).show(truncate=False)
light_model = LightPipeline(pipelineModel)
light_result = light_model.fullAnnotate('Peter Parker is a nice persn and lives in New York. Bruce Wayne is also a nice guy and lives in Gotham City.')
chunks = []
entities = []
for n in light_result[0]['ner_chunk']:
chunks.append(n.result)
entities.append(n.metadata['entity'])
import pandas as pd
df = pd.DataFrame({'chunks':chunks, 'entities':entities})
df
###Output
_____no_output_____
###Markdown
Train a NER model **To train a new NER from scratch, check out**https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/4.NERDL_Training.ipynb
###Code
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/src/test/resources/conll2003/eng.train
#dbutils.fs.cp("file:/databricks/driver/sample-sentences-en.txt", "dbfs:/")
from sparknlp.training import CoNLL
training_data = CoNLL().readDataset(spark, 'file:/databricks/driver/eng.train')
training_data.show(3)
training_data.select(F.explode(F.arrays_zip(training_data.token.result, training_data.label.result)).alias("cols"))\
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("ground_truth"))\
.groupBy('ground_truth').count().orderBy('count', ascending=False).show(100,truncate=False)
# You can use any word embeddings you want (Glove, Elmo, Bert, custom etc.)
glove_embeddings = WordEmbeddingsModel.pretrained('glove_100d')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
%fs mkdirs dbfs:/ner_logs
nerTagger = NerDLApproach()\
.setInputCols(["sentence", "token", "embeddings"])\
.setLabelColumn("label")\
.setOutputCol("ner")\
.setMaxEpochs(2)\
.setLr(0.003)\
.setPo(0.05)\
.setBatchSize(32)\
.setRandomSeed(0)\
.setVerbose(1)\
.setValidationSplit(0.2)\
.setEvaluationLogExtended(True) \
.setEnableOutputLogs(True)\
.setIncludeConfidence(True)\
.setOutputLogsPath('dbfs:/ner_logs') # if not set, logs will be written to ~/annotator_logs
#.setGraphFolder('graphs') >> put your graph file (pb) under this folder if you are using a custom graph generated thru NerDL-Graph
ner_pipeline = Pipeline(stages=[
glove_embeddings,
nerTagger
])
# remove the existing logs
!rm -r /dbfs/ner_logs/*
ner_model = ner_pipeline.fit(training_data)
# 1 epoch takes around 2.5 min with batch size=32
# if you get an error for incompatible TF graph, use NERDL Graph script to generate the necessary TF graph at the end of this notebook
#%sh cd ~/annotator_logs && ls -lt
%sh cd /dbfs/ner_logs && pwd && ls -l
%sh head -n 45 /dbfs/ner_logs/NerDLApproach_*
%fs mkdirs dbfs:/models
ner_model.stages[1].write().overwrite().save('dbfs:/models/NER_glove_e1_b32')
%sh cd /dbfs/models/ && pwd && ls -l
###Output
_____no_output_____
###Markdown
Load saved model
###Code
document = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentence = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentence')
token = Tokenizer()\
.setInputCols(['sentence'])\
.setOutputCol('token')
glove_embeddings = WordEmbeddingsModel.pretrained('glove_100d')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
# load back and use in any pipeline
loaded_ner_model = NerDLModel.load("dbfs:/models/NER_glove_e1_b32")\
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("ner")
converter = NerConverter()\
.setInputCols(["document", "token", "ner"])\
.setOutputCol("ner_span")
ner_prediction_pipeline = Pipeline(stages = [
document,
sentence,
token,
glove_embeddings,
loaded_ner_model,
converter
])
empty_data = spark.createDataFrame([['']]).toDF("text")
prediction_model = ner_prediction_pipeline.fit(empty_data)
text = "Peter Parker is a nice guy and lives in New York."
sample_data = spark.createDataFrame([[text]]).toDF("text")
sample_data.show()
preds = prediction_model.transform(sample_data)
preds.select(F.explode(F.arrays_zip(preds.ner_span.result,preds.ner_span.metadata)).alias("entities")) \
.select(F.expr("entities['0']").alias("chunk"),
F.expr("entities['1'].entity").alias("entity")).show(truncate=False)
###Output
_____no_output_____
###Markdown
Text Classificationfor a detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/5.Text_Classification_with_ClassifierDL.ipynb
###Code
! wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Public/data/news_category_test.csv
dbutils.fs.cp("file:/databricks/driver/news_category_test.csv", "dbfs:/")
from pyspark.sql.functions import col
trainDataset = spark.read \
.option("header", True) \
.csv("/news_category_train.csv")
trainDataset.groupBy("category") \
.count() \
.orderBy(col("count").desc()) \
.show()
testDataset = spark.read \
.option("header", True) \
.csv("/news_category_test.csv")
testDataset.groupBy("category") \
.count() \
.orderBy(col("count").desc()) \
.show()
%fs mkdirs dbfs:/clf_dl_logs
# actual content is inside description column
document = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
# we can also use sentence detector here if we want to train on and get predictions for each sentence
use_embeddings = UniversalSentenceEncoder.pretrained('tfhub_use')\
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
# the classes/labels/categories are in category column
classsifierdl = ClassifierDLApproach()\
.setInputCols(["sentence_embeddings"])\
.setOutputCol("class")\
.setLabelColumn("category")\
.setMaxEpochs(5)\
.setBatchSize(8)\
.setLr(0.001)\
.setEnableOutputLogs(True)\
.setOutputLogsPath('dbfs:/clf_dl_logs')
use_clf_pipeline = Pipeline(
stages = [
document,
use_embeddings,
classsifierdl
])
# remove the existing logs
! rm -r /dbfs/clf_dl_logs/*
use_pipelineModel = use_clf_pipeline.fit(trainDataset)
# 5 epochs takes around 3 min
%sh cd /dbfs/clf_dl_logs/ && ls -lt
%sh cat /dbfs/clf_dl_logs/ClassifierDLApproach*
from sparknlp.base import LightPipeline
light_model = LightPipeline(use_pipelineModel)
text='''
Fearing the fate of Italy, the centre-right government has threatened to be merciless with those who flout tough restrictions.
As of Wednesday it will also include all shops being closed across Greece, with the exception of supermarkets. Banks, pharmacies, pet-stores, mobile phone stores, opticians, bakers, mini-markets, couriers and food delivery outlets are among the few that will also be allowed to remain open.
'''
result = light_model.annotate(text)
result['class']
light_model.annotate('the soccer games will be postponed.')
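# Added sketch (not in the original notebook): one way to gauge the trained classifier
# on the held-out news_category_test.csv loaded above is to transform it and compare
# the predicted class with the 'category' column.
preds = use_pipelineModel.transform(testDataset)
preds_pd = preds.select('category', 'class.result').toPandas()
preds_pd['result'] = preds_pd['result'].apply(lambda x: x[0] if len(x) > 0 else None)
print('accuracy:', (preds_pd['category'] == preds_pd['result']).mean())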
###Output
_____no_output_____
###Markdown
NerDL Graph
###Code
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/create_graph.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/dataset_encoder.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/ner_model.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/ner_model_saver.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/sentence_grouper.py
import sys
sys.path.append('/databricks/driver/')
sys.path.append('/databricks/driver/create_graph.py')
import create_graph
ntags = 12 # number of labels
embeddings_dim = 90
nchars =60
create_graph.create_graph(ntags, embeddings_dim, nchars)
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) 1.Quickstart Tutorial on Spark NLP - 1 hrThis is the 1 hr workshop version of the entire training notebooks : https://github.com/JohnSnowLabs/spark-nlp-workshop/tree/master/tutorials/Certification_Trainings/Public an intro article for Spark NLP:https://towardsdatascience.com/introduction-to-spark-nlp-foundations-and-basic-components-part-i-c83b7629ed59How to start Spark NLP in 2 weeks:https://towardsdatascience.com/how-to-get-started-with-sparknlp-in-2-weeks-cb47b2ba994dhttps://towardsdatascience.com/how-to-wrap-your-head-around-spark-nlp-a6f6a968b7e8Article for NER and text classification in Spark NLPhttps://towardsdatascience.com/named-entity-recognition-ner-with-bert-in-spark-nlp-874df20d1d77https://medium.com/spark-nlp/named-entity-recognition-for-healthcare-with-sparknlp-nerdl-and-nercrf-a7751b6ad571https://towardsdatascience.com/text-classification-in-spark-nlp-with-bert-and-universal-sentence-encoders-e644d618ca32a webinar to show how to train a NER model from scratch (90 min)https://www.youtube.com/watch?v=djWX0MR2Oooworkshop repo that you can start playing with Spark NLP in Colab(you will also see Databricks notebook under each folder)https://github.com/JohnSnowLabs/spark-nlp-workshop/tree/master/tutorials/Certification_Trainings Coding ...
###Code
import sparknlp
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline
print("Spark NLP version", sparknlp.version())
spark
###Output
_____no_output_____
###Markdown
Using Pretrained Pipelinesfor a more detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/1.SparkNLP_Basics.ipynb
###Code
from sparknlp.pretrained import PretrainedPipeline
pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
###Output
_____no_output_____
###Markdown
**Stages**- DocumentAssembler- SentenceDetector- Tokenizer- NER (NER with GloVe 100D embeddings, CoNLL2003 dataset)- Lemmatizer- Stemmer- Part of Speech- SpellChecker (Norvig)
###Code
testDoc = '''
Peter Parker is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Mercedes Benz is also working on a driverless car.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = pipeline_dl.annotate(testDoc)
result.keys()
result['entities']
import pandas as pd
df = pd.DataFrame({'token':result['token'], 'ner_label':result['ner'],
'spell_corrected':result['checked'], 'POS':result['pos'],
'lemmas':result['lemma'], 'stems':result['stem']})
df
###Output
_____no_output_____
###Markdown
Using fullAnnotate to get more details
###Code
detailed_result = pipeline_dl.fullAnnotate(testDoc)
detailed_result[0]['entities']
chunks=[]
entities=[]
for n in detailed_result[0]['entities']:
chunks.append(n.result)
entities.append(n.metadata['entity'])
df = pd.DataFrame({'chunks':chunks, 'entities':entities})
df
tuples = []
for x,y,z in zip(detailed_result[0]["token"], detailed_result[0]["pos"], detailed_result[0]["ner"]):
tuples.append((int(x.metadata['sentence']), x.result, x.begin, x.end, y.result, z.result))
df = pd.DataFrame(tuples, columns=['sent_id','token','start','end','pos', 'ner'])
df
###Output
_____no_output_____
###Markdown
Sentiment Analysis
###Code
sentiment = PretrainedPipeline('analyze_sentiment', lang='en')
result = sentiment.annotate("The movie I watched today was not a good one")
result['sentiment']
# DL version (using Universal sentence encoder - USE)
# 930 MB as it downloads the USE as well
sentiment_twitter = PretrainedPipeline('analyze_sentimentdl_use_twitter', lang='en')
###Output
_____no_output_____
###Markdown
Using the modules in a pipeline for custom tasksfor a more detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/2.Text_Preprocessing_with_SparkNLP_Annotators_Transformers.ipynb
###Code
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/annotation/english/spark-nlp-basics/sample-sentences-en.txt
dbutils.fs.cp("file:/databricks/driver/sample-sentences-en.txt", "dbfs:/")
with open('sample-sentences-en.txt') as f:
print (f.read())
spark_df = spark.read.text('/sample-sentences-en.txt').toDF('text')
spark_df.show(truncate=False)
textFiles = spark.sparkContext.wholeTextFiles("/sample-sentences-en.txt",4) # or/*.txt
spark_df_folder = textFiles.toDF(schema=['path','text'])
spark_df_folder.show(truncate=30)
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer() \
.setInputCols(["sentences"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.show(truncate=20)
result.printSchema()
result.select('sentences.result').take(3)
###Output
_____no_output_____
###Markdown
StopWords Cleaner
###Code
stopwords_cleaner = StopWordsCleaner()\
.setInputCols("token")\
.setOutputCol("cleanTokens")\
.setCaseSensitive(False)
stopwords_cleaner.getStopWords()[:10]
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
stopwords_cleaner
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.show()
result.select('cleanTokens.result').take(1)
###Output
_____no_output_____
###Markdown
Text Matcher
###Code
! wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Public/data/news_category_train.csv
dbutils.fs.cp("file:/databricks/driver/news_category_train.csv", "dbfs:/")
news_df = spark.read \
.option("header", True) \
.csv("/news_category_train.csv")
news_df.show(5, truncate=50)
entities = ['Wall Street', 'USD', 'stock', 'NYSE']
with open ('financial_entities.txt', 'w') as f:
for i in entities:
f.write(i+'\n')
entities = ['soccer', 'world cup', 'Messi', 'FC Barcelona']
with open ('sport_entities.txt', 'w') as f:
for i in entities:
f.write(i+'\n')
dbutils.fs.cp("file:/databricks/driver/financial_entities.txt", "dbfs:/")
dbutils.fs.cp("file:/databricks/driver/sport_entities.txt", "dbfs:/")
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
financial_entity_extractor = TextMatcher() \
.setInputCols(["document",'token'])\
.setOutputCol("financial_entities")\
.setEntities("file:/databricks/driver/financial_entities.txt")\
.setCaseSensitive(False)\
.setEntityValue('financial_entity')
sport_entity_extractor = TextMatcher() \
.setInputCols(["document",'token'])\
.setOutputCol("sport_entities")\
.setEntities("file:/databricks/driver/sport_entities.txt")\
.setCaseSensitive(False)\
.setEntityValue('sport_entity')
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
financial_entity_extractor,
sport_entity_extractor
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df)
result.select('financial_entities.result','sport_entities.result').take(2)
###Output
_____no_output_____
###Markdown
This means there are no financial and sport entities in the first two lines.
###Code
from pyspark.sql import functions as F
result.select('description','financial_entities.result','sport_entities.result')\
.toDF('text','financial_matches','sport_matches').filter((F.size('financial_matches')>1) | (F.size('sport_matches')>1))\
.show(truncate=70)
###Output
_____no_output_____
###Markdown
Using the pipeline in a LightPipeline
###Code
light_model = LightPipeline(pipelineModel)
light_result = light_model.fullAnnotate("Google, Inc. significantly cut the expected share price for its stock at Wall Street")
light_result[0]['financial_entities']
###Output
_____no_output_____
###Markdown
Pretrained Models Spark NLP offers the following pre-trained models in around **40 languages** and all you need to do is to load the pre-trained model into your disk by specifying the model name and then configuring the model parameters as per your use case and dataset. Then you will not need to worry about training a new model from scratch and will be able to enjoy the pre-trained SOTA algorithms directly applied to your own data with transform().In the official documentation, you can find detailed information regarding how these models are trained by using which algorithms and datasets.https://github.com/JohnSnowLabs/spark-nlp-models for a more detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/3.SparkNLP_Pretrained_Models.ipynb LemmatizerModel and ContextSpellCheckerModel
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
spellModel = ContextSpellCheckerModel\
.pretrained('spellcheck_dl')\
.setInputCols("token")\
.setOutputCol("checked")
lemmatizer = LemmatizerModel.pretrained('lemma_antbnc', 'en') \
.setInputCols(["checked"]) \
.setOutputCol("lemma")
pipeline = Pipeline(stages = [
documentAssembler,
tokenizer,
spellModel,
lemmatizer
])
empty_ds = spark.createDataFrame([[""]]).toDF("text")
sc_model = pipeline.fit(empty_ds)
lp = LightPipeline(sc_model)
result = lp.annotate("Plaese alliow me tao introdduce myhelf, I am a man of waelth und tiaste and he just knows that")
list(zip(result['token'],result['checked'],result['lemma']))
###Output
_____no_output_____
###Markdown
Word and Sentence Embeddings Word Embeddings
###Code
glove_embeddings = WordEmbeddingsModel.pretrained('glove_100d')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
glove_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(1))
output = result.select('token.result','embeddings.embeddings').limit(1).rdd.flatMap(lambda x: x).collect()
pd.DataFrame({'token':output[0],'embeddings':output[1]})
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip('token.result', 'embeddings.embeddings')).alias("cols")) \
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("embeddings"))
result_df.show(10, truncate=100)
###Output
_____no_output_____
###Markdown
Bert Embeddings
###Code
bert_embeddings = BertEmbeddings.pretrained('bert_base_cased')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
bert_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip('token.result', 'embeddings.embeddings')).alias("cols")) \
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("bert_embeddings"))
result_df.show(truncate=100)
###Output
_____no_output_____
###Markdown
Bert Sentence Embeddings
###Code
bert_sentence_embeddings = BertSentenceEmbeddings.pretrained('sent_small_bert_L6_128')\
.setInputCols(["document"])\
.setOutputCol("bert_sent_embeddings")
nlpPipeline = Pipeline(stages=[
documentAssembler,
bert_sentence_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip('document.result', 'bert_sent_embeddings.embeddings')).alias("cols")) \
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("bert_sent_embeddings"))
result_df.show(truncate=100)
###Output
_____no_output_____
###Markdown
Universal Sentence Encoder
###Code
# no need for token columns
use_embeddings = UniversalSentenceEncoder.pretrained('tfhub_use')\
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use_embeddings
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result_df = result.select(F.explode(F.arrays_zip('document.result', 'sentence_embeddings.embeddings')).alias("cols")) \
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("USE_embeddings"))
result_df.show(truncate=100)
###Output
_____no_output_____
###Markdown
Named Entity Recognition (NER) Models for a detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/4.NERDL_Training.ipynb
###Code
glove_embeddings = WordEmbeddingsModel.pretrained('glove_100d')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
onto_ner = NerDLModel.pretrained("onto_100", 'en') \
.setInputCols(["document", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["document", "token", "ner"]) \
.setOutputCol("ner_chunk")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
glove_embeddings,
onto_ner,
ner_converter
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df.limit(10))
result.select(F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata')).alias("cols")) \
.select(F.expr("cols['0']").alias("chunk"),
F.expr("cols['1']['entity']").alias("ner_label")).show(truncate=False)
light_model = LightPipeline(pipelineModel)
light_result = light_model.fullAnnotate('Peter Parker is a nice persn and lives in New York. Bruce Wayne is also a nice guy and lives in Gotham City.')
chunks = []
entities = []
for n in light_result[0]['ner_chunk']:
chunks.append(n.result)
entities.append(n.metadata['entity'])
import pandas as pd
df = pd.DataFrame({'chunks':chunks, 'entities':entities})
df
###Output
_____no_output_____
###Markdown
Train a NER model **To train a new NER from scratch, check out**https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/4.NERDL_Training.ipynb
###Code
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/src/test/resources/conll2003/eng.train
#dbutils.fs.cp("file:/databricks/driver/sample-sentences-en.txt", "dbfs:/")
from sparknlp.training import CoNLL
training_data = CoNLL().readDataset(spark, 'file:/databricks/driver/eng.train')
training_data.show(3)
training_data.select(F.explode(F.arrays_zip('token.result','label.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("ground_truth")).groupBy('ground_truth').count().orderBy('count', ascending=False).show(100,truncate=False)
# You can use any word embeddings you want (Glove, Elmo, Bert, custom etc.)
glove_embeddings = WordEmbeddingsModel.pretrained('glove_100d')\
.setInputCols(["document", "token"])\
.setOutputCol("embeddings")
nerTagger = NerDLApproach()\
.setInputCols(["sentence", "token", "embeddings"])\
.setLabelColumn("label")\
.setOutputCol("ner")\
.setMaxEpochs(1)\
.setLr(0.003)\
.setPo(0.05)\
.setBatchSize(32)\
.setRandomSeed(0)\
.setVerbose(1)\
.setValidationSplit(0.2)\
.setEvaluationLogExtended(True) \
.setEnableOutputLogs(True)\
.setIncludeConfidence(True)\
.setOutputLogsPath('ner_logs') # if not set, logs will be written to ~/annotator_logs
#.setGraphFolder('graphs') >> put your graph file (pb) under this folder if you are using a custom graph generated thru NerDL-Graph
ner_pipeline = Pipeline(stages=[
glove_embeddings,
nerTagger
])
ner_model = ner_pipeline.fit(training_data)
# 1 epoch takes around 2.5 min with batch size=32
# if you get an error for incompatible TF graph, use NERDL Graph script to generate the necessary TF graph at the end of this notebook
#%sh cd ~/annotator_logs && ls -lt
%sh cd ner_logs && ls -lt
%sh head -n 45 ner_logs/NerDLApproach_86ff127a6f55.log
%sh ls -la
%sh mkdir models
ner_model.stages[1].write().overwrite().save('/databricks/driver/models/NER_glove_e1_b32')
# load back and use in any pipeline
loaded_ner_model = NerDLModel.load("/databricks/driver/models/NER_glove_e1_b32")\
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("ner")
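# a sketch (not from the original notebook) of dropping the loaded model into a new pipeline;
# the loaded model expects a "sentence" column, so a SentenceDetector stage would be needed,
# while the other stages are the ones defined earlier in this notebook:
# sentence_detector = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
# reloaded_pipeline = Pipeline(stages=[documentAssembler, sentence_detector, tokenizer,
#                                      glove_embeddings, loaded_ner_model, ner_converter])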
###Output
_____no_output_____
###Markdown
Text Classification. For a detailed notebook, see https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/5.Text_Classification_with_ClassifierDL.ipynb
###Code
! wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Public/data/news_category_test.csv
dbutils.fs.cp("file:/databricks/driver/news_category_test.csv", "dbfs:/")
from pyspark.sql.functions import col
trainDataset = spark.read \
.option("header", True) \
.csv("/news_category_train.csv")
trainDataset.groupBy("category") \
.count() \
.orderBy(col("count").desc()) \
.show()
testDataset = spark.read \
.option("header", True) \
.csv("/news_category_test.csv")
testDataset.groupBy("category") \
.count() \
.orderBy(col("count").desc()) \
.show()
# actual content is inside description column
document = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
# we can also use a sentence detector here if we want to train on and get predictions for each sentence
use_embeddings = UniversalSentenceEncoder.pretrained('tfhub_use')\
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
# the classes/labels/categories are in category column
classsifierdl = ClassifierDLApproach()\
.setInputCols(["sentence_embeddings"])\
.setOutputCol("class")\
.setLabelColumn("category")\
.setMaxEpochs(5)\
.setEnableOutputLogs(True)
use_clf_pipeline = Pipeline(
stages = [
document,
use_embeddings,
classsifierdl
])
use_pipelineModel = use_clf_pipeline.fit(trainDataset)
# 5 epochs takes around 3 min
%sh cd ~/annotator_logs && ls -lt
%sh cat ~/annotator_logs/ClassifierDLApproach_ac9199b197d9.log
from sparknlp.base import LightPipeline
light_model = LightPipeline(use_pipelineModel)
text='''
Fearing the fate of Italy, the centre-right government has threatened to be merciless with those who flout tough restrictions.
As of Wednesday it will also include all shops being closed across Greece, with the exception of supermarkets. Banks, pharmacies, pet-stores, mobile phone stores, opticians, bakers, mini-markets, couriers and food delivery outlets are among the few that will also be allowed to remain open.
'''
result = light_model.annotate(text)
result['class']
light_model.annotate('the soccer games will be postponed.')
###Output
_____no_output_____
###Markdown
NerDL Graph
###Code
!pip -q install tensorflow==1.15.0
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/create_graph.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/dataset_encoder.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/ner_model.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/ner_model_saver.py
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/training/english/dl-ner/nerdl-graph/sentence_grouper.py
import sys
sys.path.append('/databricks/driver/')
sys.path.append('/databricks/driver/create_graph.py')
import create_graph
ntags = 12 # number of labels
embeddings_dim = 90
nchars =60
create_graph.create_graph(ntags, embeddings_dim, nchars)
%sh ls -la
###Output
_____no_output_____ |
running_with_less_data.ipynb | ###Markdown
Setup
###Code
from google.colab import drive
drive.mount("/content/gdrive")
! ls
%cd gdrive/MyDrive
!git clone https://github.com/ImaneChafi/label_representations
%cd label_representations
!git pull
!pip install torch
!pip install torchtoolbox
###Output
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (1.10.0+cu111)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch) (3.10.0.2)
Collecting torchtoolbox
Downloading torchtoolbox-0.1.8.2-py3-none-any.whl (84 kB)
[K |████████████████████████████████| 84 kB 2.5 MB/s
[?25hRequirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (3.13)
Requirement already satisfied: pyarrow in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (3.0.0)
Requirement already satisfied: lmdb in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (0.99)
Collecting transformers
Downloading transformers-4.13.0-py3-none-any.whl (3.3 MB)
[K |████████████████████████████████| 3.3 MB 30.2 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (1.19.5)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (1.0.1)
Requirement already satisfied: prettytable in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (2.4.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (4.62.3)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (1.15.0)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (2.7.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (1.4.1)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (from torchtoolbox) (4.1.2.30)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from prettytable->torchtoolbox) (4.8.2)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prettytable->torchtoolbox) (0.2.5)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->prettytable->torchtoolbox) (3.6.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->prettytable->torchtoolbox) (3.10.0.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->torchtoolbox) (3.0.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->torchtoolbox) (1.1.0)
Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (1.35.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (0.37.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (0.12.0)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (1.42.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (0.6.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (1.8.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (1.0.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (0.4.6)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (57.4.0)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (3.17.3)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (2.23.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->torchtoolbox) (3.3.6)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard->torchtoolbox) (4.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard->torchtoolbox) (4.2.4)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard->torchtoolbox) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->torchtoolbox) (1.3.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard->torchtoolbox) (0.4.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->torchtoolbox) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->torchtoolbox) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->torchtoolbox) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->torchtoolbox) (2021.10.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->torchtoolbox) (3.1.1)
Collecting huggingface-hub<1.0,>=0.1.0
Downloading huggingface_hub-0.2.1-py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 625 kB/s
[?25hRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers->torchtoolbox) (21.3)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers->torchtoolbox) (3.4.0)
Collecting sacremoses
Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)
[K |████████████████████████████████| 895 kB 73.1 MB/s
[?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers->torchtoolbox) (2019.12.20)
Collecting pyyaml
Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
[K |████████████████████████████████| 596 kB 63.4 MB/s
[?25hCollecting tokenizers<0.11,>=0.10.1
Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)
[K |████████████████████████████████| 3.3 MB 24.9 MB/s
[?25hRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers->torchtoolbox) (3.0.6)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers->torchtoolbox) (7.1.2)
Installing collected packages: pyyaml, tokenizers, sacremoses, huggingface-hub, transformers, torchtoolbox
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Successfully installed huggingface-hub-0.2.1 pyyaml-6.0 sacremoses-0.0.46 tokenizers-0.10.3 torchtoolbox-0.1.8.2 transformers-4.13.0
###Markdown
Running ResNet110 with Less Training Data Original Labels Category Labels - Seed 7
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label category --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label category --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label category --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label category --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label category --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label category --level 20
###Output
_____no_output_____
###Markdown
Category Labels - Seed 77
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label category --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label category --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label category --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label category --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label category --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label category --level 20
###Output
_____no_output_____
###Markdown
Category Labels - Seed 100
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label category --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label category --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label category --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label category --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label category --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label category --level 20
###Output
_____no_output_____
###Markdown
Speech Labels - Seed 7
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label speech --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label speech --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label speech --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label speech --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label speech --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 7 --label speech --level 20
###Output
_____no_output_____
###Markdown
Speech Labels - Seed 77
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label speech --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label speech --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label speech --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label speech --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label speech --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 77 --label speech --level 20
###Output
_____no_output_____
###Markdown
Speech Labels - Seed 100
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label speech --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label speech --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label speech --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label speech --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label speech --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --base_dir ./outputs/baseline/ --seed 100 --label speech --level 20
###Output
_____no_output_____
###Markdown
Chantel Labels Speech Labels - Seed 7
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 7 --label speech --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 7 --label speech --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 7 --label speech --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 7 --label speech --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 7 --label speech --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 7 --label speech --level 20
###Output
_____no_output_____
###Markdown
Speech Labels - Seed 77
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 77 --label speech --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 77 --label speech --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 77 --label speech --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 77 --label speech --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 77 --label speech --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 77 --label speech --level 20
###Output
_____no_output_____
###Markdown
Speech Labels - Seed 100
###Code
# 1 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 100 --label speech --level 1
# 2 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 100 --label speech --level 2
# 4 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 100 --label speech --level 4
# 8 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 100 --label speech --level 8
# 10 % of data
!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 100 --label speech --level 10
# 20 % of data
#!python train.py --model resnet110 --dataset cifar10 --label_dir ./labels/label_files/chantel --base_dir ./outputs/chantel/ --seed 100 --label speech --level 20
###Output
_____no_output_____
###Markdown
Save
###Code
!git add .
!git commit -m "ResNet110 trained on less data with chantel, original speech labels and categorical labels, for 1%, 2%, 4%, 8% and 10%"
uname = "MarieGuertin"
!git config --global user.email "[email protected]"
!git config --global user.name "Marie Guertin"
token = "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"
!git remote remove origin
!git remote add origin https://$uname:[email protected]/ImaneChafi/label_representations
! git push
###Output
_____no_output_____ |
source/classes/class_6/class_6.ipynb | ###Markdown
Class 6: Advanced `pandas` Currently, `pandas`' `Series` and `DataFrame` might seem to us as no more than tables with complicated indexing methods. In this lesson, we will learn more about what makes `pandas` so powerful and how we can use it to write efficient and readable code. ````{note}Some of the features described below only work with pandas >= 1.0.0. Make sure you have the latest pandas installation when running this notebook. To check the version of your pandas (or any other package), import it and print its `__version__` attribute:```python>>> import pandas as pd>>> print(pd.__version__)'1.2.0'``````` Missing Data The last question in the previous class pointed us to [working with missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html). But how and why do missing data occur?One option is pandas' index alignment, the property that makes sure that each value will have the same index throughout the entire computation process.
###Code
import pandas as pd
import numpy as np
A = pd.Series([2, 4, 6], index=[0, 1, 2])
B = pd.Series([1, 3, 5], index=[1, 2, 3])
A + B
###Output
_____no_output_____
###Markdown
The NaNs we have are what we call missing data, and this is how they are represented in pandas. We'll discuss that in more detail in a few moments.The same thing occurs with DataFrames:
###Code
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)),
columns=list('AB'))
A
B = pd.DataFrame(np.random.randint(0, 10, (3, 3)),
columns=list('BAC'))
B
new = A + B
print(new)
print(f"\nReturned dtypes:\n{new.dtypes}")
###Output
A B C
0 12.0 13.0 NaN
1 19.0 15.0 NaN
2 NaN NaN NaN
Returned dtypes:
A float64
B float64
C float64
dtype: object
###Markdown
```{note}Note how `new.dtypes` itself returns a `Series` of dtypes, with its own `object` dtype.``` The dataframe's shape is the shape of the larger dataframe, and the "extra" row (index 2) was filled with NaNs. Since we have NaNs, the data type of the column is implicitly converted to a floating point type. To have integer dataframes with NaNs, we have to explicitly say we want them available. More on that later. Another way to introduce missing data is through reindexing. If we "resample" our data we can achieve the following:
###Code
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],
columns=['one', 'two', 'three'])
df
df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
df2
###Output
_____no_output_____
###Markdown
But what is `NaN`? Is it the same as `None`? To better answer the former, let's first have a closer look at the latter. The `None` object `None` is the standard null value in Python, and is used extensively in normal usage of the language. For example, functions that don't have a `return` statement, implicitly return `None`. While `None` can be used as a missing data type, it's probably not the best choice.
###Code
vals1 = np.array([1, None, 3, 4])
vals1
###Output
_____no_output_____
###Markdown
The `dtype` is `object`, because the best common type of `int`s and a `None` is a Python `object`. This slows down computation time on these arrays:
###Code
for dtype in ['object', 'int']:
print("dtype =", dtype)
%timeit np.arange(1E6, dtype=dtype).sum()
print()
###Output
dtype = object
54.7 ms ± 3.97 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
dtype = int
2.12 ms ± 289 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
If you recall from a couple of lessons ago, the performance of `object` arrays is very similar to that of standard lists (generally speaking, the two data structures are effectively identical). Another thing we can't do is aggregation:
###Code
vals1.sum()
###Output
_____no_output_____
###Markdown
The `NaN` value `NaN` is a special floating-point value recognized by all programming languages that conform to the IEEE standard (which means most of them). As we mentioned before, it forces the entire array to have a floating point type:
###Code
vals2 = np.array([1, np.nan, 3, 4])
vals2.dtype
###Output
_____no_output_____
###Markdown
Creating floating point arrays is very fast, so performance isn't hindered. NaN is sometimes described as a "data virus", since it infects objects it touches:
###Code
1 + np.nan
0 * np.nan
vals2.sum(), vals2.min(), vals2.max()
np.nan == np.nan
###Output
_____no_output_____
###Markdown
Numpy has `nan`-aware counterparts to many of its aggregation functions, which can work with NaNs correctly. They usually have the same name as their non-NaN sibling, but with the "nan" prefix:
###Code
print(np.nansum(vals2))
print(np.nanmean(vals2))
###Output
_____no_output_____
###Markdown
However, pandas objects account for NaNs in their calculations, as we'll soon see.Pandas can handle both `NaN` and `None` interchangeably:
###Code
ser = pd.Series([1, np.nan, 2, None])
ser
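# pandas aggregations skip missing values by default (skipna=True):
ser.sum()  # returns 3.0, ignoring both the NaN and the None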
###Output
_____no_output_____
###Markdown
The `NaT` value When dealing with datetime values or indices, the missing value is represented as `NaT`, or not-a-time:
###Code
df['timestamp'] = pd.Timestamp('20180101')
df
df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
df2
###Output
_____no_output_____
###Markdown
Operations and calculations with missing data
###Code
a = pd.DataFrame(np.random.random((5, 2)), columns=['one', 'two'])
a.iloc[1, 1] = np.nan
a
b = pd.DataFrame(np.random.random((6, 3)), columns=['one', 'two', 'three'])
b.iloc[2, 2] = np.nan
b
a + b
###Output
_____no_output_____
###Markdown
As we see, missing values propagate naturally through these arithmetic operations. Statistics also works:
###Code
(a + b).describe()
# Summation - NaNs are zero.
# If everything is NaN, sum() returns 0 by default (pass min_count=1 to get NaN instead); the mean of an all-NaN column is NaN.
# pandas' cumsum and cumprod ignore NaNs but preserve them in the resulting arrays.
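# for example:
(a + b).sum()     # column-wise sums, with NaNs skipped
(a + b).cumsum()  # the running sum skips NaNs but keeps them in place in the result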
###Output
_____no_output_____
###Markdown
We can also receive a boolean mask of the NaNs in a dataframe:
###Code
mask = (a + b).isnull() # also isna(), and the opposite .notnull()
mask
###Output
_____no_output_____
###Markdown
Filling missing values The simplest option is to use the `fillna` method:
###Code
summed = a + b
summed.iloc[4, 0] = np.nan
summed
summed.fillna(0)
summed.fillna('missing') # changed dtype to "object"
summed.fillna(method='pad') # The NaN column remained the same, but values were propagated forward
# We can also use the "backfill" method to fill in values to the back
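summed.fillna(method='backfill')  # for example: each NaN takes the next valid value below it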
summed.fillna(method='pad', limit=1) # No more than one padded NaN in a row
summed.fillna(summed.mean()) # each column received its respective mean. The NaN column is untouched.
###Output
_____no_output_____
###Markdown
Dropping missing values We've already seen in the short exercise the `dropna` method, that allows us to drop missing values:
###Code
summed
filled = summed.fillna(summed.mean())
filled
filled.dropna(axis=1) # each column containing NaN is dropped
filled.dropna(axis=0) # each row containing a NaN is dropped
###Output
_____no_output_____
###Markdown
Interpolation The last way to fill in missing values is through [interpolation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html). The default interpolation methods perform linear interpolation on the data, based on its ordinal index:
###Code
summed
summed.interpolate() # notice all the details in the interpolation of the three columns
###Output
_____no_output_____
###Markdown
We can also interpolate with the actual index values in mind:
###Code
# Create "missing" index
timeindex = pd.Series(['1/1/2018', '1/4/2018', '1/5/2018', '1/7/2018', '1/8/2018'])
timeindex = pd.to_datetime(timeindex)
data_to_interp = [1, np.nan, 5, np.nan, 8]
df_to_interp = pd.DataFrame(data_to_interp, index=timeindex)
df_to_interp
df_to_interp.interpolate() # the index values aren't taken into account
df_to_interp.interpolate(method='index') # notice how the data obtains the "right" values
###Output
_____no_output_____
###Markdown
Pandas has many other interpolation methods, based on SciPy's.
###Code
df_inter_2 = pd.DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],
'B': [.25, np.nan, np.nan, 4, 12.2, 14.4]})
df_inter_2
df_inter_2.interpolate(method='polynomial', order=2)
###Output
_____no_output_____
###Markdown
Missing Values in Non-Float Columns Starting from pandas v1.0.0 pandas gained support for NaN values in non-float columns. This feature is a bit experimental currently, so the default behavior still converts integers to floats for example, but the support is there if you know where to look. By default:
###Code
nanint = pd.Series([1, 2, np.nan, 4])
nanint # the result has a dtype of float64 even though all numbers are integers.
###Output
_____no_output_____
###Markdown
We can try to force pandas' hand here, but it won't work:
###Code
nanint = pd.Series([1, 2, np.nan, 4], dtype="int32")
###Output
_____no_output_____
###Markdown
To our rescue comes the new `pd.Int32Dtype`:
###Code
nanint = pd.Series([1, 2, np.nan, 4], dtype="Int32")
nanint
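# aggregations still skip the missing value:
nanint.sum()  # returns 7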
###Output
_____no_output_____
###Markdown
It worked! We have a series with integers and a missing value! Notice the changes we had to make:1. The `NaN` is `<NA>` now. It's actually a new type of `NaN` called `pd.NA`.2. The data type had to be mentioned explicitly, meaning that the conversion will work only if we know in advance that we'll have NA values.3. The data type is `Int32`. It's CamelCase and it's actually a class underneath. Standard datatypes are lowercase.Caveats aside, this is definitely useful for scientists who sometimes have integer values and do not want to convert them to float to support NAs.
###Code
import matplotlib.pyplot as plt
from myst_nb import glue
n_cycles = 10
n_samples = 10000
amplitude = 3
phase = np.pi / 4
end = 2 * np.pi * n_cycles
x = np.linspace(0, end, num=n_samples)
y = amplitude * np.sin(x + phase)
chosen_idx = np.random.choice(n_samples, size=100, replace=False)
data = pd.DataFrame(np.nan, index=x, columns=['raw'])
data.iloc[chosen_idx, 0] = y[chosen_idx]
# plotting
fig1, ax1 = plt.subplots()
ax1.set_title('Raw Data')
data.raw.plot(marker='o', ax=ax1)
data['lin_inter'] = data.raw.interpolate(method='index')
fig2, ax2 = plt.subplots()
ax2.set_title('Linear Interpolation')
data.lin_inter.plot(marker='o', ax=ax2)
data['quad_inter'] = data.raw.interpolate(method='quadratic')
fig3, ax3 = plt.subplots()
ax3.set_title('Quadratic Interpolation')
data.quad_inter.plot(marker='o', ax=ax3)
glue("fig1", fig1, display=False)
glue("fig2", fig2, display=False)
glue("fig3", fig3, display=False)
###Output
_____no_output_____
###Markdown
`````{admonition} Exercise: Missing Data* Create a vector of 10000 measurements from a 10-cycle sinus wave. Remember that a single period of sine starts at 0 and ends at 2$\pi$, so 10 periods span between 0 and 20$\pi$.````{dropdown} Solution```pythonn_cycles = 10n_samples = 10000amplitude = 3phase = np.pi / 4end = 2 * np.pi * n_cyclesx = np.linspace(0, end, num=n_samples)y = amplitude * np.sin(x + phase)```````* Using `np.random.choice(replace=False)` sample 100 points from the wave and place them in a Series.````{dropdown} Solution```pythonchosen_idx = np.random.choice(n_samples, size=100, replace=False)data = pd.DataFrame(np.nan, index=x, columns=['raw'])data.iloc[chosen_idx, 0] = y[chosen_idx]```````* Plot the chosen points.````{dropdown} Solution```pythonfig1, ax1 = plt.subplots()ax1.set_title('Raw data pre-interpolation')data.raw.plot(marker='o', ax=ax1)``````{glue:figure} fig1 :figwidth: 500px```````* Interpolate the points using linear interpolation and plot them on a different graph.````{dropdown} Solution```pythondata['lin_inter'] = data.raw.interpolate(method='index')fig2, ax2 = plt.subplots()ax2.set_title('Linear interpolation')data.lin_inter.plot(marker='o', ax=ax2)``````{glue:figure} fig2 :figwidth: 500px```````* Interpolate the points using quadratic interpolation and plot them on a different graph. ````{dropdown} Solution```pythondata['quad_inter'] = data.raw.interpolate(method='quadratic')fig3, ax3 = plt.subplots()ax3.set_title('Quadratic interpolation')data.quad_inter.plot(marker='o', ax=ax3)``````{glue:figure} fig3 :figwidth: 500px```````````` Categorical Data So far, we've used examples with quantitative data. Let's now have a look at [categorical data](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html), i.e. data can only have one of a specific set, or categories, of values. For example, if we have a column which marks the weekday, then it can obviously only be one of seven options. Same for boolean data, colors, and other examples. These data columns should be marked as "categorical" to reduce memory consumption and improve performance. It also tells the code readers more about the nature of that data column. The easiest way to create a categorical variable is to declare it as such, or to convert as existing column to a categorical data type:
###Code
s = pd.Series(["a", "b", "c", "a"], dtype="category")
s
df = pd.DataFrame({"A": ["a", "b", "c", "a"]})
df["B"] = df["A"].astype("category")
print(f"DataFrame:\n{df}")
print(f"\nData types:\n{df.dtypes}")
###Output
DataFrame:
A B
0 a a
1 b b
2 c c
3 a a
Data types:
A object
B category
dtype: object
###Markdown
We can also force order between our categories, or force specific categories on our data, using the special CategoricalDtype.
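For illustration, a minimal sketch (the size labels below are made-up example values, not part of the lesson's data):
###Code
from pandas.api.types import CategoricalDtype
size_type = CategoricalDtype(categories=["small", "medium", "large"], ordered=True)
sizes = pd.Series(["large", "small", "medium", "small"]).astype(size_type)
sizes.sort_values()  # sorted by the declared category order, not alphabetically
###Output
_____no_output_____
###Markdown
As we said, memory usage is reduced when working with categorical data: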
###Code
df_obj = pd.DataFrame({'a': np.random.random(10_000), 'b': ['a'] * 10_000})
df_obj
df_cat = pd.DataFrame({'a': df_obj['a'], 'b': df_obj['b'].astype('category')})
df_cat
df_obj.memory_usage()
df_cat.memory_usage()
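# for object columns, memory_usage(deep=True) also counts the Python string objects themselves,
# which makes the saving from the 'category' dtype even more pronounced:
df_obj.memory_usage(deep=True)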
###Output
_____no_output_____
###Markdown
A factor of 8 in memory reduction. Hierarchical Indexing Last time we mentioned that while a DataFrame is inherently a 2D object, it can contain multi-dimensional data. The way a DataFrame (and a Series) does that is with [hierarchical indexing](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html), or sometimes Multi-Indexing. Simple Example: Temperature in a Grid In this example, our data is the temperature sampled across a 2-dimensional grid. First, we need to generate the required set of indices, $(x, y)$, which point to a specific location inside the square. These coordinates can then be assigned the designated temperature values. A list of such coordinates can be a simple `Series`:
###Code
values = np.array([1.2, 0.8, 3.1, 0.1, 0.05, 1, 1.4, 2.1, 2.9])
coords = [('r0', 'c0'), ('r0', 'c1'), ('r0', 'c2'),
('r1', 'c0'), ('r1', 'c1'), ('r1', 'c2'),
('r2', 'c0'), ('r2', 'c1'), ('r2', 'c2')] # r is row, c is column
points = pd.Series(values, index=coords, name='temperature')
points
###Output
_____no_output_____
###Markdown
It is important we understand that this is a series because _the data is one-dimensional_. The actual data is contained in that rightmost column, a one-dimensional array. We do have two coordinates for each point, but the data itself, the temperature, is one-dimensional.Currently, the index is a simple tuple of coordinates. It's a single column, containing tuples. Pandas can help us to index this data in a more intuitive manner, using a MultiIndex object.
###Code
mindex = pd.MultiIndex.from_tuples(coords)
mindex
###Output
_____no_output_____
###Markdown
We received something which looks quite similar to the list of tuples we had before, but it's a [`MultiIndex`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.html) instance. Let's see how it helps us by `reindex`ing our data with it:
###Code
points = points.reindex(mindex)
points
###Output
_____no_output_____
###Markdown
This looks good. Each index level is represented by a column, with the data being the last one. The "missing" values indicate that the value in that cell is the same as the value above it. As you might have guessed, accessing the data is now much more intuitive. Let's look at the values of all the points in the first row, `r0`:
###Code
points.loc['r0', :] # .loc() is label-based indexing
###Output
_____no_output_____
###Markdown
Or the values of points in the second column:
###Code
points.loc[:, 'c1']
points.loc[:, :] # all values - each level of the index has its own colon (:)
###Output
_____no_output_____
###Markdown
Note that `.iloc` disregards the MultiIndex, treating our data as a simple one-dimensional vector (as it actually is):
###Code
points.iloc[6]
# points.iloc[0, 1] # ERRORS
###Output
_____no_output_____
###Markdown
Besides making the syntax cleaner, these slicing operations are as efficient as their single-dimension counterparts. It should be clear that a MultiIndex can have more than two levels. Modelling a 3D cube (with the temperatures inside it) is as easy as:
###Code
values3d = np.array([1.2, 0.8,
3.1, 0.1,
0.05, 1,
1.4, 2.1,
2.9, 0.3,
2.4, 1.9])
# 3D coordinates with a shape of (r, c, z) = (3, 2, 2)
coords3d = [('r0', 'c0', 'z0'), ('r0', 'c0', 'z1'),
('r0', 'c1', 'z0'), ('r0', 'c1', 'z1'),
('r1', 'c0', 'z0'), ('r1', 'c0', 'z1'),
('r1', 'c1', 'z0'), ('r1', 'c1', 'z1'),
('r2', 'c0', 'z0'), ('r2', 'c0', 'z1'),
('r2', 'c1', 'z0'), ('r2', 'c1', 'z1')] # we'll soon see an easier way to create this index
cube = pd.Series(values3d, index=pd.MultiIndex.from_tuples(coords3d), name='temp_cube')
cube
###Output
_____no_output_____
###Markdown
We can even name the individual levels, which helps with some slicing operations we'll see below:
###Code
cube.index.names = ['x', 'y', 'z']
cube
###Output
_____no_output_____
###Markdown
Again, you have to remember that this is one-dimensional data, with a three-dimensional index. In statistical term, we might term the indices a fixed, independent categorical variable, while the values are the dependent variable. Pandas actually has a [`CategoricalIndex`](https://pandas.pydata.org/docs/reference/api/pandas.CategoricalIndex.html) object which you'll meet in one of your future homework assignments (but don't be afraid to hit the link and check it out on your own if you just can't wait). More on extra dimensions In the previous square example, it's very appealing to ditch the MultiIndex altogether and just work with a dataframe, or even a simple NumPy array. This is because the two indices represented rows and columns. A quick way to turn one representation into the other is the [`stack()`\\`unstack()`](https://pandas.pydata.org/docs/user_guide/reshaping.html) method:
###Code
points.index.names = ['rows', 'columns']
points
pts_df = points.unstack()
pts_df
pts_df.stack() # back to a series
###Output
_____no_output_____
###Markdown
If we want to turn the indices into "real" columns, we can use the `reset_index()` method:
###Code
pts_df_reset = points.reset_index()
pts_df_reset
###Output
_____no_output_____
###Markdown
So why bother with these (you haven't seen nothing yet) complicated multi-indices?As you might have guessed, adding data points, i.e. increasing the dimensionality of the data, is very easy and intuitive. Data remains aligned through addition and deletion of data. Moreover, treating these categorical variables as an index can help the mental modeling of the problem, especially when you wish to perform statistical modeling with your analysis. Constructing a MultiIndex Creating a hierarchical index can be done in several ways:
###Code
pd.MultiIndex.from_arrays([['a', 'a', 'b', 'b'], [1, 2, 1, 2]])
pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)])
pd.MultiIndex.from_product([['a', 'b'], [1, 2]]) # Cartesian product
###Output
_____no_output_____
###Markdown
The most common way to construct a MultiIndex, though, is to add one of the columns of the dataframe to the existing index. We'll see how it's done below. Another important note is that with dataframes, the row and column indices are symmetric. In effect this means that the columns can also contain a MultiIndex:
###Code
index = pd.MultiIndex.from_product([[2013, 2014], [1, 2]],
names=['year', 'visit'])
columns = pd.MultiIndex.from_product([['Bob', 'Guido', 'Sue'], ['HR', 'Temp']],
names=['subject', 'type'])
# mock some data
data = np.round(np.random.randn(4, 6), 1)
data[:, ::2] *= 10
data += 37
# create the DataFrame
health_data = pd.DataFrame(data, index=index, columns=columns)
health_data
###Output
_____no_output_____
###Markdown
This sometimes might seem too much, and so usually people prefer to keep the column index as a simple list of names, moving any nestedness to the row index. This is due to the fact that usually columns represent the measured dependent variable.
###Code
index = pd.MultiIndex.from_product([[2013, 2014], [1, 2], ['Bob', 'Guido', 'Sue']],
names=['year', 'visit', 'subject'])
columns = ['HR', 'Temp']
# mock some data
data = np.round(np.random.randn(12, 2), 1)
data[:, ::2] *= 10
data += 37
# create the DataFrame
health_data_row = pd.DataFrame(data, index=index, columns=columns)
health_data_row
###Output
_____no_output_____
###Markdown
Creating a MultiIndex from a data column While all of the above methods work, and could be useful sometimes, the most common method of creating an index is from an existing data column.
###Code
location = ['AL', 'AL', 'NY', 'NY', 'NY', 'VA']
day = ['SUN', 'SUN', 'TUE', 'WED', 'SAT', 'SAT']
temp = [12.3, 14.1, 21.3, 20.9, 18.8, 16.5]
humidity = [31, 45, 41, 41, 49, 52]
states = pd.DataFrame(dict(location=location, day=day,
temp=temp, humidity=humidity))
states
states.set_index(['day'])
states.set_index(['day', 'location'])
states.set_index(['day', 'location'], append=True)
states.set_index([['i', 'ii', 'iii', 'iv', 'v', 'vi'], 'day'])
###Output
_____no_output_____
###Markdown
Indexing and Slicing a MultiIndex We'll use these dataframes as an example:
###Code
health_data
health_data_row
###Output
_____no_output_____
###Markdown
If all we wish to do is to examine a column, indexing is very easy. Don't forget the dataframe as dictionary analogy:
###Code
health_data['Guido'] # works for the column MultiIndex as expected
health_data_row['HR'] # that's a Series!
###Output
_____no_output_____
###Markdown
Accessing single elements is also pretty straight-forward:
###Code
health_data_row.loc[2013, 1, 'Guido'] # index triplet
###Output
_____no_output_____
###Markdown
We can even slice easily using the first `MultiIndex` (year in our case):
###Code
health_data_row.loc[2013:2017] # 2017 doesn't exist, but Python's slicing rules prevent an exception here
# health_data_row.loc[1] # doesn't work
###Output
_____no_output_____
###Markdown
Slicing is a bit more difficult when we want to take into account all available indices. This is due to the possible conflicts between the different indices and the columns.Assuming we want to look at all the years, with all the visits, only by Bob - we would want to write something like this:
###Code
health_data_row.loc[(:, :, 'Bob'), :] # doesn't work
###Output
_____no_output_____
###Markdown
This pickle can be solved in two ways. The first option is the [`slice`](https://www.programiz.com/python-programming/methods/built-in/slice) object:
###Code
bobs_data = (slice(None), slice(None), 'Bob') # all years, all visits, of Bob
health_data_row.loc[bobs_data, 'HR']
# arr[slice(None), 1] is the same as arr[:, 1]
row_idx = (slice(None), slice(None), slice('Bob', 'Guido')) # all years, all visits, Bob + Guido
health_data_row.loc[row_idx, 'HR']
###Output
_____no_output_____
###Markdown
Another option is the [`IndexSlice`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.IndexSlice.html) object:
###Code
idx = pd.IndexSlice
health_data_row.loc[idx[:, :, 'Bob'], :] # very close to the naive implementation
idx2 = pd.IndexSlice
health_data_row.loc[idx2[2013:2015, 1, 'Bob':'Guido'], 'Temp']
###Output
_____no_output_____
###Markdown
Finally, there's one more way to index into a `MultiIndex` which is very straightforward and explicit: the [cross-section](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html).
###Code
health_data_row.xs(key=(2013, 1), level=('year', 'visit'))
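# a cross-section can also be taken over a single level, e.g. all of Bob's visits:
health_data_row.xs('Bob', level='subject')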
###Output
_____no_output_____
###Markdown
Small caveat: unsorted indices Having an unsorted index in your `MultiIndex` might make the interpreter pop a few exceptions at you:
###Code
# char index in unsorted
index = pd.MultiIndex.from_product([['a', 'c', 'b'], [1, 2]])
data = pd.Series(np.random.rand(6), index=index)
data.index.names = ['char', 'int']
data
data['a':'b']
###Output
_____no_output_____
###Markdown
`lexsort` means "lexicographically sorted", i.e. sorted by number or letter. Sorting an index is done with the [`sort_index()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html) method:
###Code
data.sort_index(inplace=True)
print(data)
print(data['a':'b']) # now it works
###Output
char int
a 1 0.803176
2 0.634355
b 1 0.727000
2 0.495458
c 1 0.888520
2 0.023923
dtype: float64
char int
a 1 0.803176
2 0.634355
b 1 0.727000
2 0.495458
dtype: float64
###Markdown
Data Aggregation Data aggregation using a `MultiIndex` is super simple:
###Code
states
states.set_index(['location', 'day'], inplace=True)
states
states.mean(level='location')
states.median(level='day')
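# the same aggregations can also be written with groupby on an index level:
states.groupby(level='location').mean()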
###Output
_____no_output_____ |
Homeworks/HW2/HW2-PlotConfirmedCases-solution.ipynb | ###Markdown
HW2 - Plotting with `matplotlib` **Spring 2020 | Python for Neuroscientists** This HW will focus on plotting with the `matplotlib` library. We will use the coronavirus data, which can be downloaded as a csv file from the following Johns Hopkins University website. The data is updated daily here: https://data.humdata.org/dataset/novel-coronavirus-2019-ncov-cases This data is simply cumulative confirmed cases for each country.
###Code
# pandas
import pandas as pd
# matplotlib
import matplotlib.pyplot as plt
# seaborn
import seaborn as sns
###Output
_____no_output_____
###Markdown
Load the csv fileMake sure that the csv file and the .ipynb are in the same directory
###Code
df = pd.read_csv('time_series_covid19_confirmed_global.csv') # this is pandas function
###Output
_____no_output_____
###Markdown
This loads a table, or pandas "dataframe." We will use pandas to extract data from the table. We can look at the contents of the table:
###Code
df
###Output
_____no_output_____
###Markdown
Let's look at the stats for Italy
###Code
df_italy = df[df['Country/Region'] == 'Italy'] # this is one way to select Italy from the dataframe
df_italy.head() # this displays the first five rows of the dataframe
df_italy1 = df_italy.set_index('Country/Region', drop = True) # set pandas dataframe index to country
df_italy2 = df_italy1.drop(columns=['Lat','Long','Province/State'])
data_italy = df_italy2.loc['Italy','2/20/20':'4/13/20'] # select dates through April 13, 2020
###Output
_____no_output_____
###Markdown
Note that it would be cleaner to write the following:> `df_italy = df_italy.set_index('Country/Region', drop = True)`> `df_italy = df_italy.drop(columns=['Lat','Long','Province/State'])`> `data_italy = df_italy.loc['Italy','2/20/20':'4/13/20']` Part 1: Plot the Data from Italy
###Code
plt.plot(data_italy.values,'.-')
plt.xlabel('days')
plt.show()
###Output
_____no_output_____
###Markdown
1a Change the color of the line
###Code
plt.plot(data_italy.values,'.-',color='r')
plt.xlabel('days')
plt.show()
###Output
_____no_output_____
###Markdown
1b Change the thickness of the line
###Code
plt.plot(data_italy.values,'.-',color='r',linewidth=3)
plt.xlabel('days')
plt.show()
###Output
_____no_output_____
###Markdown
1c Add a label the y-axis
###Code
plt.plot(data_italy.values,'.-',color='r',linewidth=3)
plt.xlabel('days')
plt.ylabel('confirmed cases')
plt.show()
###Output
_____no_output_____
###Markdown
1d Add a title
###Code
plt.plot(data_italy.values,'-',color='r',linewidth=3)
plt.xlabel('days')
plt.ylabel('confirmed cases')
plt.title('Italy Confirmed Cases')
plt.show()
###Output
_____no_output_____
###Markdown
1e Resize the figure. Note that we can plot with the calendar date using `data_italy.index`
###Code
# resize this figure here
fig = plt.figure(figsize=(10,5))
plt.plot(data_italy.index,data_italy.values,'-',linewidth=3)
plt.xticks(rotation=90) # rotate the xticks so text is not so tight
plt.xlabel('calendar date')
plt.ylabel('confirmed cases')
plt.title('Italy Confirmed Cases')
plt.show()
###Output
_____no_output_____
###Markdown
Part 2: Plotting US and Italy Data. We can extract the data for the US from the dataframe
###Code
df_us = df[df['Country/Region'] == 'US']
df_us = df_us.set_index('Country/Region', drop = True) # set pandas dataframe index to country
df_us = df_us.drop(columns=['Lat','Long','Province/State'])
data_us = df_us.loc['US','2/28/20':'4/13/20']
###Output
_____no_output_____
###Markdown
2a: Plot Italy and US data in two axes. Use subplots and assign separate colors to each country. Set the same limits on the y-axis for both subplots.
###Code
# use
fig,ax = plt.subplots(1,2,figsize=(20,4)) # or whatever dimensions you like
ax[0].plot(data_italy.index,data_italy.values,'-',linewidth=3)
ax[1].plot(data_us.index,data_us.values,'-',linewidth=3)
ax[0].set_title('Italy Confirmed Cases')
ax[1].set_title('US Confirmed Cases')
for a in ax:
a.set_xlabel('calendar date')
a.set_ylabel('confirmed cases')
plt.xticks(rotation=90) # rotate the xticks so text is not so tight << this is tricky
plt.show()
###Output
_____no_output_____
###Markdown
2b: Plot Italy and US data in the same plot. Use the same colors as above, and include a legend.
###Code
fig,ax = plt.subplots(1,1,figsize=(12,4)) # or whatever dimensions you like
ax.plot(data_italy.index,data_italy.values,'-',linewidth=3,label='Italy')
ax.plot(data_us.index,data_us.values,'-',linewidth=3,label='US')
ax.set_title('Confirmed Cases')
ax.set_xlabel('Date')
ax.set_ylabel('Confirmed Cases')
plt.xticks(rotation=90)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2c Plot Italy and US data on a log axis. Note that we only want the y-axis to be logarithmic.
###Code
fig,ax = plt.subplots(1,1,figsize=(12,8)) # or whatever dimensions you like
ax.plot(data_italy.index,data_italy.values,'-',linewidth=3,label='Italy')
ax.plot(data_us.index,data_us.values,'-',linewidth=3,label='US')
ax.set_title('Confirmed Cases')
ax.set_xlabel('Date')
ax.set_ylabel('Confirmed Cases')
plt.xticks(rotation=90)
plt.yscale('log')
plt.grid(which='major')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2d Make the same plot as above, but using a `for` loop
###Code
data_list = [data_italy,data_us]
data_names = ['Italy','US']
fig,ax = plt.subplots(1,1,figsize=(12,8)) # or whatever dimensions you like
for data,name in zip(data_list,data_names):
ax.plot(data.index,data.values,'-',linewidth=3,label=name)
ax.set_title('Confirmed Cases')
ax.set_xlabel('Date')
ax.set_ylabel('Confirmed Cases')
plt.xticks(rotation=90)
plt.yscale('log')
plt.grid(which='major')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Part 3: Plot Data for 5 Countries. Use a for loop, and include x/y labels and a legend. Also save your figure as a jpg and share it with friends. You are now a Python data scientist.
###Code
country_names = ['US', 'Germany','Italy','Brazil','Argentina']
fig,ax = plt.subplots(1,1,figsize=(12,8)) # or whatever dimensions you like
for name in country_names:
df_t = df[df['Country/Region'] == name]
df_t = df_t.set_index('Country/Region', drop = True) # set pandas dataframe index to country
df_t = df_t.drop(columns=['Lat','Long','Province/State'])
data = df_t.loc[name,'2/28/20':'4/13/20']
ax.plot(data.index,data.values,'-',linewidth=3,label=name)
ax.set_title('Confirmed Cases')
ax.set_xlabel('Date')
ax.set_ylabel('Confirmed Cases')
plt.xticks(rotation=90)
#plt.yscale('log')
plt.grid(which='major')
plt.legend()
plt.show()
# save your figure
# fig.savefig('myplot.jpg')
###Output
_____no_output_____
###Markdown
Bonus: Seaborn. Plot whatever you like using seaborn.
###Code
df
df_t = df[df['Country/Region'] == 'Germany']
df_t
df_t = df[df['Country/Region'] == 'France']
df_t
france_all = df_t.sum()
france_all
data = france_all['1/22/20':'4/13/20']
fig = plt.figure(figsize=(15,4))
plt.plot(data)
plt.xticks(rotation=90)
plt.title('France')
plt.show()
###Output
_____no_output_____ |
price_predict.ipynb | ###Markdown
Price Prediction. This Jupyter notebook addresses how to predict listing price by building a linear regression model. Read Data
###Code
import pandas as pd
import numpy as np
data_listing=pd.read_csv('listings.csv')
data_listing.head()
###Output
_____no_output_____
###Markdown
We extract the columns relevant to predicting price from the raw dataset into listing_var, and list the numerical and categorical variables in numerical_var and categorical_var, respectively.
###Code
listing_var=['host_response_time','host_response_rate','host_is_superhost','host_listings_count','zipcode','property_type',
'room_type','accommodates','bathrooms','bedrooms','price','security_deposit','cleaning_fee','minimum_nights','availability_30',
'availability_90','availability_60','availability_365','number_of_reviews','review_scores_rating','instant_bookable',
'cancellation_policy','reviews_per_month']
numerical_var=['host_response_rate','host_listings_count','accommodates','bathrooms','bedrooms','price','security_deposit',
'cleaning_fee','minimum_nights','availability_30','availability_90','availability_60','availability_365','number_of_reviews',
'review_scores_rating','reviews_per_month']
categorical_var=['host_response_time','host_is_superhost','zipcode','property_type','room_type','instant_bookable','cancellation_policy']
print ('The number of columns extracted to predict price is',len(listing_var))
print ('The number of numerical columns is',len(numerical_var))
print ('The number of categorical columns is',len(categorical_var))
listingdata=data_listing[listing_var]
listingdata.head()
###Output
_____no_output_____
###Markdown
Data Preparation We found that 'price', 'cleaning_fee' and 'host_response_rate' have the 'object' dtype rather than a numerical dtype. This is because they contain characters such as '$', ',' and '%'. We need to remove these characters and convert the columns to a numerical dtype.
###Code
listingdata.info()
def removechar(string):
    '''
    INPUT:
    string - original string with characters to be removed
    OUTPUT:
    num - the numerical value after removing characters like '$', '%' and ','
    Remove characters like '$', '%' and ',' from the original value and transform
    the 'object' dtype into a numerical dtype.
    '''
    num = None
    if str(string).find('$') >= 0:
        # strip the leading '$' and the cents, then drop any thousands separators
        string1 = str(string)[str(string).find('$') + 1:str(string).find('.')]
        num = int(string1.replace(',', ''))
        return num
    elif str(string).find('%') > 0:
        # convert a percentage such as '96%' into the fraction 0.96
        string1 = str(string)[:str(string).find('%')]
        num = float(string1) / 100
        return num
cf_col=listingdata['cleaning_fee'].map(removechar)
pr_col=listingdata['price'].map(removechar)
hrr_col=listingdata['host_response_rate'].map(removechar)
#insert the charaters removed numerical columns in the dataset
listingdata.insert(loc=listingdata.shape[1],column='cleaning_fee_corr',value=cf_col)
listingdata.insert(loc=listingdata.shape[1],column='price_corr',value=pr_col)
listingdata.insert(loc=listingdata.shape[1],column='host_response_rate_corr',value=hrr_col)
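# Note: an equivalent, more idiomatic pandas approach (shown only as a sketch, not used
# below) is to strip the characters in a vectorized way and cast the dtype directly.
# This assumes '$', ',' and '%' are the only non-numeric characters in these columns.
price_alt = listingdata['price'].str.replace('[$,]', '', regex=True).astype(float)
rate_alt = listingdata['host_response_rate'].str.replace('%', '', regex=False).astype(float) / 100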
###Output
_____no_output_____
###Markdown
Check the new columns after removing the characters, and drop the original 'cleaning_fee', 'price' and 'host_response_rate' columns.
###Code
listingdata[['cleaning_fee','price','host_response_rate','cleaning_fee_corr','price_corr','host_response_rate_corr',]].head()
#drop the original columns of 'cleaning_fee','price','host_response_rate'
listingdata=listingdata.drop(['cleaning_fee','price','host_response_rate'],axis=1)
print (listingdata.shape)
###Output
(3818, 23)
###Markdown
Handle the missing values
###Code
#check the missing values
def missingcheck(df):
missingpercentage=(df.shape[0]-df.count())/df.shape[0]
return missingpercentage
print ('Check the missing values in each column \n',missingcheck(listingdata).sort_values(ascending=False))
###Output
Check the missing values in each column
security_deposit 0.511262
cleaning_fee_corr 0.269775
review_scores_rating 0.169460
reviews_per_month 0.164222
host_response_rate_corr 0.136983
host_response_time 0.136983
bathrooms 0.004191
zipcode 0.001833
bedrooms 0.001572
host_is_superhost 0.000524
host_listings_count 0.000524
property_type 0.000262
room_type 0.000000
accommodates 0.000000
instant_bookable 0.000000
number_of_reviews 0.000000
cancellation_policy 0.000000
minimum_nights 0.000000
price_corr 0.000000
availability_90 0.000000
availability_60 0.000000
availability_365 0.000000
availability_30 0.000000
dtype: float64
###Markdown
The missing percentage of 'security_deposit' is >50%, so we remove that column. We have two choices for dealing with NAs: we can drop rows containing any NAs, or we can fill NAs with the column mean. I tried both methods and found that filling NAs with the mean gives a better r2 score for the LinearRegression model. So in the next step, we fill NAs with the column mean.
###Code
listingdata=listingdata.drop('security_deposit',axis=1)
#listingdata=listingdata.dropna(how='any',axis=0)
print ('The listingdata after dropping the security_deposit column has {} rows and {} columns'.format(listingdata.shape[0],listingdata.shape[1]))
numerical_var1=['host_response_rate_corr','price_corr','cleaning_fee_corr',
'host_listings_count','accommodates','bathrooms','bedrooms',
'minimum_nights','availability_30','availability_90','availability_60','availability_365','number_of_reviews',
'review_scores_rating','reviews_per_month']
#fill NAs with column mean
missingfill=lambda col:col.fillna(col.mean())
listingdata[numerical_var1]=listingdata[numerical_var1].apply(missingfill,axis=0)
listingdata[numerical_var1].head()
###Output
_____no_output_____
###Markdown
Data Exploration In the correlation matrix of the numerical columns we see that some columns are highly correlated: 'availability_30' and 'availability_90' have a correlation coefficient of 0.88, 'availability_30' and 'availability_60' have a correlation coefficient of 0.94, and 'accommodates' and 'bedrooms' have a correlation coefficient of 0.77.
###Code
#cooralation matrix
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
corrmat = listingdata[numerical_var1].corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True,annot=True,fmt='.2f')
###Output
_____no_output_____
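###Markdown
As an optional, quick cross-check of the heatmap (a small sketch using the `corrmat` computed above), we can also list the most strongly correlated column pairs explicitly.
###Code
corr_pairs = corrmat.abs().unstack().sort_values(ascending=False)
# drop self-correlations and the mirrored duplicate of each pair
corr_pairs = corr_pairs[corr_pairs < 1.0].drop_duplicates()
print(corr_pairs.head(5))
###Output
_____no_output_____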
###Markdown
We one-hot encode the categorical columns with get_dummies().
###Code
cat_vars = listingdata.select_dtypes(include=['object']).copy().columns
for var in cat_vars:
# for each cat add dummy var, drop original column
listingdata = pd.concat([listingdata.drop(var, axis=1), pd.get_dummies(listingdata[var], prefix=var,dummy_na=True, prefix_sep='_', drop_first=True)], axis=1)
#The dataset shape after get_dummies
print (listingdata.shape)
###Output
(3818, 73)
###Markdown
Train the Model
###Code
y=listingdata['price_corr']
#X=listingdata.drop(['availability_90','availability_60','bedrooms','price_corr'],axis=1)
X=listingdata.drop('price_corr',axis=1)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,random_state=42)
###Output
_____no_output_____
###Markdown
We tried a LinearRegression model and a RandomForestRegressor model. LinearRegression has the better r2 score on the test set (0.59) and the train set (0.61). The RandomForestRegressor model gets an r2 score of 0.57 on the test set.
###Code
from sklearn.linear_model import LinearRegression
lm=LinearRegression(normalize=True)
lm.fit(X_train,y_train)
y_pred_train=lm.predict(X_train)
y_pred_test=lm.predict(X_test)
from sklearn.metrics import r2_score,mean_squared_error
r2score_train=r2_score(y_train,y_pred_train)
r2score_test=r2_score(y_test,y_pred_test)
mse_train=mean_squared_error(y_train,y_pred_train)
mse_test=mean_squared_error(y_test,y_pred_test)
print ('r square score on test set is {},on train set is {}'.format(r2score_test,r2score_train))
print ('mean squared error score on test set is {}, on train set is {}'.format(mse_test,mse_train))
from sklearn.ensemble import RandomForestRegressor
rf=RandomForestRegressor(random_state=24)
rf.fit(X_train,y_train)
y_pred_train=rf.predict(X_train)
y_pred_test=rf.predict(X_test)
r2score_train=r2_score(y_train,y_pred_train)
r2score_test=r2_score(y_test,y_pred_test)
mse_train=mean_squared_error(y_train,y_pred_train)
mse_test=mean_squared_error(y_test,y_pred_test)
print ('r square score on test set is {},on train set is {}'.format(r2score_test,r2score_train))
print ('mean square error score on test set is {},on train set is {}'.format(mse_test,mse_train))
###Output
/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
from numpy.core.umath_tests import inner1d
###Markdown
Combat Overfitting So far we have used the whole set of features to train and predict, which may lead to overfitting. In this step we subset the features to counter overfitting and look for the subset with the highest r2 score on the test set. We assume that columns with larger weight are more useful for predicting the response variable, so for each cutoff value we keep only the columns of X whose sum exceeds the cutoff, train and test on that subset, and then check at which cutoff we achieve the highest r2 score on the test set; that cutoff gives us the best feature subset.
###Code
#Sort the columns by the sum of columns, the larger the sum is , the more the column weight is
X.sum().sort_values(ascending=False)
def find_optimal_lm_mod(X, y, cutoffs, test_size = .30, random_state=42, plot=True):
'''
INPUT
X - pandas dataframe, X matrix
y - pandas dataframe, response variable
cutoffs - list of ints, cutoff for number of non-zero values in dummy categorical vars
test_size - float between 0 and 1, default 0.3, determines the proportion of data as test data
random_state - int, default 42, controls random state for train_test_split
plot - boolean, default True, whether to plot the result
OUTPUT
r2_scores_test - list of floats of r2 scores on the test data
r2_scores_train - list of floats of r2 scores on the train data
lm_model - model object from sklearn
X_train, X_test, y_train, y_test - output from sklearn train test split used for optimal model
'''
r2_scores_test, r2_scores_train, num_feats, results = [], [], [], dict()
for cutoff in cutoffs:
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > cutoff) == True)[0]]
# print (X.sum())
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model and obtain pred response
lm_model = LinearRegression(normalize=True)
lm_model.fit(X_train, y_train)
y_test_preds = lm_model.predict(X_test)
y_train_preds = lm_model.predict(X_train)
#append the r2 value from the test set
r2_scores_test.append(r2_score(y_test, y_test_preds))
r2_scores_train.append(r2_score(y_train, y_train_preds))
results[str(cutoff)] = r2_score(y_test, y_test_preds)
if plot:
plt.plot(num_feats, r2_scores_test, label="Test", alpha=.5)
plt.plot(num_feats, r2_scores_train, label="Train", alpha=.5)
plt.xlabel('Number of Features')
plt.ylabel('Rsquared')
plt.title('Rsquared by Number of Features')
plt.legend(loc=1)
plt.show()
best_cutoff = max(results, key=results.get)
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > int(best_cutoff)) == True)[0]]
#X_reduce_col=reduce_X.columns
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model
lm_model = LinearRegression(normalize=True)
lm_model.fit(X_train, y_train)
return r2_scores_test, r2_scores_train, lm_model, X_train, X_test, y_train, y_test
###Output
_____no_output_____
###Markdown
On the plot we see that the highest r2 score is 0.605 on the test set, with 0.610 on the train set. The X_train and X_test returned by 'find_optimal_lm_mod()' correspond to the cutoff that achieved the highest test-set r2 score, and at that cutoff the subset contains 69 features.
###Code
cutoffs = [5000, 3500, 2500, 1000, 100, 50, 30, 20, 10, 5,0]
r2_scores_test, r2_scores_train, lm_model, X_train, X_test, y_train, y_test= find_optimal_lm_mod(X, y, cutoffs)
print ('The max r2score on testset of optimal LinearRegression model is {}'.format(r2_scores_test[np.argmax(r2_scores_test)]))
print ('The max r2score on trainset if optimal LinearRegression model is {}'.format(r2_scores_train[np.argmax(r2_scores_test)]))
print ('The number of features when achieve highest r2score on testset is {}'.format(X_train.shape[1]))
###Output
The max r2score on testset of optimal LinearRegression model is 0.6050834234767488
The max r2score on trainset if optimal LinearRegression model is 0.6104090867781367
The number of features when achieve highest r2score on testset is 69
###Markdown
We examined the 20 largest coefficient weights of the LinearRegression model. We found that property_type (such as boat and shared room) and location (zipcode) carry large coefficient weights for predicting price. Boat rooms have a strongly positive coefficient on price, while shared rooms have a strongly negative coefficient.
###Code
def coef_weights(coefficients, X_train):
'''
INPUT:
coefficients - the coefficients of the linear model
X_train - the training data, so the column names can be used
OUTPUT:
coefs_df - a dataframe holding the coefficient, estimate, and abs(estimate)
Provides a dataframe that can be used to understand the most influential coefficients
in a linear model by providing the coefficient estimates along with the name of the
variable attached to the coefficient.
'''
coefs_df = pd.DataFrame()
coefs_df['est_int'] = X_train.columns
coefs_df['coefs'] = coefficients
coefs_df['abs_coefs'] = np.abs(coefficients)
coefs_df = coefs_df.sort_values('abs_coefs', ascending=False)
return coefs_df
#Use the function
coef_df = coef_weights(lm_model.coef_, X_train)
#A quick look at the top results
#print (len(coef_df))
coef_df.head(20)
###Output
_____no_output_____
###Markdown
The highest positive coefficients show that property_type_Boat and property_type_Camper/RV are very expensive room types and that zipcode_98134 is an expensive area. Other factors such as bathrooms and bedrooms are also positively related to price.
###Code
coef_df[coef_df.coefs>0]
###Output
_____no_output_____
###Markdown
Fine Tuning the RandomForestRegressor Model We used GridSearchCV to find the optimal parameters. The tuned RandomForestRegressor does not improve much: only a slight improvement to an r2 score of 0.61 on the test set, with r2 = 0.82 on the train set.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
param_grid={'n_estimators':[24,25,26,30],'min_samples_leaf':[1,2,3,4]
}
from sklearn.metrics import make_scorer
scorer = make_scorer(r2_score)
rf=RandomForestRegressor(random_state=24)  # param_grid is passed to GridSearchCV below, not to the estimator itself
rf_model = GridSearchCV(rf, param_grid, n_jobs=-1,cv=5,scoring=scorer)
rf_model.fit(X_train,y_train)
y_pred_train=rf_model.predict(X_train)
y_pred_test=rf_model.predict(X_test)
from sklearn.metrics import r2_score,mean_squared_error
r2score_train=r2_score(y_train,y_pred_train)
r2score_test=r2_score(y_test,y_pred_test)
mse_train=mean_squared_error(y_train,y_pred_train)
mse_test=mean_squared_error(y_test,y_pred_test)
print ('r square score on test set is {},on train set is {}'.format(r2score_test,r2score_train))
print ('mean squared error score on test set is {},on train set is {}'.format(mse_test,mse_train))
print ('the best estimator is {}'.format(rf_model.best_estimator_))
###Output
r square score on test set is 0.6118529086768387,on train set is 0.8214647967128096
mean squared error score on test set is 3323.9166693027214,on train set is 1421.320549403084
the best estimator is RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=3, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=25, n_jobs=1,
oob_score=False, random_state=24, verbose=0, warm_start=False)
###Markdown
I also examined the feature importances of the RandomForestRegressor. The feature ranking is listed below. Are the feature importances consistent with the coefficients of the LinearRegression model?
###Code
importances=rf_model.best_estimator_.feature_importances_
indices=np.argsort(importances)[::-1]
features_list=X_train.columns
print ('Feature Ranking:\n')
for i in range(X_train.shape[1]):
print ('feature no. %d: %s %f' % (i+1,features_list[indices[i]],importances[indices[i]]))
###Output
Feature Ranking:
feature no. 1: bedrooms 0.423887
feature no. 2: cleaning_fee_corr 0.090743
feature no. 3: bathrooms 0.080306
feature no. 4: accommodates 0.064085
feature no. 5: reviews_per_month 0.049265
feature no. 6: availability_365 0.038974
feature no. 7: room_type_Private room 0.034812
feature no. 8: number_of_reviews 0.030275
feature no. 9: room_type_Shared room 0.020774
feature no. 10: review_scores_rating 0.016955
feature no. 11: host_listings_count 0.016923
feature no. 12: availability_90 0.013794
feature no. 13: host_response_time_nan 0.012152
feature no. 14: availability_30 0.010795
feature no. 15: availability_60 0.008958
feature no. 16: minimum_nights 0.008936
feature no. 17: property_type_House 0.007127
feature no. 18: zipcode_98119 0.006456
feature no. 19: host_response_rate_corr 0.006298
feature no. 20: host_response_time_within a few hours 0.006011
feature no. 21: zipcode_98104 0.005888
feature no. 22: cancellation_policy_strict 0.004392
feature no. 23: cancellation_policy_moderate 0.003982
feature no. 24: zipcode_98102 0.003597
feature no. 25: instant_bookable_t 0.003533
feature no. 26: zipcode_98116 0.003302
feature no. 27: zipcode_98109 0.003193
feature no. 28: host_response_time_within an hour 0.003073
feature no. 29: host_is_superhost_t 0.002921
feature no. 30: zipcode_98199 0.002597
feature no. 31: zipcode_98121 0.001926
feature no. 32: zipcode_98112 0.001847
feature no. 33: zipcode_98122 0.001754
feature no. 34: property_type_Bed & Breakfast 0.001402
feature no. 35: zipcode_98144 0.001234
feature no. 36: property_type_Camper/RV 0.001115
feature no. 37: zipcode_98103 0.001082
feature no. 38: host_response_time_within a day 0.000961
feature no. 39: zipcode_98105 0.000901
feature no. 40: property_type_Loft 0.000771
feature no. 41: property_type_Condominium 0.000622
feature no. 42: zipcode_98107 0.000520
feature no. 43: zipcode_98115 0.000425
feature no. 44: zipcode_98118 0.000383
feature no. 45: zipcode_98117 0.000340
feature no. 46: zipcode_98126 0.000293
feature no. 47: zipcode_98133 0.000125
feature no. 48: zipcode_98108 0.000094
feature no. 49: zipcode_98136 0.000073
feature no. 50: zipcode_98125 0.000049
feature no. 51: zipcode_98106 0.000041
feature no. 52: property_type_Townhouse 0.000029
feature no. 53: property_type_Other 0.000009
feature no. 54: zipcode_98178 0.000000
feature no. 55: property_type_Dorm 0.000000
feature no. 56: property_type_Yurt 0.000000
feature no. 57: property_type_Treehouse 0.000000
feature no. 58: property_type_Tent 0.000000
feature no. 59: property_type_nan 0.000000
feature no. 60: zipcode_98146 0.000000
feature no. 61: property_type_Chalet 0.000000
feature no. 62: zipcode_98177 0.000000
feature no. 63: property_type_Cabin 0.000000
feature no. 64: property_type_Bungalow 0.000000
feature no. 65: property_type_Boat 0.000000
feature no. 66: zipcode_98134 0.000000
feature no. 67: host_is_superhost_nan 0.000000
feature no. 68: zipcode_nan 0.000000
feature no. 69: zipcode_99
98122 0.000000
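###Markdown
Before discussing it qualitatively, we can also quantify how similar the two rankings are. The cell below is only an optional sketch: it assumes the `coef_df`, `importances` and `features_list` objects created in the cells above, and computes the Spearman rank correlation between the absolute linear-regression coefficients and the random forest feature importances.
###Code
from scipy.stats import spearmanr

# join the two rankings on the feature name
imp_series = pd.Series(importances, index=features_list, name='importance')
compare_df = coef_df.set_index('est_int').join(imp_series)
rho, pval = spearmanr(compare_df['abs_coefs'], compare_df['importance'])
print('Spearman rank correlation between |coef| and importance: {:.2f} (p={:.3f})'.format(rho, pval))
###Output
_____no_output_____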
###Markdown
We see that the ranking by feature importance is not consistent with the ranking by coefficient weight, neither for the raw coefficients nor for their absolute values. This is because feature importances are computed from the impurity (variance) reduction of each split in the tree model, while coefficients are the per-column weights of the linear regression model. They are two different quantities and are not directly comparable. More Experiments on Removing Correlated Features Should I remove highly correlated features? Let's run one more test in which we remove 'availability_90', 'availability_60' and 'bedrooms', since they are highly correlated with 'availability_30' and 'accommodates'. As shown below, we get a worse r2 score on both the test set and the train set, and also a worse r2 score for the RandomForestRegressor.
###Code
y=listingdata['price_corr']
X=listingdata.drop(['availability_90','availability_60','bedrooms','price_corr'],axis=1)
X_train,X_test,y_train,y_test=train_test_split(X,y,random_state=42)
lm=LinearRegression(normalize=True)
lm.fit(X_train,y_train)
y_pred_train=lm.predict(X_train)
y_pred_test=lm.predict(X_test)
from sklearn.metrics import r2_score,mean_squared_error
r2score_train=r2_score(y_train,y_pred_train)
r2score_test=r2_score(y_test,y_pred_test)
mse_train=mean_squared_error(y_train,y_pred_train)
mse_test=mean_squared_error(y_test,y_pred_test)
print ('r square score on test set is {},on train set is {}'.format(r2score_test,r2score_train))
print ('mean squared error score on test set is {}, on train set is {}'.format(mse_test,mse_train))
rf=RandomForestRegressor(random_state=24)
rf.fit(X_train,y_train)
y_pred_train=rf.predict(X_train)
y_pred_test=rf.predict(X_test)
r2score_train=r2_score(y_train,y_pred_train)
r2score_test=r2_score(y_test,y_pred_test)
mse_train=mean_squared_error(y_train,y_pred_train)
mse_test=mean_squared_error(y_test,y_pred_test)
print ('r square score on test set is {},on train set is {}'.format(r2score_test,r2score_train))
print ('mean square error score on test set is {},on train set is {}'.format(mse_test,mse_train))
###Output
r square score on test set is 0.5440017268985746,on train set is 0.9126660502961377
mean square error score on test set is 3809.284892379291,on train set is 704.9730759200781
###Markdown
Predicting Car Selling Prices------Dataset is downloaded from : https://www.kaggle.com/nehalbirla/vehicle-dataset-from-cardekho------
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
###Output
_____no_output_____
###Markdown
**Loading Data**
###Code
data = pd.read_csv('car_data.csv')
data.head()
data.shape
data.info()
data.describe().T
###Output
_____no_output_____
###Markdown
**Data Cleaning And Visualization**
###Code
data = data.drop('Car_Name', axis=1)
data.head()
# to know how old the car is we subtract current year with the year in which the car was bought
data['Years_old'] = 2020 - data.Year
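# (note: 2020 is hard-coded here; the current year could instead be taken
#  dynamically, e.g. with pd.Timestamp.now().year)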
data.head()
data.drop('Year', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Use One Hot Encoding
###Code
data = pd.get_dummies(data,drop_first=True)
data.head()
# here 'Selling_Price' is what we have to predict
sns.pairplot(data);
# This shows the relationship for (n,2) combination of variable in a DataFrame
# as a matrix of plots and the diagonal plots are the univariate plots.
plt.figure(figsize=(15,15))
sns.heatmap(
data.corr(),
cmap=sns.diverging_palette(20, 220, n=200),
square=True
);
data.head()
X = data.drop('Selling_Price', axis = 1)
y = data['Selling_Price']
print(X.shape)
print(y.shape)
###Output
(301, 8)
(301,)
###Markdown
Checking For Important Features!
###Code
from sklearn.ensemble import ExtraTreesRegressor
model = ExtraTreesRegressor()
model.fit(X,y)
model.feature_importances_
pd.Series(model.feature_importances_, index=X.columns).plot(kind='bar',alpha=0.75, rot=90);
###Output
_____no_output_____
###Markdown
**Model Training**
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(X,y,test_size=0.2,random_state=0)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train,y_train)
model.score(X_test,y_test)
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
cv = ShuffleSplit(n_splits = 5, test_size=0.2, random_state=0)
cross_val_score(LinearRegression(), X,y,cv=cv)
###Output
_____no_output_____
###Markdown
Finding best model using RandomizedSearchCV
###Code
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
def perfect_model(X, y):
model_algo = {
'Linear_Regression':{
'model': LinearRegression(),
'params': {
'normalize': [True, False]
}
},
'Decision_Tree':{
'model': DecisionTreeRegressor(),
'params': {
'criterion': ['mse', 'friedman_mse', 'mae'],
'splitter': ['best', 'random'],
'max_depth': [x for x in range(5,35,5)],
'min_samples_leaf': [1, 2, 5, 10]
}
},
'Random_forest':{
'model': RandomForestRegressor(),
'params': {
'n_estimators': [x for x in range(20,150,20)],
'max_features': ['auto', 'sqrt'],
'max_depth': [x for x in range(5,35,5)],
'min_samples_split': [2, 5, 10, 15, 100],
'min_samples_leaf': [1, 2, 5, 10]
}
}
}
score = []
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
for algo_name, config in model_algo.items():
rs = RandomizedSearchCV(config['model'], config['params'], cv=cv, return_train_score=False, n_iter=5)
rs.fit(X_train,y_train)
score.append({
'model': algo_name,
'best_score': rs.best_score_,
'best_params': rs.best_params_
})
result = pd.DataFrame(score,columns=['model','best_score','best_params'])
print(result.best_params.tolist())
return result
perfect_model(X, y)
final_dec_model = DecisionTreeRegressor(splitter='best', min_samples_leaf= 2, max_depth=15, criterion='mae')
final_dec_model.fit(X_train,y_train)
final_dec_model.score(X_test,y_test)
final_rf_model = RandomForestRegressor(n_estimators=120, min_samples_split=2, min_samples_leaf=1, max_features='auto', max_depth=20)
final_rf_model.fit(X_train,y_train)
final_rf_model.score(X_test,y_test)
cross_val_score(DecisionTreeRegressor(splitter='best', min_samples_leaf= 2, max_depth=15, criterion='mae'), X,y,cv=cv)
cross_val_score(RandomForestRegressor(n_estimators=120, min_samples_split=2, min_samples_leaf=1, max_features='auto', max_depth=20), X,y,cv=cv)
###Output
_____no_output_____
###Markdown
**Based on the above results we can say that the Random Forest Regressor gives the best score.**
###Code
predictions=final_rf_model.predict(X_test)
plt.scatter(y_test,predictions)
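# label the axes so the actual-vs-predicted comparison is explicit
plt.xlabel('Actual selling price')
plt.ylabel('Predicted selling price')
plt.show()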
###Output
_____no_output_____
###Markdown
**Exporting the tested model to a pickle file**
###Code
import pickle
with open('RF_price_predicting_model.pkl', 'wb') as file:
# dump information to that file
pickle.dump(final_rf_model, file)
###Output
_____no_output_____ |
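###Markdown
As a final usage sketch (assuming the pickle file written above and the `X_test` split from earlier), the saved model can be loaded back and used for prediction. The input passed to `predict` must have the same columns, in the same order, as the training data `X`.
###Code
with open('RF_price_predicting_model.pkl', 'rb') as file:
    loaded_model = pickle.load(file)

print(loaded_model.predict(X_test[:5]))
###Output
_____no_output_____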
0.16/_downloads/plot_otp.ipynb | ###Markdown
Plot sensor denoising using oversampled temporal projectionThis demonstrates denoising using the OTP algorithm [1]_ on data with sensor artifacts (flux jumps) and random noise.
###Code
# Author: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import mne
import numpy as np
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
###Output
_____no_output_____
###Markdown
Plot the phantom data, lowpassed to get rid of high-frequency artifacts.We also crop to a single 10-second segment for speed.Notice that there are two large flux jumps on channel 1522 that couldspread to other channels when performing subsequent spatial operations(e.g., Maxwell filtering, SSP, or ICA).
###Code
dipole_number = 1
data_path = bst_phantom_elekta.data_path()
raw = read_raw_fif(
op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))
raw.crop(40., 50.).load_data()
order = list(range(160, 170))
raw.copy().filter(0., 40.).plot(order=order, n_channels=10)
###Output
_____no_output_____
###Markdown
Now we can clean the data with OTP, lowpass, and plot. The flux jumps have been suppressed alongside the random sensor noise.
###Code
raw_clean = mne.preprocessing.oversampled_temporal_projection(raw)
raw_clean.filter(0., 40.)
raw_clean.plot(order=order, n_channels=10)
###Output
_____no_output_____
###Markdown
We can also look at the effect on single-trial phantom localization. See the `sphx_glr_auto_tutorials_plot_brainstorm_phantom_elekta.py` example for more information. Here we use a version that does single-trial localization across the 17 trials that are in our 10-second window:
###Code
def compute_bias(raw):
events = find_events(raw, 'STI201', verbose=False)
events = events[1:] # first one has an artifact
tmin, tmax = -0.2, 0.1
epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,
baseline=(None, -0.01), preload=True, verbose=False)
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,
verbose=False)
cov = mne.compute_covariance(epochs, tmax=0, method='shrunk',
verbose=False)
idx = epochs.time_as_index(0.036)[0]
data = epochs.get_data()[:, :, idx].T
evoked = mne.EvokedArray(data, epochs.info, tmin=0.)
dip = fit_dipole(evoked, cov, sphere, n_jobs=1, verbose=False)[0]
actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]
misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)
return misses
bias = compute_bias(raw)
print('Raw bias: %0.1fmm (worst: %0.1fmm)'
% (np.mean(bias), np.max(bias)))
bias_clean = compute_bias(raw_clean)
print('OTP bias: %0.1fmm (worst: %0.1fmm)'
% (np.mean(bias_clean), np.max(bias_clean),))
###Output
_____no_output_____ |
tutorials/tutorial05_networks.ipynb | ###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this tutorial, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges containing pairs of nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edges/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned in the above paragraph take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_routes`, refer to source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively. 1.1 ADDITIONAL_NET_PARAMSThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, for this problem, we define an `ADDITIONAL_NET_PARAMS` variable of the form:
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters of the network. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce the nodes are specified, the nodes are linked together using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connecting the nodes specified in section 1.2, with the direction of the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
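###Markdown
As an optional aside, attributes that are shared by several edges can also be factored out into an edge *type* via the optional `specify_types` method mentioned at the start of section 1. The cell below is only a sketch of what that could look like for this network, reusing the attribute names already used for the edges ("numLanes" and "speed"); each edge dictionary would then reference the type through a "type" attribute instead of repeating those values. Refer to `flow/networks/ring.py` for the canonical example.
###Code
class myNetwork(myNetwork):  # update my network class

    def specify_types(self, net_params):
        lanes = net_params.additional_params["num_lanes"]
        speed_limit = net_params.additional_params["speed_limit"]

        # a single shared edge type that edges may point to via their "type" attribute
        types = [{"id": "edgeType", "numLanes": lanes, "speed": speed_limit}]

        return types
###Output
_____no_output_____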
###Markdown
1.4 specify_routesThe routes are the sequence of edges vehicles traverse given their current position. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In this case of deterministic routes (as is the case in the ring road network), the routes can be specified as a dictionary where the key element represents the starting edge and the element is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that the edges must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each element can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the probability values for each dictionary key must sum up to one.For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this edge specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global referenceOther optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road sections* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user defined set of starting positions for vehicles in the network 2.2 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` allows us to do the same to junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named: "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting position of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `tutorial01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sim_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
flow_params = dict(
exp_tag='test_network',
env_name=AccelEnv,
network=myNetwork,
simulator='traci',
sim=sim_params,
env=env_params,
net=net_params,
veh=vehicles,
initial=initial_config,
)
# number of time steps
flow_params['env'].horizon = 1500
exp = Experiment(flow_params)
# run the sumo simulation
_ = exp.run(1)
###Output
_____no_output_____
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this tutorial, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges containing pairs on nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edges/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned above paragraph take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_routes`, refer to source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively. 1.1 ADDITIONAL_NET_PARAMSThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, for this problem, we define an `ADDITIONAL_NET_PARAMS` variable of the form:
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters of the network. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsRefering to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (0,r) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce the nodes are specified, the nodes are linked together using directed edges. This done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edges ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connected the nodes specified in section 1.2, with the direction of the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
###Markdown
1.4 specify_routesThe routes are the sequence of edges vehicles traverse given their current position. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In this case of deterministic routes (as is the case in the ring road network), the routes can be specified as dictionary where the key element represents the starting edge and the element is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that the edges must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each element can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the sum of probability values for each dictionary key must sum up to one.For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
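###Markdown
Because every edge of this ring admits only one route, all of the probabilities above are 1. Purely as an illustration of the format (using hypothetical edge names that are not part of this network), a starting edge with two alternative routes could look like the sketch below; the probabilities attached to a given starting edge must still sum to one.
###Code
# illustrative only: "merge_in", "main_0", etc. are hypothetical edges that
# do not exist in the ring network built in this tutorial
rts = {
    "merge_in": [
        (["merge_in", "main_0", "main_1"], 0.7),  # chosen 70% of the time
        (["merge_in", "offramp_0"], 0.3),         # chosen 30% of the time
    ]
}

# the probabilities for each starting edge must sum to one
for start_edge, routes in rts.items():
    assert abs(sum(prob for _, prob in routes) - 1) < 1e-8, start_edge
###Output
_____no_output_____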
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element to the dictionary whose key is the name of the vehicle and whose value is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this vehicle specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global referenceOther optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite-length connections between road sections* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user-defined set of starting positions for vehicles in the network 2.1 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` method allows us to do the same for junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting positions of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
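###Markdown
To make the role of these edge starts concrete, the short standalone sketch below (our own illustration, not Flow's actual implementation) converts an (edge, position-along-edge) pair into a single global coordinate on the ring, which is essentially the bookkeeping that methods such as `get_x_by_id` rely on. An illustrative radius of 40 is assumed.
###Code
# illustrative helper, not Flow's implementation: map an (edge, position)
# pair to a global 1-D coordinate using the edge starts defined above
from numpy import pi

r = 40  # assumed radius, for illustration only
edgestarts = {"edge0": 0,
              "edge1": r * 1/2 * pi,
              "edge2": r * pi,
              "edge3": r * 3/2 * pi}

def global_position(edge, pos):
    """Distance from the global origin (the start of "edge0")."""
    return edgestarts[edge] + pos

# a vehicle 10 m into "edge2" sits half the ring plus 10 m from the origin
print(global_position("edge2", 10))  # r * pi + 10
###Output
_____no_output_____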
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `tutorial01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sim_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
flow_params = dict(
exp_tag='test_network',
env_name=AccelEnv,
network=myNetwork,
simulator='traci',
sim=sim_params,
env=env_params,
net=net_params,
veh=vehicles,
initial=initial_config,
)
# number of time steps
flow_params['env'].horizon = 1500
exp = Experiment(flow_params)
# run the sumo simulation
_ = exp.run(1)
###Output
_____no_output_____
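###Markdown
One small aside (ours, not part of the original tutorial): the horizon above is expressed in simulation steps, so the amount of simulated time it corresponds to depends on the `sim_step` passed to `SumoParams`.
###Code
# horizon is measured in steps; with sim_step = 0.1 s, 1500 steps correspond
# to 150 seconds of simulated time
sim_step = 0.1
horizon = 1500
print(horizon * sim_step, "seconds of simulated time")
###Output
_____no_output_____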
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this tutorial, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the positions and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges connecting pairs of nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned in the paragraph above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_connections`, refer to the source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively.
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 100,
"num_lanes": 2,
"speed_limit": 60,
}
###Output
_____no_output_____
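###Markdown
Since the network methods assume these keys are always present, a pattern we sometimes find convenient (our own sketch, not something Flow requires) is to fail fast with a clear error message if a required key is missing, before any configuration files are generated.
###Code
# illustrative only: fail fast if a required network parameter is missing
def check_net_params(additional_params):
    required = ["radius", "num_lanes", "speed_limit"]
    missing = [key for key in required if key not in additional_params]
    if missing:
        raise KeyError("missing network parameters: {}".format(missing))

check_net_params(ADDITIONAL_NET_PARAMS)  # passes silently for the dict above
###Output
_____no_output_____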
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the parameters of the network. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.3). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
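###Markdown
Before wiring the nodes together, it can be helpful to plot them and confirm that they sit on the intended circle. The sketch below (optional, and assuming matplotlib is installed) reuses the radius from the `ADDITIONAL_NET_PARAMS` dictionary defined above.
###Code
# optional visual check of the node layout (requires matplotlib)
import matplotlib.pyplot as plt

r = ADDITIONAL_NET_PARAMS["radius"]
positions = {"bottom": (0, -r), "right": (r, 0), "top": (0, r), "left": (-r, 0)}

for name, (x, y) in positions.items():
    plt.scatter(x, y)
    plt.annotate(name, (x, y))
plt.gca().set_aspect("equal")
plt.show()
###Output
_____no_output_____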
###Markdown
1.3 specify_edgesOnce the nodes are specified, they are linked together using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connecting the nodes specified in section 1.2, with the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
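###Markdown
A small consistency check we like to run (our own sketch, not required by Flow): every "from" and "to" entry returned by `specify_edges` should refer to a node id returned by `specify_nodes`, otherwise the generated network will not build correctly. Since neither method uses `self`, they can be called directly for this purpose.
###Code
# illustrative consistency check: edge endpoints must reference defined nodes
from flow.core.params import NetParams

params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)

# the methods defined above do not use `self`, so we can call them directly
nodes = myNetwork.specify_nodes(None, params)
edges = myNetwork.specify_edges(None, params)

node_ids = {node["id"] for node in nodes}
for edge in edges:
    assert edge["from"] in node_ids and edge["to"] in node_ids, edge["id"]
print("all edge endpoints reference known nodes")
###Output
_____no_output_____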
###Markdown
1.4 specify_routesThe routes are the sequences of edges vehicles traverse given their current position. For example, a vehicle beginning on the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In the case of deterministic routes (as in the ring road network), the routes can be specified as a dictionary where each key represents the starting edge and the corresponding value is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that consecutive edges in the list must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
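###Markdown
The "edges must be connected" requirement mentioned above can be checked mechanically: the "to" node of each edge in a route has to match the "from" node of the edge that follows it. The snippet below is a standalone illustration of that check (our own, not part of Flow), using the endpoints defined in section 1.3.
###Code
# illustrative only: consecutive edges in a route must share a node, i.e. the
# "to" node of one edge must equal the "from" node of the next
edge_ends = {"edge0": ("bottom", "right"),
             "edge1": ("right", "top"),
             "edge2": ("top", "left"),
             "edge3": ("left", "bottom")}

route = ["edge0", "edge1", "edge2", "edge3"]
for current, nxt in zip(route, route[1:]):
    assert edge_ends[current][1] == edge_ends[nxt][0], (current, nxt)
print("route is connected")
###Output
_____no_output_____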
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each dictionary value can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the probability values for each dictionary key must sum to one.For example, modifying the code snippet we presented above, another valid way of representing the routes in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element to the dictionary whose key is the name of the vehicle and whose value is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this vehicle specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global referenceOther optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite-length connections between road sections* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user-defined set of starting positions for vehicles in the network 2.1 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` method allows us to do the same for junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting positions of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `tutorial01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
from flow.networks.minicity import MiniCityNetwork
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sim_params = SumoParams(sim_step=5, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
City = MiniCityNetwork('City', vehicles, net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
flow_params = dict(
exp_tag='test_network',
env_name=AccelEnv,
network= myNetwork,
simulator='traci',
sim=sim_params,
env=env_params,
net=net_params,
veh=vehicles,
initial=initial_config,
)
# number of time steps
flow_params['env'].horizon = 2000
exp = Experiment(flow_params)
# run the sumo simulation
_ = exp.run(1)
###Output
Round 0, return: 5.424763597662036
Average, std returns: 5.424763597662036, 0.0
Average, std velocities: 27.099818375250294, 0.0
Average, std outflows: 0.0, 0.0
Total time: 36.67716574668884
steps/second: 79.41007223403614
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this exercise, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the positions and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges connecting pairs of nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned in the paragraph above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_connections`, refer to the source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively.
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
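###Markdown
As a point of reference (a small aside of our own), the radius directly determines the size of the track: each of the four arcs we will build spans a quarter of the circle, so the full ring is 2 * pi * radius long.
###Code
# the four quarter-circle edges add up to the full circumference of the ring
from numpy import pi

r = ADDITIONAL_NET_PARAMS["radius"]
edgelen = r * pi / 2       # length of one edge (a quarter circle)
ring_length = 4 * edgelen  # equals 2 * pi * r, about 251.3 m for r = 40

print(edgelen, ring_length)
###Output
_____no_output_____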
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the parameters of the network. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.3). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce the nodes are specified, they are linked together using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connecting the nodes specified in section 1.2, with the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
###Markdown
1.4 specify_routesThe routes are the sequences of edges vehicles traverse given their current position. For example, a vehicle beginning on the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In the case of deterministic routes (as in the ring road network), the routes can be specified as a dictionary where each key represents the starting edge and the corresponding value is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that consecutive edges in the list must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each dictionary value can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the probability values for each dictionary key must sum to one.For example, modifying the code snippet we presented above, another valid way of representing the routes in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element to the dictionary whose key is the name of the vehicle and whose value is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this vehicle specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global referenceOther optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite-length connections between road sections* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user-defined set of starting positions for vehicles in the network 2.1 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` method allows us to do the same for junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting positions of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `exercise01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sumo_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
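###Markdown
If we wanted to try a different geometry without editing the dictionary from section 1.1, we could copy it and override individual entries before constructing `NetParams`. The sketch below (our own, illustrative) builds a second `NetParams` object for a larger ring.
###Code
# illustrative only: override the default radius for a larger ring
custom_params = ADDITIONAL_NET_PARAMS.copy()
custom_params["radius"] = 80

larger_ring_net_params = NetParams(additional_params=custom_params)
###Output
_____no_output_____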
###Markdown
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
network = myNetwork( # we use the newly defined network class
name="test_network",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config
)
# AccelEnv allows us to test any newly generated network quickly
env = AccelEnv(env_params, sumo_params, network)
exp = Experiment(env)
# run the sumo simulation for a set number of time steps
_ = exp.run(1, 1500)
###Output
_____no_output_____
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this tutorial, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the positions and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges connecting pairs of nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned in the paragraph above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_connections`, refer to the source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively.
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the parameters of the network. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.3). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce the nodes are specified, they are linked together using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connecting the nodes specified in section 1.2, with the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
###Markdown
1.4 specify_routesThe routes are the sequences of edges vehicles traverse given their current position. For example, a vehicle beginning on the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In the case of deterministic routes (as in the ring road network), the routes can be specified as a dictionary where each key represents the starting edge and the corresponding value is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that consecutive edges in the list must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each dictionary value can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the probability values for each dictionary key must sum to one.For example, modifying the code snippet we presented above, another valid way of representing the routes in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element to the dictionary whose key is the name of the vehicle and whose value is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this vehicle specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global referenceOther optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite-length connections between road sections* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user-defined set of starting positions for vehicles in the network 2.1 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` method allows us to do the same for junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting positions of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `tutorial01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sim_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
flow_params = dict(
exp_tag='test_network',
env_name=AccelEnv,
network=myNetwork,
simulator='traci',
sim=sim_params,
env=env_params,
net=net_params,
veh=vehicles,
initial=initial_config,
)
# number of time steps
flow_params['env'].horizon = 1500
exp = Experiment(flow_params)
# run the sumo simulation
_ = exp.run(1)
###Output
_____no_output_____
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g., vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this tutorial, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: Sections 1 and 2 discuss the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while Section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a SUMO instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network.* **specify_edges**: specifies the attributes of edges containing pairs of nodes in the network.* **specify_routes**: specifies the routes which vehicles can take starting from any edge.Additionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist).* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, SUMO will generate default connections.All of the functions mentioned above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_routes`, we refer interested users to the source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively. 1.1 ADDITIONAL_NET_PARAMSThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, we define `ADDITIONAL_NET_PARAMS` as follows:
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of selected points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes, the function `specify_nodes` is used. This function returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: the name of the node* **x**: the x coordinate of the node* **y**: the y coordinate of the node* For other SUMO-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring, respectively. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
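###Markdown
As a small sanity check (purely illustrative, not part of the network class), the four positions above all lie exactly on the circle of radius r:
###Code
# illustrative check only: every node should sit at distance r from the ring's centre
from numpy import hypot

r = 40  # the default "radius" from ADDITIONAL_NET_PARAMS
nodes = [{"id": "bottom", "x": 0, "y": -r},
         {"id": "right", "x": r, "y": 0},
         {"id": "top", "x": 0, "y": r},
         {"id": "left", "x": -r, "y": 0}]

assert all(abs(hypot(node["x"], node["y"]) - r) < 1e-9 for node in nodes)
###Output
_____no_output_____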
###Markdown
1.3 specify_edgesOnce specified, the nodes are linked using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: the name of the edge* **from**: the name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: the length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* For other SUMO-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting two nodes. The shape consists of a series of subnodes (internal to SUMO) that are connected by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute is needed for creating circular arcs in the system. We now create four arcs connecting the nodes specified in Section 1.2 in the counter-clockwise direction:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
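###Markdown
As a quick, purely illustrative check, the 40-point polyline stored in the shape attribute should closely approximate the quarter-circle length stored in length (r * pi / 2). The snippet below recomputes one arc with plain numpy and compares the two values:
###Code
# illustrative check only: sum the straight segments of the "edge0" arc and
# compare the result against the analytical quarter-circle length r * pi / 2
import numpy as np

r = 40  # the default "radius" from ADDITIONAL_NET_PARAMS
shape = [(r * np.cos(t), r * np.sin(t)) for t in np.linspace(-np.pi / 2, 0, 40)]

polyline_len = sum(np.hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(shape[:-1], shape[1:]))

print(polyline_len, r * np.pi / 2)  # the two values are nearly identical
###Output
_____no_output_____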
###Markdown
1.4 specify_routesThe route is a sequence of edges, which vehicles can traverse given their current positions. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**For deterministic routes (as is the case in the ring road scenario), the routes can be specified as a dictionary where the keys represent the starting edges and the elements represent sequences of edges that the vehicle must traverse, with the first edge corresponding to the edge that the vehicle begins on. Note that the edges must be connected for the route to be valid.For this network, the available routes can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each element in the dictionary can be enriched to contain a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that a vehicle will choose that route. Note that, in this case, the probability values for each dictionary key must sum to one. For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
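###Markdown
To make the (route, probability) format more concrete: the ring road only admits a single route per starting edge, so the snippet below is a hypothetical illustration in which "edge4" and "edge5" are made-up edge names that do not exist in this network. It only demonstrates how a genuinely stochastic routing dictionary would look, and that the probabilities attached to one starting edge must sum to one.
###Code
# hypothetical illustration only: "edge4" and "edge5" are not part of the ring network
rts_stochastic = {
    "edge0": [(["edge0", "edge1", "edge2", "edge3"], 0.7),   # chosen 70% of the time
              (["edge0", "edge4", "edge5", "edge3"], 0.3)],  # chosen 30% of the time
}

# the probabilities for each starting edge must sum to one
assert abs(sum(p for _, p in rts_stochastic["edge0"]) - 1) < 1e-8
###Output
_____no_output_____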
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle, you can do so by adding an element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network, and it is initialized on the edge named "edge0". Then, the route for this vehicle can be specifically added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global reference.Other optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road sections.* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frames. Only needed by environments containing intersections.* **gen_custom_start_pos**: used to generate a user defined set of starting positions for vehicles in the network. 2.2 Specifying the Starting Position of EdgesAll of the above functions with prefix "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second element is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` allows us to do the same to junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with four edges named: "edge0", "edge1", "edge2", and "edge3". We assume "edge0" is the origin. Accordingly, the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of "edge0", and accordingly the position of its edge start is radius * $\frac{\pi}{2}$. This process continues for each of the edges. We can then define the starting position of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
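###Markdown
For the default radius of 40 m, the snippet below (illustrative only) prints the numeric positions produced by the method above, showing how each edge start advances by a quarter of the ring circumference:
###Code
# illustrative only: numeric edge-start positions for the default radius of 40 m
from numpy import pi

r = 40
edgestarts = [("edge0", 0),
              ("edge1", r * 1/2 * pi),
              ("edge2", r * pi),
              ("edge3", r * 3/2 * pi)]

for edge, pos in edgestarts:
    print(edge, round(pos, 2))  # 0, 62.83, 125.66, 188.5
###Output
_____no_output_____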
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `exercise01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sumo_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in Section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network classes, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
network = myNetwork( # we use the newly defined network class
name="test_network",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config
)
# AccelEnv allows us to test any newly generated network quickly
env = AccelEnv(env_params, sumo_params, network)
exp = Experiment(env)
# run the sumo simulation for a set number of time steps
_ = exp.run(1, 1500)
###Output
_____no_output_____
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this tutorial, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges containing pairs of nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned in the paragraph above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_routes`, refer to source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively. 1.1 ADDITIONAL_NET_PARAMSThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, for this problem, we define an `ADDITIONAL_NET_PARAMS` variable of the form:
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
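###Markdown
As a brief, purely illustrative peek ahead: this dictionary is later wrapped in a `NetParams` object (see section 3), and the network class reads the values back through its `additional_params` attribute. The variable name below is only used for this illustration.
###Code
# illustrative only: wrap the dictionary in NetParams and read a value back,
# exactly the way the network class will access it
from flow.core.params import NetParams

example_net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS.copy())

radius = example_net_params.additional_params["radius"]
print(radius)  # 40
###Output
_____no_output_____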
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce the nodes are specified, they are linked together using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connecting the nodes specified in section 1.2, with the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
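###Markdown
For completeness, below is a minimal sketch of the optional `specify_types` method. The list-of-dicts format is an assumption based on the ring road source code referenced earlier (`flow/networks/ring.py`), and `myTypedNetwork` is a hypothetical class name used only for this illustration; edges would then reference the type via a "type" attribute instead of repeating numLanes and speed. Consult the source file for the authoritative format.
###Code
# a minimal sketch only, assuming the list-of-dicts format used in
# flow/networks/ring.py; "myTypedNetwork" is a hypothetical name for illustration
class myTypedNetwork(myNetwork):

    def specify_types(self, net_params):
        lanes = net_params.additional_params["num_lanes"]
        speed_limit = net_params.additional_params["speed_limit"]

        # declare one shared edge type; edges could then use {"type": "edgeType", ...}
        # in place of separate "numLanes" and "speed" entries
        types = [{"id": "edgeType", "numLanes": lanes, "speed": speed_limit}]

        return types
###Output
_____no_output_____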
###Markdown
1.4 specify_routesThe routes are the sequence of edges vehicles traverse given their current position. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In the case of deterministic routes (as is the case in the ring road network), the routes can be specified as a dictionary where the key represents the starting edge and the value is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that the edges must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each element can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the sum of probability values for each dictionary key must sum up to one.For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this vehicle specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global referenceOther optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road sections* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user defined set of starting positions for vehicles in the network 2.2 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` allows us to do the same to junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named: "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting position of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `tutorial01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sim_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network classes, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
flow_params = dict(
exp_tag='test_network',
env_name=AccelEnv,
network=myNetwork,
simulator='traci',
sim=sim_params,
env=env_params,
net=net_params,
veh=vehicles,
initial=initial_config,
)
# number of time steps
flow_params['env'].horizon = 1500
exp = Experiment(flow_params)
# run the sumo simulation
_ = exp.run(1)
###Output
Round 0, return: 534.4892658276719
Average, std returns: 534.4892658276719, 0.0
Average, std velocities: 3.7311754502087413, 0.0
Average, std outflows: 0.0, 0.0
Total time: 53.2726366519928
steps/second: 70.98939643046134
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this exercise, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges containing pairs of nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned in the paragraph above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_routes`, refer to source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively. 1.1 ADDITIONAL_NET_PARAMSThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, for this problem, we define an `ADDITIONAL_NET_PARAMS` variable of the form:
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce the nodes are specified, they are linked together using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connecting the nodes specified in section 1.2, with the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
###Markdown
1.4 specify_routesThe routes are the sequence of edges vehicles traverse given their current position. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In the case of deterministic routes (as is the case in the ring road network), the routes can be specified as a dictionary where the key represents the starting edge and the value is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that the edges must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
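###Markdown
The note above says that the edges in a route must be connected. The short, purely illustrative check below verifies this for the ring, using the "from"/"to" nodes from sections 1.2 and 1.3: the node an edge ends at must be the node the next edge in the route starts from.
###Code
# illustrative connectivity check: consecutive edges in a route must share a node
topology = {  # ("from", "to") nodes of each edge, as defined in section 1.3
    "edge0": ("bottom", "right"),
    "edge1": ("right", "top"),
    "edge2": ("top", "left"),
    "edge3": ("left", "bottom"),
}

rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
       "edge1": ["edge1", "edge2", "edge3", "edge0"],
       "edge2": ["edge2", "edge3", "edge0", "edge1"],
       "edge3": ["edge3", "edge0", "edge1", "edge2"]}

for start, route in rts.items():
    for cur, nxt in zip(route[:-1], route[1:]):
        # the edge we leave must end where the next edge begins
        assert topology[cur][1] == topology[nxt][0], (cur, nxt)

print("all routes are connected")
###Output
_____no_output_____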
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each element can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the sum of probability values for each dictionary key must sum up to one.For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this vehicle specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global referenceOther optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road sections* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user defined set of starting positions for vehicles in the network 2.2 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` allows us to do the same to junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named: "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting position of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `exercise01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sumo_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
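###Markdown
If you are working on a machine without a display, or simply want the test to run faster, the sumo GUI can be disabled by setting `render=False`; everything else stays the same. This is an optional alternative, not a required step, and the variable name below is only used for illustration.
###Code
# optional, headless alternative: pass this object to the environment instead of
# sumo_params if you do not want the sumo GUI to open during the test run
sumo_params_no_gui = SumoParams(sim_step=0.1, render=False)
###Output
_____no_output_____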
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network classes, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
network = myNetwork( # we use the newly defined network class
name="test_network",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config
)
# AccelEnv allows us to test any newly generated network quickly
env = AccelEnv(env_params, sumo_params, network)
exp = Experiment(env)
# run the sumo simulation for a set number of time steps
_ = exp.run(1, 1500)
###Output
_____no_output_____
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g., vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this tutorial, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: Sections 1 and 2 discuss the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while Section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a SUMO instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network.* **specify_edges**: specifies the attributes of edges containing pairs of nodes in the network.* **specify_routes**: specifies the routes which vehicles can take starting from any edge.Additionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist).* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, SUMO will generate default connections.All of the functions mentioned above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_routes`, we refer interested users to the source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively. 1.1 ADDITIONAL_NET_PARAMSThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, we define `ADDITIONAL_NET_PARAMS` as follows:
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of selected points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes, the function `specify_nodes` is used. This function returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: the name of the node* **x**: the x coordinate of the node* **y**: the y coordinate of the node* For other SUMO-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring, respectively. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce specified, the nodes are linked using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: the name of the edge* **from**: the name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: the length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* For other SUMO-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting two nodes. The shape consists of a series of subnodes (internal to SUMO) that are connected by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute is needed for creating circular arcs in the system. We now create four arcs connecting the nodes specified in Section 1.2 in the counter-clockwise direction:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
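###Markdown
As a one-line arithmetic check (illustrative only), the four quarter-circle edges defined above should add up to the full ring circumference, 2 * pi * r:
###Code
# illustrative check only: four quarter arcs make up the full ring circumference
from numpy import pi

r = 40                # the default "radius" from ADDITIONAL_NET_PARAMS
edgelen = r * pi / 2  # length of each of the four edges

assert abs(4 * edgelen - 2 * pi * r) < 1e-9
print(4 * edgelen)    # ~251.33 metres
###Output
_____no_output_____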
###Markdown
1.4 specify_routesThe route is a sequence of edges, which vehicles can traverse given their current positions. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**For deterministic routes (as is the case in the ring road scenario), the routes can be specified as a dictionary where the keys represent the starting edges and the elements represent sequences of edges that the vehicle must traverse, with the first edge corresponding to the edge that the vehicle begins on. Note that the edges must be connected for the route to be valid.For this network, the available routes can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each element in the dictionary can be enriched to contain a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that a vehicle will choose that route. Note that, in this case, the probability values for each dictionary key must sum to one. For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle, you can do so by adding an element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network, and it is initialized on the edge named "edge0". Then, the route for this vehicle can be specifically added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global reference.Other optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road sections.* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frames. Only needed by environments containing intersections.* **gen_custom_start_pos**: used to generate a user defined set of starting positions for vehicles in the network. 2.2 Specifying the Starting Position of EdgesAll of the above functions with prefix "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second element is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` allows us to do the same to junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with four edges named: "edge0", "edge1", "edge2", and "edge3". We assume "edge0" is the origin. Accordingly, the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of "edge0", and accordingly the position of its edge start is radius * $\frac{\pi}{2}$. This process continues for each of the edges. We can then define the starting position of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
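###Markdown
As a quick sanity check (a sketch, not part of the original tutorial), the cell below evaluates these edge starts assuming the default radius of 40 used earlier in this tutorial; the four starts should fall at successive quarters of the ring's circumference.
###Code
# a quick sanity check (a sketch), assuming the default radius of 40 from ADDITIONAL_NET_PARAMS
from numpy import pi
r = 40
edgestarts = [("edge0", 0),
              ("edge1", r * 1/2 * pi),
              ("edge2", r * pi),
              ("edge3", r * 3/2 * pi)]
ring_length = 2 * pi * r  # total circumference of the ring, roughly 251.3 m
for name, pos in edgestarts:
    # each edge should start at a successive quarter of the ring
    print("{} starts at {:.1f} m ({:.0%} of the ring)".format(name, pos, pos / ring_length))
###Output
_____no_output_____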
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `exercise01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sumo_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in Section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
network = myNetwork( # we use the newly defined network class
name="test_network",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config
)
# AccelEnv allows us to test any newly generated network quickly
env = AccelEnv(env_params, sumo_params, network)
exp = Experiment(env)
# run the sumo simulation for a set number of time steps
_ = exp.run(1, 1500)
###Output
_____no_output_____
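###Markdown
If a sumo GUI is not available (e.g. on a remote or headless machine), the same test can presumably be run without rendering by setting `render=False` in `SumoParams`; the sketch below reuses the objects defined in the cells above and only changes that flag and the run length.
###Code
# a minimal variation of the cells above (a sketch), assuming render=False simply disables the sumo GUI
sim_params_headless = SumoParams(sim_step=0.1, render=False)
network = myNetwork(
    name="test_network_headless",
    vehicles=vehicles,
    net_params=net_params,
    initial_config=initial_config
)
env = AccelEnv(env_params, sim_params_headless, network)
exp = Experiment(env)
# a shorter horizon is enough here, since there is nothing to watch
_ = exp.run(1, 300)
###Output
_____no_output_____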
###Markdown
Tutorial 05: Creating Custom NetworksThis tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g. vehicles, traffic lights, etc... Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more. In this exercise, we will recreate the ring road network, seen in the figure below.In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
###Code
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
pass
###Output
_____no_output_____
###Markdown
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new network in a simulation for visualization and testing purposes. 1. Specifying Traffic Network FeaturesOne of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a sumo instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods: * **specify_nodes**: specifies the attributes of nodes in the network* **specify_edges**: specifies the attributes of edges containing pairs of nodes in the network* **specify_routes**: specifies the routes vehicles can take starting from any edgeAdditionally, the following optional functions may also be defined:* **specify_types**: specifies the attributes of various edge types (if any exist)* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edges/lane pairs are connected. If no connections are specified, sumo generates default connections.All of the functions mentioned in the above paragraph take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.This tutorial will cover the first three methods. For examples of `specify_types` and `specify_connections`, refer to source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively. 1.1 ADDITIONAL_NET_PARAMSThe features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, for this problem, we define an `ADDITIONAL_NET_PARAMS` variable of the form:
###Code
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
###Output
_____no_output_____
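###Markdown
As a quick illustration (a sketch, not part of the original tutorial), the cell below wraps this dictionary in `NetParams`, the same object from which the network class reads these values in the methods that follow.
###Code
# a small sketch showing how ADDITIONAL_NET_PARAMS is passed around: it is wrapped
# in NetParams, and the network class reads it back through net_params.additional_params
from flow.core.params import NetParams
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS.copy())
radius = net_params.additional_params["radius"]
num_lanes = net_params.additional_params["num_lanes"]
print(radius, num_lanes)  # 40 1
###Output
_____no_output_____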
###Markdown
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters of the network. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type: radius = net_params.additional_params["radius"] 1.2 specify_nodesThe nodes of a network are the positions of a select few points in the network. These points are connected together using edges (see section 1.3). In order to specify the location of the nodes that will be placed in the network, the function `specify_nodes` is used. This method returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include: * **id**: name of the node* **x**: x coordinate of the node* **y**: y coordinate of the node* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsNode_DescriptionsReferring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
###Output
_____no_output_____
###Markdown
1.3 specify_edgesOnce the nodes are specified, the nodes are linked together using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:* **id**: name of the edge* **from**: name of the node the edge starts from* **to**: the name of the node the edge ends at* **length**: length of the edge* **numLanes**: the number of lanes on the edge* **speed**: the speed limit for vehicles on the edge* other sumo-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptionsEdge_Descriptions.One useful additional attribute is **shape**, which specifies the shape of the edge connecting the two nodes. The shape consists of a series of subnodes (internal to sumo) that are connected together by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute will be needed to create the circular arcs between the nodes in the system. We now create four arcs connecting the nodes specified in section 1.2, with the edges directed counter-clockwise:
###Code
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
###Output
_____no_output_____
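###Markdown
Since the `shape` attribute is easy to get wrong, a small standalone check (a sketch, assuming the same radius of 40) can confirm that the endpoints of an arc coincide with the nodes it is meant to connect; here we check "edge0", which runs from "bottom" to "right".
###Code
# a standalone sketch, assuming radius r = 40 as in ADDITIONAL_NET_PARAMS
from numpy import pi, sin, cos, linspace, allclose
r = 40
# the arc for "edge0" sweeps from -pi/2 to 0, i.e. from the "bottom" node (0, -r)
# to the "right" node (r, 0)
arc = [(r * cos(t), r * sin(t)) for t in linspace(-pi/2, 0, 40)]
print(allclose(arc[0], (0, -r)))   # True: the first shape point sits on "bottom"
print(allclose(arc[-1], (r, 0)))   # True: the last shape point sits on "right"
###Output
_____no_output_____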
###Markdown
1.4 specify_routesThe routes are the sequence of edges vehicles traverse given their current position. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path.In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:**1. Single route per edge:**In this case of deterministic routes (as is the case in the ring road network), the routes can be specified as a dictionary where the key represents the starting edge and the value is a single list of edges the vehicle must traverse, with the first edge corresponding to the edge the vehicle begins on. Note that the edges must be connected for the route to be valid.For this network, the available routes under this setting can be defined as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
###Output
_____no_output_____
###Markdown
**2. Multiple routes per edge:**Alternatively, if the routes are meant to be stochastic, each element can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles will choose that route. Note that, in this case, the probability values for each dictionary key must sum to one.For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
###Output
_____no_output_____
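###Markdown
On this ring road every starting edge has exactly one possible route, so all of the probabilities above are 1. Purely for illustration, the sketch below shows what a genuinely stochastic entry might look like; the edge names ("edge_in", "branch_left", "branch_right", "edge_out") and the class are hypothetical and are not part of the network built in this tutorial.
###Code
# purely illustrative sketch: these edge names are hypothetical and do not exist
# in the ring network built in this tutorial
class myBranchingNetwork(Network):
    def specify_routes(self, net_params):
        rts = {"edge_in": [
            # 80% of the vehicles entering on "edge_in" take the left branch...
            (["edge_in", "branch_left", "edge_out"], 0.8),
            # ...and the remaining 20% take the right branch; probabilities sum to 1
            (["edge_in", "branch_right", "edge_out"], 0.2),
        ]}
        return rts
###Output
_____no_output_____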
###Markdown
**3. Per-vehicle routes:**Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.As an example, assume we have a vehicle named "human_0" in the network (as we will in the later sections), and it is initialized on the edge named "edge0". Then, the route for this vehicle specifically can be added through the `specify_routes` method as follows:
###Code
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
###Output
_____no_output_____
###Markdown
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(network.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "edge2": [ (["edge2", "edge3", "edge0", "edge1"], 1) ], "edge3": [ (["edge3", "edge0", "edge1", "edge2"], 1) ], "human_0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ] }where the vehicle-specific route is only included in the third case. 2. Specifying Auxiliary Network FeaturesOther auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:* **specify_edge_starts**: defines edge starts for road sections with respect to some global reference.Other optional abstract methods within the base network class include:* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road sections.* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frame. Only needed by environments with intersections.* **gen_custom_start_pos**: used to generate a user-defined set of starting positions for vehicles in the network. 2.2 Specifying the Starting Position of EdgesAll of the above functions starting with "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second value is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` method allows us to do the same for junctions/internal links when they are also located within the network (this is not the case for the ring road).In section 1, we created a network with 4 edges named: "edge0", "edge1", "edge2", and "edge3". We assume that the edge titled "edge0" is the origin, and accordingly the position of the edge start of "edge0" is 0. The next edge, "edge1", begins a quarter of the length of the network from the starting point of edge "edge0", and accordingly the position of its edge start is radius * pi/2. This process continues for each of the edges. We can then define the starting position of the edges as follows:
###Code
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
###Output
_____no_output_____
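###Markdown
To see how these values give a one-dimensional, network-wide notion of position (the bookkeeping that `get_x_by_id` performs inside an environment), consider a small hand computation; the sketch below is standalone, assumes the radius of 40, and does not call Flow.
###Code
# a standalone sketch of the idea behind edge starts, assuming radius r = 40;
# it reproduces the bookkeeping by hand rather than calling get_x_by_id
from numpy import pi
r = 40
edge_starts = {"edge0": 0,
               "edge1": r * 1/2 * pi,
               "edge2": r * pi,
               "edge3": r * 3/2 * pi}
# a vehicle 10 m along "edge1" sits at edge_starts["edge1"] + 10 on the global axis
global_pos = edge_starts["edge1"] + 10
print(round(global_pos, 1))  # roughly 72.8 m from the start of "edge0"
###Output
_____no_output_____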
###Markdown
3. Testing the New NetworkIn this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `exercise01_sumo.ipynb`.We begin by defining some of the components needed to run a sumo experiment.
###Code
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sumo_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
###Output
_____no_output_____
###Markdown
For visualization purposes, we use the environment `AccelEnv`, as it works on any given network.
###Code
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
###Output
_____no_output_____
###Markdown
Next, using the `ADDITIONAL_NET_PARAMS` component we created in section 1.1, we prepare the `NetParams` component.
###Code
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
###Output
_____no_output_____
###Markdown
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment` simulation. Finally, we are able to visually confirm that our network has been properly generated.
###Code
from flow.core.experiment import Experiment
network = myNetwork( # we use the newly defined network class
name="test_network",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config
)
# AccelEnv allows us to test any newly generated network quickly
env = AccelEnv(env_params, sumo_params, network)
exp = Experiment(env)
# run the sumo simulation for a set number of time steps
_ = exp.run(1, 1500)
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/feat.cols_tf.data.ipynb | ###Markdown
Introduction to Feature Columns **Learning Objectives**1. Load a CSV file using [Pandas](https://pandas.pydata.org/)2. Create an input pipeline using tf.data3. Create multiple types of feature columns Introduction In this notebook, you classify structured data (e.g. tabular data in a CSV file) using [feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use [Keras](https://www.tensorflow.org/guide/keras) to define the model.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/feat.cols_tf.data.ipynb). The DatasetWe will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.>Column| Description| Feature Type | Data Type>------------|--------------------|----------------------|----------------->Age | Age in years | Numerical | integer>Sex | (1 = male; 0 = female) | Categorical | integer>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer>Chol | Serum cholestoral in mg/dl | Numerical | integer>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer>Thalach | Maximum heart rate achieved | Numerical | integer>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer>Oldpeak | ST depression induced by exercise relative to rest | Numerical | float>Slope | The slope of the peak exercise ST segment | Numerical | integer>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
Lab Task 1: Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null object
target 303 non-null int64
dtypes: float64(1), int64(12), object(1)
memory usage: 33.3+ KB
###Markdown
Split the dataframe into train, validation, and testThe dataset we downloaded was a single CSV file. As a best practice, complete the TODO below by splitting this into train, validation, and test sets.
###Code
# TODO 1a
# TODO: Your code goes here
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
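###Markdown
If you get stuck, one possible way to produce a split with roughly the proportions shown above (a sketch, not necessarily the intended grading solution) uses the `train_test_split` helper imported at the top of the notebook.
###Code
# one possible completion of the TODO above (a sketch): two successive splits,
# giving roughly 64% / 16% / 20% train / validation / test
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____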
###Markdown
Lab Task 2: Create an input pipeline using tf.dataNext, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab. Complete the `TODO`s in the cells below using the `df_to_dataset` function.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = # TODO 2a: Your code goes here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
# TODO 2b
train_ds = # Your code goes here
val_ds = # Your code goes here
test_ds = # Your code goes here
###Output
_____no_output_____
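###Markdown
For reference, a possible completion of the TODO cell above (a sketch based on the standard `tf.data.Dataset.from_tensor_slices` pattern; other answers may be equally valid) is shown below.
###Code
# a possible completion of the TODOs above (sketch):
# build the dataset from a dict of column tensors paired with the labels
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()
    labels = dataframe.pop('target')
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds
batch_size = 5  # a small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____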
###Markdown
Understand the input pipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch)
###Output
Every feature: ['ca', 'thal', 'trestbps', 'restecg', 'oldpeak', 'exang', 'sex', 'age', 'slope', 'chol', 'fbs', 'thalach', 'cp']
A batch of ages: tf.Tensor([49 68 41 51 63], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
Lab Task 3: Demonstrate several types of feature columnTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columnsThe output of a feature column becomes the input to the model. A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
tf.feature_column.numeric_column
print(age)
###Output
NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
###Markdown
Let's have a look at the output: * **key='age'**: A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. * **shape=(1,)**: In the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. "Age" is a "vector" or "rank-1" tensor, which is like a list of values. A vector has 1-axis, thus the shape will always look like this: shape=(3,), where 3 is a scalar (or single number) with 1-axis. * **default_value=None**: A single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape. * **dtype=tf.float32**: Defines the type of values. The default value is tf.float32. Must be a non-quantized, real integer or floating point type. * **normalizer_fn=None**: If not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. The normalizer function takes the input Tensor as its argument, and returns the output Tensor (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
###Code
demo(age)
###Output
WARNING:tensorflow:Layer dense_features_22 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[60.]
[58.]
[55.]
[54.]
[51.]]
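###Markdown
To make the `normalizer_fn` argument described above concrete, here is a small sketch that rescales the age column; the constants 50 and 20 are arbitrary values chosen for illustration, not statistics computed from this dataset.
###Code
# sketch: a numeric column with a normalizer_fn; 50 and 20 are arbitrary
# illustration constants, not the true mean/std of the age column
age_scaled = feature_column.numeric_column(
    "age", normalizer_fn=lambda x: (x - 50.0) / 20.0)
demo(age_scaled)
###Output
_____no_output_____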
###Markdown
Bucketized columnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(____) # TODO 3a: Replace the blanks with a correct value
###Output
WARNING:tensorflow:Layer dense_features_23 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
###Markdown
Categorical columnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
###Code
thal = tf.feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:Layer dense_features_24 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]]
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
WARNING:tensorflow:Layer dense_features_25 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]]
###Markdown
Hashed feature columnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
###Code
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:Layer dense_features_26 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed feature columnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Choose which columns to useWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. We have selected a few columns to train our model below arbitrarily.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
How to Input Feature Columns to a Keras ModelNow that we have defined our feature columns, we now use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 4ms/step - loss: 0.4773 - accuracy: 0.7705
Accuracy 0.7704918
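###Markdown
As a quick follow-up (not part of the original lab), the trained model can also be used for inference. Because the final `Dense` layer has no activation and the loss was built with `from_logits=True`, `model.predict` returns logits, and a sigmoid converts them to probabilities; a sketch:
###Code
# sketch: predict on the test set; the outputs are logits because the final Dense
# layer has no activation, so apply a sigmoid to obtain probabilities
logits = model.predict(test_ds)
probs = tf.sigmoid(logits)
print(probs[:5].numpy())  # predicted probability of heart disease for 5 patients
###Output
_____no_output_____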
###Markdown
Visualize the model loss curveNext, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the train (blue) and validation (orange) sets.
###Code
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'accuracy'])
###Output
_____no_output_____
###Markdown
Introduction to Feature Columns **Learning Objectives**1. Load a CSV file using [Pandas](https://pandas.pydata.org/)2. Create an input pipeline using tf.data3. Create multiple types of feature columns Introduction In this notebook, you classify structured data (e.g. tabular data in a CSV file) using [feature columns](https://www.tensorflow.org/guide/feature_columns). Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use [Keras](https://www.tensorflow.org/guide/keras) to define the model.Each learning objective will correspond to a **TODO** in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/feat.cols_tf.data.ipynb) -- try to complete that notebook first before reviewing this solution notebook. The DatasetWe will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.>Column| Description| Feature Type | Data Type>------------|--------------------|----------------------|----------------->Age | Age in years | Numerical | integer>Sex | (1 = male; 0 = female) | Categorical | integer>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer>Chol | Serum cholestoral in mg/dl | Numerical | integer>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer>Thalach | Maximum heart rate achieved | Numerical | integer>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer>Oldpeak | ST depression induced by exercise relative to rest | Numerical | float>Slope | The slope of the peak exercise ST segment | Numerical | integer>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
Lab Task 1: Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null object
target 303 non-null int64
dtypes: float64(1), int64(12), object(1)
memory usage: 33.3+ KB
###Markdown
Split the dataframe into train, validation, and testThe dataset we downloaded was a single CSV file. As a best practice, complete the TODO below by splitting this into train, validation, and test sets.
###Code
# TODO 1a
# TODO: Your code goes here
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
###Markdown
Lab Task 2: Create an input pipeline using tf.dataNext, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab. Complete the `TODO`s in the cells below using the `df_to_dataset` function.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = # TODO 2a: Your code goes here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
# TODO 2b
train_ds = # Your code goes here
val_ds = # Your code goes here
test_ds = # Your code goes here
###Output
_____no_output_____
###Markdown
Understand the input pipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch)
###Output
Every feature: ['ca', 'thal', 'trestbps', 'restecg', 'oldpeak', 'exang', 'sex', 'age', 'slope', 'chol', 'fbs', 'thalach', 'cp']
A batch of ages: tf.Tensor([49 68 41 51 63], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
Lab Task 3: Demonstrate several types of feature columnTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columnsThe output of a feature column becomes the input to the model. A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
tf.feature_column.numeric_column
print(age)
###Output
NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
###Markdown
Let's have a look at the output: * **key='age'**: A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. * **shape=(1,)**: In the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. "Age" is a "vector" or "rank-1" tensor, which is like a list of values. A vector has 1-axis, thus the shape will always look like this: shape=(3,), where 3 is a scalar (or single number) with 1-axis. * **default_value=None**: A single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape. * **dtype=tf.float32**: Defines the type of values. The default value is tf.float32. Must be a non-quantized, real integer or floating point type. * **normalizer_fn=None**: If not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. The normalizer function takes the input Tensor as its argument, and returns the output Tensor (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
###Code
demo(age)
###Output
WARNING:tensorflow:Layer dense_features_22 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[60.]
[58.]
[55.]
[54.]
[51.]]
###Markdown
Bucketized columnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(____) # TODO 3a: Replace the blanks with a correct value
###Output
WARNING:tensorflow:Layer dense_features_23 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
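###Markdown
The blank above is presumably meant to receive the bucketized column created in the previous line; a possible completion is simply:
###Code
# possible completion of the TODO above (sketch): pass the bucketized column to demo
demo(age_buckets)
###Output
_____no_output_____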
###Markdown
Categorical columnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
###Code
thal = tf.feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:Layer dense_features_24 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]]
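###Markdown
The file-based variant mentioned above works the same way once a vocabulary file exists; the sketch below writes a small vocabulary file (the file name 'thal_vocab.txt' is arbitrary, chosen for this example) and builds the equivalent column from it.
###Code
# sketch of the file-based alternative; the vocabulary file name is arbitrary
with open('thal_vocab.txt', 'w') as f:
    f.write('fixed\nnormal\nreversible\n')
thal_from_file = tf.feature_column.categorical_column_with_vocabulary_file(
    'thal', vocabulary_file='thal_vocab.txt', vocabulary_size=3)
demo(tf.feature_column.indicator_column(thal_from_file))
###Output
_____no_output_____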
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
WARNING:tensorflow:Layer dense_features_25 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]]
###Markdown
Hashed feature columnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
###Code
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:Layer dense_features_26 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed feature columnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Choose which columns to useWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. We have arbitrarily selected a few columns below to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
How to Input Feature Columns to a Keras ModelNow that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 4ms/step - loss: 0.4773 - accuracy: 0.7705
Accuracy 0.7704918
###Markdown
Visualize the model loss curveNext, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the training (blue) and validation (orange) sets.
###Code
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'accuracy'])
###Output
_____no_output_____
###Markdown
Introduction to Feature Columns **Learning Objectives**1. Load a CSV file using [Pandas](https://pandas.pydata.org/)2. Create an input pipeline using tf.data3. Create multiple types of feature columns Introduction In this notebook, you classify structured data (e.g. tabular data in a CSV file) using [feature columns](https://www.tensorflow.org/guide/feature_columns). Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use [Keras](https://www.tensorflow.org/guide/keras) to define the model.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/feat.cols_tf.data.ipynb). The DatasetWe will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.>Column| Description| Feature Type | Data Type>------------|--------------------|----------------------|----------------->Age | Age in years | Numerical | integer>Sex | (1 = male; 0 = female) | Categorical | integer>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer>Chol | Serum cholestoral in mg/dl | Numerical | integer>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer>Thalach | Maximum heart rate achieved | Numerical | integer>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer>Oldpeak | ST depression induced by exercise relative to rest | Numerical | float>Slope | The slope of the peak exercise ST segment | Numerical | integer>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
Lab Task 1: Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null object
target 303 non-null int64
dtypes: float64(1), int64(12), object(1)
memory usage: 33.3+ KB
###Markdown
Split the dataframe into train, validation, and testThe dataset we downloaded was a single CSV file. As a best practice, complete the TODO below by splitting it into train, validation, and test sets.
###Code
# TODO 1a
# TODO: Your code goes here
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
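###Markdown
One possible way to complete TODO 1a is sketched below (it mirrors the approach used in the solution notebook): apply scikit-learn's `train_test_split`, which is already imported above, twice. The 80/20 split ratios are a common convention, not a requirement.
```python
# Hold out 20% of the rows for the test set, then 20% of the remainder for validation.
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
```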
###Markdown
Lab Task 2: Create an input pipeline using tf.dataNext, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab. Complete the `TODOs` in the cells below using the `df_to_dataset` function.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = # TODO 2a: Your code goes here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
# TODO 2b
train_ds = # Your code goes here
val_ds = # Your code goes here
test_ds = # Your code goes here
###Output
_____no_output_____
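###Markdown
A sketch of one way to fill in TODOs 2a and 2b, following the same pattern as the solution notebook: build a `tf.data.Dataset` from the dataframe's columns and labels, then wrap each split with `df_to_dataset`.
```python
# TODO 2a: inside df_to_dataset, pair the feature columns with the labels.
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))

# TODO 2b: wrap the train/validation/test dataframes (shuffle only the training data).
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```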
###Markdown
Understand the input pipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch)
###Output
Every feature: ['ca', 'thal', 'trestbps', 'restecg', 'oldpeak', 'exang', 'sex', 'age', 'slope', 'chol', 'fbs', 'thalach', 'cp']
A batch of ages: tf.Tensor([49 68 41 51 63], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
Lab Task 3: Demonstrate several types of feature columnTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columnsThe output of a feature column becomes the input to the model. A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
tf.feature_column.numeric_column
print(age)
###Output
NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
###Markdown
Let's have a look at the output: key='age'A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. shape=(1,)In the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. "Age" is a "vector" or "rank-1" tensor, which is like a list of values. A vector has one axis, so its shape looks like shape=(3,), where 3 is the length along that single axis. default_value=NoneA single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape. dtype=tf.float32defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type. normalizer_fn=NoneIf not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. The normalizer function takes the input Tensor as its argument, and returns the output Tensor (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
###Code
demo(age)
###Output
WARNING:tensorflow:Layer dense_features_22 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[60.]
[58.]
[55.]
[54.]
[51.]]
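###Markdown
As a small aside (not part of the original lab), the `normalizer_fn` argument described above can be illustrated with a sketch like the one below. The centering and scaling constants are arbitrary placeholders chosen for demonstration, not statistics computed from this dataset.
```python
# Hypothetical example: rescale the raw age values before they reach the model.
age_scaled = tf.feature_column.numeric_column(
    "age", normalizer_fn=lambda x: (x - 54.0) / 9.0)
demo(age_scaled)
```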
###Markdown
Bucketized columnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(____) # TODO 3a: Replace the blanks with a correct value
###Output
WARNING:tensorflow:Layer dense_features_23 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
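###Markdown
For TODO 3a, one value that produces the one-hot output shown above is the bucketized column itself:
```python
demo(age_buckets)
```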
###Markdown
Categorical columnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
###Code
thal = tf.feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:Layer dense_features_24 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]]
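###Markdown
The markdown above also mentions loading the vocabulary from a file. A minimal sketch of that variant is shown below; `thal_vocab.txt` is a hypothetical file name and would need to contain one vocabulary entry per line (fixed, normal, reversible).
```python
# Assumes a plain-text vocabulary file with one category per line.
thal_from_file = tf.feature_column.categorical_column_with_vocabulary_file(
    key='thal', vocabulary_file='thal_vocab.txt', vocabulary_size=3)
demo(tf.feature_column.indicator_column(thal_from_file))
```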
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.Key point: Using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
WARNING:tensorflow:Layer dense_features_25 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]]
###Markdown
Hashed feature columnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
###Code
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:Layer dense_features_26 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed feature columnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Choose which columns to useWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. We have arbitrarily selected a few columns below to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
How to Input Feature Columns to a Keras ModelNow that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 4ms/step - loss: 0.4773 - accuracy: 0.7705
Accuracy 0.7704918
###Markdown
Visualize the model loss curveNext, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the training (blue) and validation (orange) sets.
###Code
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'accuracy'])
###Output
_____no_output_____
###Markdown
Introduction to Feature Columns **Learning Objectives**1. Load a CSV file using [Pandas](https://pandas.pydata.org/)2. Create an input pipeline using tf.data3. Create multiple types of feature columns Introduction In this notebook, you classify structured data (e.g. tabular data in a CSV file) using [feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use [Keras](https://www.tensorflow.org/guide/keras) to define the model.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/feat.cols_tf.data.ipynb). The DatasetWe will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.>Column| Description| Feature Type | Data Type>------------|--------------------|----------------------|----------------->Age | Age in years | Numerical | integer>Sex | (1 = male; 0 = female) | Categorical | integer>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer>Chol | Serum cholestoral in mg/dl | Numerical | integer>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer>Thalach | Maximum heart rate achieved | Numerical | integer>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer>Oldpeak | ST depression induced by exercise relative to rest | Numerical | float>Slope | The slope of the peak exercise ST segment | Numerical | integer>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
Lab Task 1: Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null object
target 303 non-null int64
dtypes: float64(1), int64(12), object(1)
memory usage: 33.3+ KB
###Markdown
Split the dataframe into train, validation, and testThe dataset we downloaded was a single CSV file. As a best practice, complete the TODO below by splitting it into train, validation, and test sets.
###Code
# TODO 1a
# TODO: Your code goes here
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
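###Markdown
One possible way to complete TODO 1a is sketched below (it mirrors the approach used in the solution notebook): apply scikit-learn's `train_test_split`, which is already imported above, twice. The 80/20 split ratios are a common convention, not a requirement.
```python
# Hold out 20% of the rows for the test set, then 20% of the remainder for validation.
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
```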
###Markdown
Lab Task 2: Create an input pipeline using tf.dataNext, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab. Complete the `TODOs` in the cells below using the `df_to_dataset` function.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = # TODO 2a: Your code goes here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
# TODO 2b
train_ds = # Your code goes here
val_ds = # Your code goes here
test_ds = # Your code goes here
###Output
_____no_output_____
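###Markdown
A sketch of one way to fill in TODOs 2a and 2b, following the same pattern as the solution notebook: build a `tf.data.Dataset` from the dataframe's columns and labels, then wrap each split with `df_to_dataset`.
```python
# TODO 2a: inside df_to_dataset, pair the feature columns with the labels.
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))

# TODO 2b: wrap the train/validation/test dataframes (shuffle only the training data).
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```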
###Markdown
Understand the input pipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch)
###Output
Every feature: ['ca', 'thal', 'trestbps', 'restecg', 'oldpeak', 'exang', 'sex', 'age', 'slope', 'chol', 'fbs', 'thalach', 'cp']
A batch of ages: tf.Tensor([49 68 41 51 63], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
Lab Task 3: Demonstrate several types of feature columnTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columnsThe output of a feature column becomes the input to the model. A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
tf.feature_column.numeric_column
print(age)
###Output
NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
###Markdown
Let's have a look at the output: key='age'A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. shape=(1,)In the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. "Age" is a "vector" or "rank-1" tensor, which is like a list of values. A vector has one axis, so its shape looks like shape=(3,), where 3 is the length along that single axis. default_value=NoneA single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape. dtype=tf.float32defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type. normalizer_fn=NoneIf not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. The normalizer function takes the input Tensor as its argument, and returns the output Tensor (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
###Code
demo(age)
###Output
WARNING:tensorflow:Layer dense_features_22 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[60.]
[58.]
[55.]
[54.]
[51.]]
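###Markdown
As a small aside (not part of the original lab), the `normalizer_fn` argument described above can be illustrated with a sketch like the one below. The centering and scaling constants are arbitrary placeholders chosen for demonstration, not statistics computed from this dataset.
```python
# Hypothetical example: rescale the raw age values before they reach the model.
age_scaled = tf.feature_column.numeric_column(
    "age", normalizer_fn=lambda x: (x - 54.0) / 9.0)
demo(age_scaled)
```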
###Markdown
Bucketized columnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(____) # TODO 3a: Replace the blanks with a correct value
###Output
WARNING:tensorflow:Layer dense_features_23 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
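###Markdown
For TODO 3a, one value that produces the one-hot output shown above is the bucketized column itself:
```python
demo(age_buckets)
```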
###Markdown
Categorical columnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
###Code
thal = tf.feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:Layer dense_features_24 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]]
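###Markdown
The markdown above also mentions loading the vocabulary from a file. A minimal sketch of that variant is shown below; `thal_vocab.txt` is a hypothetical file name and would need to contain one vocabulary entry per line (fixed, normal, reversible).
```python
# Assumes a plain-text vocabulary file with one category per line.
thal_from_file = tf.feature_column.categorical_column_with_vocabulary_file(
    key='thal', vocabulary_file='thal_vocab.txt', vocabulary_size=3)
demo(tf.feature_column.indicator_column(thal_from_file))
```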
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.Key point: Using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
WARNING:tensorflow:Layer dense_features_25 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]]
###Markdown
Hashed feature columnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
###Code
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:Layer dense_features_26 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed feature columnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Choose which columns to useWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. We have arbitrarily selected a few columns below to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
How to Input Feature Columns to a Keras ModelNow that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 4ms/step - loss: 0.4773 - accuracy: 0.7705
Accuracy 0.7704918
###Markdown
Visualize the model loss curveNext, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the training (blue) and validation (orange) sets.
###Code
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'accuracy'])
###Output
_____no_output_____
###Markdown
Introduction to Feature Columns **Learning Objectives**1. Load a CSV file using [Pandas](https://pandas.pydata.org/)2. Create an input pipeline using tf.data3. Create multiple types of feature columns Introduction In this notebook, you classify structured data (e.g. tabular data in a CSV file) using [feature columns](https://www.tensorflow.org/guide/feature_columns). Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use [Keras](https://www.tensorflow.org/guide/keras) to define the model. The DatasetWe will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.>Column| Description| Feature Type | Data Type>------------|--------------------|----------------------|----------------->Age | Age in years | Numerical | integer>Sex | (1 = male; 0 = female) | Categorical | integer>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer>Chol | Serum cholestoral in mg/dl | Numerical | integer>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer>Thalach | Maximum heart rate achieved | Numerical | integer>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer>Oldpeak | ST depression induced by exercise relative to rest | Numerical | float>Slope | The slope of the peak exercise ST segment | Numerical | integer>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null object
target 303 non-null int64
dtypes: float64(1), int64(12), object(1)
memory usage: 33.3+ KB
###Markdown
Split the dataframe into train, validation, and testThe dataset we downloaded was a single CSV file. As a best practice, we will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
###Markdown
Create an input pipeline using tf.dataNext, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Understand the input pipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch)
###Output
Every feature: ['ca', 'thal', 'trestbps', 'restecg', 'oldpeak', 'exang', 'sex', 'age', 'slope', 'chol', 'fbs', 'thalach', 'cp']
A batch of ages: tf.Tensor([49 68 41 51 63], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
Demonstrate several types of feature columnTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columnsThe output of a feature column becomes the input to the model. A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
tf.feature_column.numeric_column
print(age)
###Output
NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
###Markdown
Let's have a look at the output: key='age'A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. shape=(1,)In the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. "Age" is a "vector" or "rank-1" tensor, which is like a list of values. A vector has 1-axis, thus the shape will always look like this: shape=(3,), where 3 is a scalar (or single number) and with 1-axis. default_value=NoneA single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape. dtype=tf.float32defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type. normalizer_fn=NoneIf not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. Normalizer function takes the input Tensor as its argument, and returns the output Tensor. (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of Tensorflow transformations.
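As a small illustration of normalizer_fn (the centering and scaling constants below are arbitrary assumptions, not statistics computed from this dataset), a standardized version of the age column could be declared like this:
# Hedged sketch: numeric column that normalizes age with assumed constants.
age_normalized = tf.feature_column.numeric_column(
    "age", normalizer_fn=lambda x: (x - 50.0) / 10.0)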
###Code
demo(age)
###Output
WARNING:tensorflow:Layer dense_features_22 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[60.]
[58.]
[55.]
[54.]
[51.]]
###Markdown
Bucketized columnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
###Output
WARNING:tensorflow:Layer dense_features_23 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
###Markdown
Categorical columnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
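For completeness, a minimal sketch of the file-based variant; 'thal_vocab.txt' is a hypothetical file containing one vocabulary entry per line (fixed, normal, reversible) and is not shipped with this lab:
# Hedged sketch: load the same vocabulary from a file instead of a Python list.
thal_from_file = tf.feature_column.categorical_column_with_vocabulary_file(
    key='thal', vocabulary_file='thal_vocab.txt', vocabulary_size=3)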
###Code
thal = tf.feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:Layer dense_features_24 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]]
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grow large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
WARNING:tensorflow:Layer dense_features_25 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]]
###Markdown
Hashed feature columnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
###Code
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:Layer dense_features_26 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed feature columnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Choose which columns to useWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. We have selected a few columns to train our model below arbitrarily.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
How to Input Feature Columns to a Keras ModelNow that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 4ms/step - loss: 0.4773 - accuracy: 0.7705
Accuracy 0.7704918
###Markdown
Visualize the model loss curveNext, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the train (blue) and validation (orange) sets.
###Code
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'accuracy'])
###Output
_____no_output_____
###Markdown
Introduction to Feature Columns **Learning Objectives**1. Load a CSV file using [Pandas](https://pandas.pydata.org/)2. Create an input pipeline using tf.data3. Create multiple types of feature columns Introduction In this notebook, you classify structured data (e.g. tabular data in a CSV file) using [feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use [Keras](https://www.tensorflow.org/guide/keras) to define the model.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/feat.cols_tf.data.ipynb). The DatasetWe will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.>Column| Description| Feature Type | Data Type>------------|--------------------|----------------------|----------------->Age | Age in years | Numerical | integer>Sex | (1 = male; 0 = female) | Categorical | integer>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer>Chol | Serum cholestoral in mg/dl | Numerical | integer>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer>Thalach | Maximum heart rate achieved | Numerical | integer>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer>Oldpeak | ST depression induced by exercise relative to rest | Numerical | float>Slope | The slope of the peak exercise ST segment | Numerical | integer>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
Lab Task 1: Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null object
target 303 non-null int64
dtypes: float64(1), int64(12), object(1)
memory usage: 33.3+ KB
###Markdown
Split the dataframe into train, validation, and testThe dataset we downloaded was a single CSV file. As a best practice, complete the TODO below by splitting it into train, validation, and test sets.
###Code
# TODO 1a
# TODO: Your code goes here
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
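One possible completion of TODO 1a, shown as a hedged sketch that mirrors the solution notebook earlier in this document (split off 20% of the rows for test, then 20% of the remainder for validation):
# Possible completion of TODO 1a (mirrors the solution shown earlier)
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)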
###Markdown
Lab Task 2: Create an input pipeline using tf.dataNext, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab. Complete the `TODO`s in the cells below using the `df_to_dataset` function.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = # TODO 2a: Your code goes here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
# TODO 2b
train_ds = # Your code goes here
val_ds = # Your code goes here
test_ds = # Your code goes here
###Output
_____no_output_____
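One possible completion of TODOs 2a and 2b, shown as a hedged sketch that mirrors the solution notebook earlier in this document:
# TODO 2a (inside df_to_dataset): build the dataset from the feature dict and labels
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
# TODO 2b: create the train, validation, and test input pipelines
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)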
###Markdown
Understand the input pipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch)
###Output
Every feature: ['ca', 'thal', 'trestbps', 'restecg', 'oldpeak', 'exang', 'sex', 'age', 'slope', 'chol', 'fbs', 'thalach', 'cp']
A batch of ages: tf.Tensor([49 68 41 51 63], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
Lab Task 3: Demonstrate several types of feature columnTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columnsThe output of a feature column becomes the input to the model. A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
tf.feature_column.numeric_column
print(age)
###Output
NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
###Markdown
Let's have a look at the output: key='age'A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. shape=(1,)In the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. "Age" is a "vector" or "rank-1" tensor, which is like a list of values. A vector has 1-axis, thus the shape will always look like this: shape=(3,), where 3 is a scalar (or single number) and with 1-axis. default_value=NoneA single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape. dtype=tf.float32defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type. normalizer_fn=NoneIf not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. Normalizer function takes the input Tensor as its argument, and returns the output Tensor. (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of Tensorflow transformations.
###Code
demo(age)
###Output
WARNING:tensorflow:Layer dense_features_22 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[60.]
[58.]
[55.]
[54.]
[51.]]
###Markdown
Bucketized columnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(____) # TODO 3a: Replace the blanks with a correct value
###Output
WARNING:tensorflow:Layer dense_features_23 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
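One possible completion of TODO 3a, a hedged sketch that mirrors the solution notebook earlier in this document: pass the bucketized column to the demo utility.
demo(age_buckets)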
###Markdown
Categorical columnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
###Code
thal = tf.feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:Layer dense_features_24 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]]
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grow large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
WARNING:tensorflow:Layer dense_features_25 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]]
###Markdown
Hashed feature columnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
###Code
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:Layer dense_features_26 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed feature columnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Choose which columns to useWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. We have selected a few columns to train our model below arbitrarily.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
How to Input Feature Columns to a Keras ModelNow that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 4ms/step - loss: 0.4773 - accuracy: 0.7705
Accuracy 0.7704918
###Markdown
Visualize the model loss curveNext, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the train (blue) and validation (orange) sets.
###Code
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'accuracy'])
###Output
_____no_output_____
###Markdown
Introduction to Feature Columns **Learning Objectives**1. Load a CSV file using [Pandas](https://pandas.pydata.org/)2. Create an input pipeline using tf.data3. Create multiple types of feature columns Introduction In this notebook, you classify structured data (e.g. tabular data in a CSV file) using [feature columns](https://www.tensorflow.org/guide/feature_columns). Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use [Keras](https://www.tensorflow.org/guide/keras) to define the model.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/feat.cols_tf.data.ipynb). The DatasetWe will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.>Column| Description| Feature Type | Data Type>------------|--------------------|----------------------|----------------->Age | Age in years | Numerical | integer>Sex | (1 = male; 0 = female) | Categorical | integer>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer>Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer>Chol | Serum cholestoral in mg/dl | Numerical | integer>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer>Thalach | Maximum heart rate achieved | Numerical | integer>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer>Oldpeak | ST depression induced by exercise relative to rest | Numerical | float>Slope | The slope of the peak exercise ST segment | Numerical | integer>CA | Number of major vessels (0-3) colored by flourosopy | Numerical | integer>Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
Lab Task 1: Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age 303 non-null int64
sex 303 non-null int64
cp 303 non-null int64
trestbps 303 non-null int64
chol 303 non-null int64
fbs 303 non-null int64
restecg 303 non-null int64
thalach 303 non-null int64
exang 303 non-null int64
oldpeak 303 non-null float64
slope 303 non-null int64
ca 303 non-null int64
thal 303 non-null object
target 303 non-null int64
dtypes: float64(1), int64(12), object(1)
memory usage: 33.3+ KB
###Markdown
Split the dataframe into train, validation, and testThe dataset we downloaded was a single CSV file. As a best practice, complete the TODO below by splitting it into train, validation, and test sets.
###Code
# TODO 1a
# TODO: Your code goes here
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
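One possible completion of TODO 1a, shown as a hedged sketch that mirrors the solution notebook earlier in this document (split off 20% of the rows for test, then 20% of the remainder for validation):
# Possible completion of TODO 1a (mirrors the solution shown earlier)
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)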
###Markdown
Lab Task 2: Create an input pipeline using tf.dataNext, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab. Complete the `TODO`s in the cells below using the `df_to_dataset` function.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = # TODO 2a: Your code goes here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch size is used for demonstration purposes
# TODO 2b
train_ds = # Your code goes here
val_ds = # Your code goes here
test_ds = # Your code goes here
###Output
_____no_output_____
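One possible completion of TODOs 2a and 2b, shown as a hedged sketch that mirrors the solution notebook earlier in this document:
# TODO 2a (inside df_to_dataset): build the dataset from the feature dict and labels
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
# TODO 2b: create the train, validation, and test input pipelines
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)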
###Markdown
Understand the input pipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch)
###Output
Every feature: ['ca', 'thal', 'trestbps', 'restecg', 'oldpeak', 'exang', 'sex', 'age', 'slope', 'chol', 'fbs', 'thalach', 'cp']
A batch of ages: tf.Tensor([49 68 41 51 63], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
Lab Task 3: Demonstrate several types of feature columnTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columnsThe output of a feature column becomes the input to the model. A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
tf.feature_column.numeric_column
print(age)
###Output
NumericColumn(key='age', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
###Markdown
Let's have a look at the output: key='age'A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. shape=(1,)In the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. "Age" is a "vector" or "rank-1" tensor, which is like a list of values. A vector has 1-axis, thus the shape will always look like this: shape=(3,), where 3 is a scalar (or single number) and with 1-axis. default_value=NoneA single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape. dtype=tf.float32defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type. normalizer_fn=NoneIf not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. Normalizer function takes the input Tensor as its argument, and returns the output Tensor. (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of Tensorflow transformations.
###Code
demo(age)
###Output
WARNING:tensorflow:Layer dense_features_22 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[60.]
[58.]
[55.]
[54.]
[51.]]
###Markdown
Bucketized columnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(____) # TODO 3a: Replace the blanks with a correct value
###Output
WARNING:tensorflow:Layer dense_features_23 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
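One possible completion of TODO 3a, a hedged sketch that mirrors the solution notebook earlier in this document: pass the bucketized column to the demo utility.
demo(age_buckets)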
###Markdown
Categorical columnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
###Code
thal = tf.feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:Layer dense_features_24 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]]
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grow large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
WARNING:tensorflow:Layer dense_features_25 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[ 0.26216975 -0.66194284 0.33328214 -0.09756625 0.20408471 0.57926923
-0.07685163 0.4386801 ]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]
[-0.24602154 0.0877578 0.07975551 0.34634778 0.2708743 -0.6707659
-0.15825593 -0.08179379]]
###Markdown
Hashed feature columnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.
###Code
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:Layer dense_features_26 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed feature columnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
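###Markdown
For intuition, here is a rough sketch of the idea behind a feature cross (not the exact internals of `crossed_column`): join the bucket label and the categorical value into a single key, then hash that combined key into a fixed number of buckets. The age-bucket label string below is a made-up example.
###Code
import tensorflow as tf

# Sketch: combine a hypothetical age-bucket label with a thal value, then hash the pair
age_bucket_label = tf.constant(['40_45'])
thal_value = tf.constant(['reversible'])
crossed_key = tf.strings.join([age_bucket_label, thal_value], separator='_X_')
print(tf.strings.to_hash_bucket_fast(crossed_key, num_buckets=1000).numpy())
###Output
_____no_output_____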
###Markdown
Choose which columns to use We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. Below, we have arbitrarily selected a few columns to train our model. Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric cols
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
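###Markdown
As a quick sanity check (a sketch; the `name` property is assumed to exist on these feature column objects, so we guard with `getattr`), we can list what we just assembled.
###Code
# Sketch: how many feature columns did we define, and what are they called?
print(len(feature_columns))
for col in feature_columns:
    print(type(col).__name__, getattr(col, 'name', '<unnamed>'))
###Output
_____no_output_____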
###Markdown
How to Input Feature Columns to a Keras Model Now that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
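###Markdown
Before training, it can help to see what this layer produces. The sketch below assumes the `train_ds` input pipeline defined earlier in this notebook and simply applies the layer to one batch of raw features.
###Code
# Sketch: DenseFeatures turns a dict of raw feature tensors into one dense float tensor
example_batch = next(iter(train_ds))[0]    # (features, labels) -> features dict
print(feature_layer(example_batch).shape)  # (batch_size, total encoded width)
###Output
_____no_output_____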
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 4ms/step - loss: 0.4773 - accuracy: 0.7705
Accuracy 0.7704918
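###Markdown
As a follow-up sketch: because the final `Dense` layer outputs logits (we compiled with `from_logits=True`), a sigmoid converts the raw model outputs into probabilities.
###Code
# Sketch: turn logits into probabilities for the test set
probs = tf.nn.sigmoid(model.predict(test_ds))
print(probs.numpy()[:5])
###Output
_____no_output_____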
###Markdown
Visualize the model loss curve Next, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the training (blue) and validation (orange) sets.
###Code
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'accuracy'])
###Output
_____no_output_____ |
positions-data.ipynb | ###Markdown
Positions that make up the dataset:
###Code
import numpy as np
from pandas import Series,DataFrame
import pandas as pd
df = pd.read_csv('NBA_Stats.txt', sep='\t')
df.iloc[:5, -2:]  # .ix has been removed from modern pandas; preview the first rows of the last two columns
df.columns
df.columns = ['Position', 'Player']
df[:5]
df.Position.unique()
df.loc[df['Position'] == 'F'][:10]
df.loc[df['Position'] == 'PG'][:10]
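# A quick follow-up sketch: how many players are listed at each position?
df['Position'].value_counts()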
###Output
_____no_output_____ |
Analysis/3_Cell-cell_comm_analysis/S3_extract_interactions.ipynb | ###Markdown
Cell-cell communication analysis Extracting all interactions from the scRNA-seq reference dataset
###Code
%%bash
# takes some time, needs about 20 GB of RAM
cellphonedb method analysis \
./cellphonedb_meta.tsv \
./adata_for_cellphone.h5ad \
--database ./database/cellphonedb_user_2021-05-02-15_16.db \
--counts-data hgnc_symbol \
--output-path ./out/ \
--threshold 0 > cellphone_log.txt
###Output
/opt/conda/lib/python3.8/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.cluster.k_means_ module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.cluster. Anything that cannot be imported from sklearn.cluster is now part of the private API.
warnings.warn(message, FutureWarning)
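###Markdown
To sanity-check the run, we can peek at the results. This is a minimal sketch that assumes CellPhoneDB wrote its usual tab-separated tables (e.g. `means.txt`) into the `./out/` directory given above; adjust the file name to whatever the run actually produced.
###Code
import pandas as pd

# Sketch: load one of the output tables and inspect its shape
means = pd.read_csv('./out/means.txt', sep='\t')
print(means.shape)
means.head()
###Output
_____no_output_____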
|
02. Intro to pandas/05. Five-Step Process for Data Exploration.ipynb | ###Markdown
Five-Step Process for Data Exploration Major issues arise for beginners when too many lines of code are written in a single cell of a notebook. It's important to get feedback on every single line of code that you write and verify that it is in fact correct. Only once you have verified the result should you move on to the next line of code. To help increase your ability to do data exploration in Jupyter Notebooks, I recommend the following five-step process: 1. Write and execute a single line of code to explore your data. 2. Verify that this line of code works by inspecting the output. 3. Assign the result to a variable. 4. Within the same cell, in a second line, output the head of the DataFrame or Series. 5. Continue to the next cell. Do not add more lines of code to the cell. Apply to every part of the analysis You can apply this five-step process to every part of your data analysis. Let's begin by reading in the bikes dataset and applying the five-step process for setting the index of our DataFrame as the `trip_id` column.
###Code
import pandas as pd
bikes = pd.read_csv('../data/bikes.csv')
bikes.head(3)
###Output
_____no_output_____
###Markdown
Step 1: Write and execute a single line of code to explore your data In this step, we call the `set_index` method to set the `trip_id` column as the index.
###Code
bikes.set_index('trip_id').head(3)
###Output
_____no_output_____
###Markdown
Step 2: Verify that this line of code works by inspecting the output Looking above, the output appears to be correct. The `trip_id` column has been set as the index and is no longer a column. Step 3: Assign the result to a variable You would normally do this step in the same cell, but for this demonstration, we will place it in the cell below.
###Code
bikes2 = bikes.set_index('trip_id')
###Output
_____no_output_____
###Markdown
Step 4: Within the same cell, in a second line, output the head of the DataFrame or Series Again, all these steps would be combined in the same cell.
###Code
bikes2.head(3)
###Output
_____no_output_____
###Markdown
Step 5: Continue to the next cell. Do not add more lines of code to the cell It is tempting to do more analysis in a single cell. I advise against doing so when you are a beginner. By limiting your analysis to a single main line of code per cell, and outputting that result, you can easily trace your work from one step to the next. Most lines of code in a notebook will apply some operation to the data. It is vital that you can see exactly what this operation is doing. If you put multiple lines of code in a single cell, you lose track of what is happening and can't easily determine the veracity of each operation. All steps in one cell The five-step process was shown above one step at a time in different cells. When you actually explore data with this process, you would complete it in a single cell and end up with the following result.
###Code
bikes2 = bikes.set_index('trip_id')
bikes2.head(3)
###Output
_____no_output_____
###Markdown
More examples Let's see a more complex example of the five-step process. Let's find the `from_station_name` that has the longest average trip duration. This example will be completed with two rounds of the five-step process. First we will find the average trip duration for each station and then we will sort it. This example uses the `groupby` method which is covered in the **Grouping Data** part of the book.
###Code
avg_td = bikes.groupby('from_station_name').agg({'tripduration':'mean'})
avg_td.head(3)
###Output
_____no_output_____
###Markdown
After grouping, we can sort from greatest to least.
###Code
top_stations = avg_td.sort_values('tripduration', ascending=False)
top_stations.head(3)
###Output
_____no_output_____
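###Markdown
One more round of the five-step process (a quick sketch): pull out just the station with the single longest average trip duration using `idxmax`.
###Code
longest_station = avg_td['tripduration'].idxmax()
longest_station
###Output
_____no_output_____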
###Markdown
While it is possible to complete this exercise in a single cell, I recommend executing only a single main line of code that explores the data. No strict requirement for one line of codeThe above examples each had a single main line of code followed by outputting the head of the DataFrame. Often times there will be a few more simple lines of code that can be written in the same cell. You should not strictly adhere to writing a single line of code, but instead, think about keeping the amount of code written in a single cell to a minimum.For instance, the following block is used to select a subset of the data with three lines of code. The first is simple and creates a list of column names as strings. This is an instance where multiple lines of code are easily interpreted.
###Code
cols = ['gender', 'tripduration']
bikes_gt = bikes[cols]
bikes_gt.head(3)
###Output
_____no_output_____ |
jupyter_notebooks/Introduction to Forwarding Change Validation.ipynb | ###Markdown
Introduction to Forwarding Change Validation Network engineers frequently have to make changes to a network that can impact forwarding behavior: add new routes, open or close flows, route traffic through different devices, etc. These changes are often hard to get right and hard to validate. This notebook will show how Batfish can help validate changes to network forwarding _before_ you deploy them. We will do this using Batfish's *reachability* and *differentialReachability* questions that can provide guarantees that our changes are correct and have no unintended side-effects. As we will see, these analyses are a powerful way to understand, test, and validate changes to the network. Check out a video demo of this notebook [here](https://youtu.be/Yje70Q8R79w).
###Code
# Import packages
%run startup.py
bf = Session(host="localhost")
###Output
_____no_output_____
###Markdown
In this notebook we will use the network shown in the diagram below. You can view and download the device configuration files [here](https://github.com/batfish/pybatfish/tree/master/jupyter_notebooks/networks/forwarding-change-validation/base).![example-network](https://raw.githubusercontent.com/batfish/pybatfish/master/jupyter_notebooks/networks/forwarding-change-validation/differential%20forwarding%20network.png) Change Scenario 1: Costing out a core routerThe network is overprovisioned with failover redundancy for the core routers. All traffic is normally routed through `core1` but will automatically switch to `core2` in case of a failure or during maintenance. In this scenario, we want to service `core1` and thus want to shift traffic to `core2`. We'll implement a change to cost out `core1`, and verify that it does not affect end-to-end reachability. In general, we care about three classes of end-to-end traffic: external-to-host, host-to-external, and host-to-host. For simplicity, we focus on the external-to-host traffic in this notebook but similar queries can cover other classes. Step 1: Test current behavior Before beginning, let's check that the network is working as expected (i.e., routing through `core1`). First we load our snapshot into Batfish.
###Code
NETWORK_NAME = "forwarding-change-validation"
BASE_NAME = "base"
BASE_PATH = "networks/forwarding-change-validation/base"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(BASE_PATH, name=BASE_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
Batfish will automatically compute the RIBs and FIBs from the configuration files in the snapshot, allowing us to test the forwarding behavior offline. Let's do that now, by using the `traceroute` question to see how external-to-host traffic is routed. The parameter `startLocation="@enter(/border/[GigabitEthernet0/0])"` says to start the trace entering the external border router interfaces. The parameter `dstIps="/host/)"` indicates that the flow should be addressed to one of the internal hosts. These two parameters are using [specifier grammar](https://github.com/batfish/batfish/blob/master/questions/Parameters.md).
###Code
answer = bf.q.traceroute(
startLocation="@enter(/border/[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="/host/")
).answer(snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
The `traceroute` results include a flow from each border router, and all possible paths of each flow. As we can see in the `Traces` column, both flows are routed through `core1`. For more detail on `traceroute` question, see the notebook [Introduction to Forwarding Analysis](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Introduction%20to%20Forwarding%20Analysis.ipynb). Next, we'll cost out `core1` and cause all traffic to start being routed through `core2`. Below you can see the configuration changes we're going to make. We add the command `ip ospf cost 500` to each interface on `core1`, increasing its OSPF cost from the previous value of `1`. This will cause the lower-cost routes through `core2` to be preferred.```$ diff -r base/ change1/diff -r base/configs/core1.cfg change1/configs/core1.cfg68c68< ip ospf cost 1---> ip ospf cost 50073c73< ip ospf cost 1---> ip ospf cost 50078c78< ip ospf cost 1---> ip ospf cost 50083c83< ip ospf cost 1---> ip ospf cost 500```We implemented this change offline in a new snapshot, and will validate that the change doesn't affect reachability. Having done so, we will be able to push the change to the network with complete confidence.We'll validate the change using a two-step process, verifying that it has the intended effect, and that it causes no collateral damage. More specifically, the change must:1. Ensure that no traffic is routed through `core1`.1. Have no effect on external-to-host traffic. Step 2: Ensure that no traffic is routed through `core1` The following commands will load our change snapshot into batfish:
###Code
CHANGE1_NAME = "change"
CHANGE1_PATH = "networks/forwarding-change-validation/change1"
bf.init_snapshot(CHANGE1_PATH, name=CHANGE1_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
To verify that **no** outside-to-host traffic is routed through `core1`, we need to search for counterexamples: outside-to-host traffic that *is* routed through `core1`. If no counterexamples are found, we have *proven* that `core1` is never used. We do this by running the `reachability` question with the `transitLocations` parameter to search for flows that transit `core1`. We set the `actions` parameter to `SUCCESS,FAILURE` to include dropped flows as well as those that are successfully delivered.
###Code
# Search for any traffic routed through core1
answer = bf.q.reachability(
pathConstraints=PathConstraints(
startLocation="@enter(/border/[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="/host/"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good! Since we found no counterexamples, we are guaranteed that no outside-to-host traffic will be routed through `core1`. This verifies the first requirement of the change. Having done so, let's check our second requirement -- that end-to-end reachability is completely unchanged. Step 3: Outside-to-host traffic is unaffected. In this step, we'll compare the forwarding behavior of the candidate change snapshot against the original using the `differentialReachability` question. In particular, we'll use the question to search for flows that are successfully delivered in one snapshot but not the other. If the change is correct, no such flows will be found, because costing out `core1` should have no effect on end-to-end reachability.
###Code
answer = bf.q.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="/host/")
).answer(
snapshot=CHANGE1_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As we can see, moving traffic from `core1` to `core2` does affect reachability: some traffic that was being delivered in the reference snapshot (before the change) is being *null routed* in the change snapshot (after the change). This means if we deploy the change now, there will be a loss of connectivity. Fortunately the `differentialReachability` question was able to identify that bug before we deployed the change. The results include an example flow from each start location that has traffic affected by the change. Each flow comes with detailed traces of all the paths it can take through the network, which helps us diagnose the problem: `core2` has a rogue static route for `2.180.0.0/24` that should have been removed. A similar problem could occur with rogue ACLs along the backup path (which Batfish will find as well) Step 2 (again): Ensure that no traffic is routed through core1 We remove the bad static route and load the updated change snapshot into batfish. Then we perform the same validation steps again.
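Before re-running the validation, it is worth confirming the culprit directly. The sketch below (the `nodes`, `network`, and `protocols` parameters are assumed to match pybatfish's `routes` question) lists `core2`'s static routes covering `2.180.0.0/24` in the change snapshot:
###Code
# Sketch: confirm the rogue static route on core2 in the (unfixed) change snapshot
show(bf.q.routes(nodes="core2", network="2.180.0.0/24", protocols="static").answer(snapshot=CHANGE1_NAME).frame())
###Output
_____no_output_____
###Markdown
With the culprit confirmed, we reload the corrected snapshot and repeat the checks.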
###Code
CHANGE1_FIXED_NAME = "change-fixed"
CHANGE1_FIXED_PATH = "networks/forwarding-change-validation/change1-fixed"
bf.init_snapshot(CHANGE1_FIXED_PATH, name=CHANGE1_FIXED_NAME, overwrite=True)
# Requirement 1: No traffic is routed through core1.
answer = bf.q.reachability(
pathConstraints=PathConstraints(
startLocation="@enter(/border/[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="/host/"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_FIXED_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Again, we find no traffic being routed through `core1`, so it is still correctly costed-out. Step 3 (again): Outside-to-host traffic is unaffected. We now move on to check that after removing the bad null route, costing out `core1` has no impact on the reachability matrix:
###Code
# Requirement 2: Outside-to-host traffic is unaffected.
answer = bf.q.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="/host/")
).answer(
snapshot=CHANGE1_FIXED_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Success! We have now verified that our change will correctly cost-out `core1` without affecting reachability. We are ready to deploy the change and do the maintenance work for `core1` with complete confidence. Summary Let's recap the steps we took to verify this change:1. First, we verified that the primary intent of the change is achieved: traffic is moved from `core1` to `core2`. We used the `reachability` query to search *all* outside-to-host flows in the network and verify that none will transit `core1` after the change.1. Second, we verified that moving the traffic did not affect reachability. For this, we used the `differentialReachability` query to compare the forwarding behavior of two snapshots. This verified that *no flow* is affected by the change. Change Scenario 2: Validating the end-to-end impact of an ACL change In this second part of this notebook, we'll validate another change to the same network. Unlike the previous scenario, this time we do want to alter end-to-end reachability, and we will verify that our change has the intended effect. As before, we will also verify that it has no *unintended* effects. In this scenario, we have developed and tested a new web service on host `host-www`, and are now ready to open it to HTTP traffic from the outside world. The service is running on the hosts behind `leaf1`, which has an ACL in place that filters traffic to each host. The change we'll make and validate will open traffic to the host subnet in the border router ACLs that filter traffic entering the network. Step 1: Test current behavior We start by using the `traceroute` question to verify that `host-www` is not accessible via HTTP from outside the network. The parameter `dstIps=ofLocation(host-www)` tells traceroute to pick any IP belonging to `host-www` as the destination IP.
###Code
answer = bf.q.traceroute(
startLocation="@enter(/border/[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="host-www", applications="HTTP")
).answer(snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As you can see, the flow is dropped by the ingress ACL `OUTSIDE_TO_INSIDE` on each border router. This is where we'll make our change. The following snippet shows the original ACL definition: ```ip access-list extended OUTSIDE_TO_INSIDE permit tcp any 2.128.0.0 0.0.1.255 eq ssh permit udp any 2.0.0.0 0.255.255.255 deny ip any any``` The first line permits SSH traffic to host subnet. We'll create a similar rule for HTTP, since the `leaf1` already does the required per-host filtering. Here's the updated version of the ACL: ```ip access-list extended OUTSIDE_TO_INSIDE permit tcp any 2.128.0.0 0.0.1.255 eq ssh permit tcp any 2.128.0.0 0.0.1.255 eq www permit udp any 2.0.0.0 0.255.255.255 deny ip any any``` Next we load the snapshot with our change into batfish so we can validate it before deployment.
###Code
CHANGE2_NAME = "change2"
CHANGE2_PATH = "networks/forwarding-change-validation/change2"
bf.init_snapshot(CHANGE2_PATH, name=CHANGE2_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
We can test our change by running the above `traceroute` command on the change snapshot:
###Code
answer = bf.q.traceroute(
startLocation="@enter(/border/[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="host-www", applications="HTTP")
).answer(snapshot=CHANGE2_NAME)
show(answer.frame())
###Output
_____no_output_____
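###Markdown
Before looking at end-to-end behavior, we can also spot-check the updated ACL itself. The sketch below (parameters assumed to match pybatfish's `searchFilters` question) searches the border routers' `OUTSIDE_TO_INSIDE` filter for any HTTP flow to `host-www` that is still denied; an empty answer means the filter-level change is in place.
###Code
# Sketch: search the updated border ACLs for HTTP-to-host-www flows that are still denied
answer = bf.q.searchFilters(
    nodes="/border/",
    filters="OUTSIDE_TO_INSIDE",
    headers=HeaderConstraints(dstIps="host-www", applications="HTTP"),
    action="deny"
).answer(snapshot=CHANGE2_NAME)
show(answer.frame())
###Output
_____no_output_____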
###Markdown
Good. We now see that HTTP traffic can reach `host-www` from outside the network. We may be tempted to call it good and ship the change. However, batfish gives us the ability to do much more to ensure complete correctness. Following the steps outlined in the [Provably Safe ACL and Firewall Changes](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Provably%20Safe%20ACL%20and%20Firewall%20Changes.ipynb) notebook, we can independently validate the change to each border router ACL. We omit those steps from this notebook, and proceed to validating the end-to-end network behavior. As before, end-to-end validation has two requirements:1. The change has the intended effect: HTTP traffic from outside the network can reach `host-www`.1. The change has no unintended effects: No other traffic is affected. Step 2: External HTTP traffic can now reach `host-www` The `traceroute` results above show that *some* HTTP traffic can now reach `host-www` from outside the network. However, this doesn't ensure that *all* such traffic can reach `host-www`. For that, we use the `reachability` query to search for counterexamples of the requirement: HTTP flows from the outside that *cannot* reach `host-www`.
###Code
answer = bf.q.reachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(
dstIps="host-www",
srcIps="0.0.0.0/0",
applications="HTTP"),
actions="FAILURE"
).answer(snapshot=CHANGE2_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good! Since batfish's comprehensive search found no counterexamples, we are guaranteed that none exist. In other words, the requirement is met. Step 3: No unintended consequences Next, we check the second requirement -- that the change has no unintended effects. As before, we'll use the `differentialReachability` question to compare the reachability of our change snapshot against the original network. We search all flows entering the border routers *that are not* HTTP traffic addressed to `host-www`. The `invertSearch=True` parameter causes batfish to search outside the specified header space instead of within it.
###Code
answer = bf.q.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="host-www", applications="HTTP"),
invertSearch=True
).answer(snapshot=CHANGE2_NAME, reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Unfortunately, our change had a broader impact than we intended. It turns out that `leaf1` was not properly filtering traffic to `host-db`: it permits HTTP to both hosts, rather than just `host-www`. Step 2 (again): Verify HTTP traffic can now reach `host-www` We fix the buggy ACL on `leaf1`, load the fixed change snapshot into batfish and begin the validation process again. Here is the difference relative the first change attempt: ```$ diff -r change2/ change2-fixed/diff -r change2/configs/leaf1.cfg change2-fixed/configs/leaf1.cfg119c119< permit tcp any 2.128.0.0 0.0.255.255 eq www---> permit tcp any 2.128.1.0 0.0.0.255 eq www```
###Code
CHANGE2_FIXED_NAME = "change2-fixed"
CHANGE2_FIXED_PATH = "networks/forwarding-change-validation/change2-fixed"
bf.init_snapshot(CHANGE2_FIXED_PATH, name=CHANGE2_FIXED_NAME, overwrite=True)
answer = bf.q.reachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="host-www", applications="HTTP"),
actions="FAILURE"
).answer(snapshot=CHANGE2_FIXED_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As before, the requirement is met: since we did not find any dropped HTTP flows to `host-www`, we are guaranteed that all will be delivered successfully. Our first requirement is still met. Step 3 (again): No unintended consequences
###Code
answer = bf.q.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="host-www", applications="HTTP"),
invertSearch=True
).answer(snapshot=CHANGE2_FIXED_NAME, reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Introduction to Forwarding Change Validation Network engineers frequently have to make changes to a network that can impact forwarding behavior: add new routes, open or close flows, route traffic through different devices, etc. These changes are often hard to get right and hard to validate. This notebook will show how Batfish can help validate changes to network forwarding _before_ you deploy them. We will do this using Batfish's *reachability* and *differentialReachability* questions that can provide guarantees that our changes are correct and have no unintended side-effects. As we will see, these analyses are a powerful way to understand, test, and validate changes to the network. Check out a video demo of this notebook [here](https://youtu.be/Yje70Q8R79w).
###Code
# Import packages and load questions
%run startup.py
###Output
_____no_output_____
###Markdown
In this notebook we will use the network shown in the diagram below. You can view and download the device configuration files [here](https://github.com/batfish/pybatfish/tree/master/jupyter_notebooks/networks/forwarding-change-validation/base).![example-network](https://raw.githubusercontent.com/batfish/pybatfish/master/jupyter_notebooks/networks/forwarding-change-validation/differential%20forwarding%20network.png) Change Scenario 1: Costing out a core routerThe network is overprovisioned with failover redundancy for the core routers. All traffic is normally routed through `core1` but will automatically switch to `core2` in case of a failure or during maintenance. In this scenario, we want to service `core1` and thus want to shift traffic to `core2`. We'll implement a change to cost out `core1`, and verify that it does not affect end-to-end reachability. In general, we care about three classes of end-to-end traffic: external-to-host, host-to-external, and host-to-host. For simplicity, we focus on the external-to-host traffic in this notebook but similar queries can cover other classes. Step 1: Test current behavior Before beginning, let's check that the network is working as expected (i.e., routing through `core1`). First we load our snapshot into Batfish.
###Code
NETWORK_NAME = "forwarding-change-validation"
BASE_NAME = "base"
BASE_PATH = "networks/forwarding-change-validation/base"
bf_set_network(NETWORK_NAME)
bf_init_snapshot(BASE_PATH, name=BASE_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
Batfish will automatically compute the RIBs and FIBs from the configuration files in the snapshot, allowing us to test the forwarding behavior offline. Let's do that now, by using the `traceroute` question to see how external-to-host traffic is routed. The parameter `startLocation="enter(border.*[GigabitEthernet0/0])"` says to start the trace entering the external border router interfaces. The parameter `dstIps="ofLocation(host.*)"` indicates that the flow should be addressed to one of the internal hosts.
###Code
answer = bfq.traceroute(
startLocation="enter(border.*[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="ofLocation(host.*)")
).answer(snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
The `traceroute` results include a flow from each border router, and all possible paths of each flow. As we can see in the `Traces` column, both flows are routed through `core1`. For more detail on `traceroute` question, see the notebook [Introduction to Forwarding Analysis](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Introduction%20to%20Forwarding%20Analysis.ipynb). Next, we'll cost out `core1` and cause all traffic to start being routed through `core2`. Below you can see the configuration changes we're going to make. We add the command `ip ospf cost 500` to each interface on `core1`, increasing its OSPF cost from the previous value of `1`. This will cause the lower-cost routes through `core2` to be preferred.```$ diff -r base/ change1/diff -r base/configs/core1.cfg change1/configs/core1.cfg68c68< ip ospf cost 1---> ip ospf cost 50073c73< ip ospf cost 1---> ip ospf cost 50078c78< ip ospf cost 1---> ip ospf cost 50083c83< ip ospf cost 1---> ip ospf cost 500```We implemented this change offline in a new snapshot, and will validate that the change doesn't affect reachability. Having done so, we will be able to push the change to the network with complete confidence.We'll validate the change using a two-step process, verifying that it has the intended effect, and that it causes no collateral damage. More specifically, the change must:1. Ensure that no traffic is routed through `core1`.1. Have no effect on external-to-host traffic. Step 2: Ensure that no traffic is routed through `core1` The following commands will load our change snapshot into batfish:
###Code
CHANGE1_NAME = "change"
CHANGE1_PATH = "networks/forwarding-change-validation/change1"
bf_init_snapshot(CHANGE1_PATH, name=CHANGE1_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
To verify that **no** outside-to-host traffic is routed through `core1`, we need to search for counterexamples: outside-to-host traffic that *is* routed through `core1`. If no counterexamples are found, we have *proven* that `core1` is never used. We do this by running the `reachability` question with the `transitLocations` parameter to search for flows that transit `core1`. We set the `actions` parameter to `SUCCESS,FAILURE` to include dropped flows as well as those that are successfully delivered.
###Code
# Search for any traffic routed through core1
answer = bfq.reachability(
pathConstraints=PathConstraints(
startLocation="enter(border.*[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="ofLocation(host.*)"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good! Since we found no counterexamples, we are guaranteed that no outside-to-host traffic will be routed through `core1`. This verifies the first requirement of the change. Having done so, let's check our second requirement -- that end-to-end reachability is completely unchanged. Step 3: Outside-to-host traffic is unaffected. In this step, we'll compare the forwarding behavior of the candidate change snapshot against the original using the `differentialReachability` question. In particular, we'll use the question to search for flows that are successfully delivered in one snapshot but not the other. If the change is correct, no such flows will be found, because costing out `core1` should have no effect on end-to-end reachability.
###Code
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="enter(border.*[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="ofLocation(host.*)")
).answer(
snapshot=CHANGE1_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As we can see, moving traffic from `core1` to `core2` does affect reachability: some traffic that was being delivered in the reference snapshot (before the change) is being *null routed* in the change snapshot (after the change). This means if we deploy the change now, there will be a loss of connectivity. Fortunately the `differentialReachability` question was able to identify that bug before we deployed the change. The results include an example flow from each start location that has traffic affected by the change. Each flow comes with detailed traces of all the paths it can take through the network, which helps us diagnose the problem: `core2` has a rogue static route for `2.180.0.0/24` that should have been removed. A similar problem could occur with rogue ACLs along the backup path (which Batfish will find as well) Step 2 (again): Ensure that no traffic is routed through core1 We remove the bad static route and load the updated change snapshot into batfish. Then we perform the same validation steps again.
###Code
CHANGE1_FIXED_NAME = "change-fixed"
CHANGE1_FIXED_PATH = "networks/forwarding-change-validation/change1-fixed"
bf_init_snapshot(CHANGE1_FIXED_PATH, name=CHANGE1_FIXED_NAME, overwrite=True)
# Requirement 1: No traffic is routed through core1.
answer = bfq.reachability(
pathConstraints=PathConstraints(
startLocation="enter(border.*[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="ofLocation(host.*)"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_FIXED_NAME)
show(answer.frame())
###Output
_____no_output_____
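###Markdown
If this check runs as part of an automated change pipeline, it can be turned into a hard assertion. A minimal sketch, reusing the `answer` object from the cell above:
###Code
# Sketch: fail loudly if any outside-to-host flow still transits core1
flows_via_core1 = answer.frame()
assert len(flows_via_core1) == 0, "unexpected flows still transit core1"
###Output
_____no_output_____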
###Markdown
Again, we find no traffic being routed through `core1`, so it is still correctly costed-out. Step 3 (again): Outside-to-host traffic is unaffected. We now move on to check that after removing the bad null route, costing out `core1` has no impact on the reachability matrix:
###Code
# Requirement 2: Outside-to-host traffic is unaffected.
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="enter(border.*[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="ofLocation(host.*)")
).answer(
snapshot=CHANGE1_FIXED_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Success! We have now verified that our change will correctly cost-out `core1` without affecting reachability. We are ready to deploy the change and do the maintenance work for `core1` with complete confidence. Summary Let's recap the steps we took to verify this change:1. First, we verified that the primary intent of the change is achieved: traffic is moved from `core1` to `core2`. We used the `reachability` query to search *all* outside-to-host flows in the network and verify that none will transit `core1` after the change.1. Second, we verified that moving the traffic did not affect reachability. For this, we used the `differentialReachability` query to compare the forwarding behavior of two snapshots. This verified that *no flow* is affected by the change. Change Scenario 2: Validating the end-to-end impact of an ACL change In this second part of this notebook, we'll validate another change to the same network. Unlike the previous scenario, this time we do want to alter end-to-end reachability, and we will verify that our change has the intended effect. As before, we will also verify that it has no *unintended* effects. In this scenario, we have developed and tested a new web service on host `host-www`, and are now ready to open it to HTTP traffic from the outside world. The service is running on the hosts behind `leaf1`, which has an ACL in place that filters traffic to each host. The change we'll make and validate will open traffic to the host subnet in the border router ACLs that filter traffic entering the network. Step 1: Test current behavior We start by using the `traceroute` question to verify that `host-www` is not accessible via HTTP from outside the network. The parameter `dstIps=ofLocation(host-www)` tells traceroute to pick any IP belonging to `host-www` as the destination IP.
###Code
answer = bfq.traceroute(
startLocation="enter(border.*[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="ofLocation(host-www)", applications="HTTP")
).answer(snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As you can see, the flow is dropped by the ingress ACL `OUTSIDE_TO_INSIDE` on each border router. This is where we'll make our change. The following snippet shows the original ACL definition: ```ip access-list extended OUTSIDE_TO_INSIDE permit tcp any 2.128.0.0 0.0.1.255 eq ssh permit udp any 2.0.0.0 0.255.255.255 deny ip any any``` The first line permits SSH traffic to host subnet. We'll create a similar rule for HTTP, since the `leaf1` already does the required per-host filtering. Here's the updated version of the ACL: ```ip access-list extended OUTSIDE_TO_INSIDE permit tcp any 2.128.0.0 0.0.1.255 eq ssh permit tcp any 2.128.0.0 0.0.1.255 eq www permit udp any 2.0.0.0 0.255.255.255 deny ip any any``` Next we load the snapshot with our change into batfish so we can validate it before deployment.
###Code
CHANGE2_NAME = "change2"
CHANGE2_PATH = "networks/forwarding-change-validation/change2"
bf_init_snapshot(CHANGE2_PATH, name=CHANGE2_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
We can test our change by running the above `traceroute` command on the change snapshot:
###Code
answer = bfq.traceroute(
startLocation="enter(border.*[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="ofLocation(host-www)", applications="HTTP")
).answer(snapshot=CHANGE2_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good. We now see that HTTP traffic can reach `host-www` from outside the network. We may be tempted to call it good and ship the change. However, batfish gives us the ability to do much more to ensure complete correctness. Following the steps outlined in the [Provably Safe ACL and Firewall Changes](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Provably%20Safe%20ACL%20and%20Firewall%20Changes.ipynb) notebook, we can independently validate the change to each border router ACL. We omit those steps from this notebook, and proceed to validating the end-to-end network behavior. As before, end-to-end validation has two requirements:1. The change has the intended effect: HTTP traffic from outside the network can reach `host-www`.1. The change has no unintended effects: No other traffic is affected. Step 2: External HTTP traffic can now reach `host-www` The `traceroute` results above show that *some* HTTP traffic can now reach `host-www` from outside the network. However, this doesn't ensure that *all* such traffic can reach `host-www`. For that, we use the `reachability` query to search for counterexamples of the requirement: HTTP flows from the outside that *cannot* reach `host-www`.
###Code
answer = bfq.reachability(
pathConstraints=PathConstraints(startLocation="enter(border.*[GigabitEthernet0/0])"),
headers=HeaderConstraints(
dstIps="ofLocation(host-www)",
srcIps="0.0.0.0/0",
applications="HTTP"),
actions="FAILURE"
).answer(snapshot=CHANGE2_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good! Since batfish's comprehensive search found no counterexamples, we are guaranteed that none exist. In other words, the requirement is met. Step 3: No unintended consequences Next, we check the second requirement -- that the change has no unintended effects. As before, we'll use the `differentialReachability` question to compare the reachability of our change snapshot against the original network. We search all flows entering the border routers *that are not* HTTP traffic addressed to `host-www`. The `invertSearch=True` parameter causes batfish to search outside the specified header space instead of within it.
###Code
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="enter(border.*[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="ofLocation(host-www)", applications="HTTP"),
invertSearch=True
).answer(snapshot=CHANGE2_NAME, reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Unfortunately, our change had a broader impact than we intended. It turns out that `leaf1` was not properly filtering traffic to `host-db`: it permits HTTP to both hosts, rather than just `host-www`. Step 2 (again): Verify HTTP traffic can now reach `host-www` We fix the buggy ACL on `leaf1`, load the fixed change snapshot into batfish and begin the validation process again. Here is the difference relative the first change attempt: ```$ diff -r change2/ change2-fixed/diff -r change2/configs/leaf1.cfg change2-fixed/configs/leaf1.cfg119c119< permit tcp any 2.128.0.0 0.0.255.255 eq www---> permit tcp any 2.128.1.0 0.0.0.255 eq www```
###Code
CHANGE2_FIXED_NAME = "change2-fixed"
CHANGE2_FIXED_PATH = "networks/forwarding-change-validation/change2-fixed"
bf_init_snapshot(CHANGE2_FIXED_PATH, name=CHANGE2_FIXED_NAME, overwrite=True)
answer = bfq.reachability(
pathConstraints=PathConstraints(startLocation="enter(border.*[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="ofLocation(host-www)", applications="HTTP"),
actions="FAILURE"
).answer(snapshot=CHANGE2_FIXED_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As before, the requirement is met: since we did not find any dropped HTTP flows to `host-www`, we are guaranteed that all will be delivered successfully. Our first requirement is still met. Step 3 (again): No unintended consequences
###Code
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="enter(border.*[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="ofLocation(host-www)", applications="HTTP"),
invertSearch=True
).answer(snapshot=CHANGE2_FIXED_NAME, reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Introduction to Forwarding Change Validation Network engineers frequently have to make changes to a network that can impact forwarding behavior: add new routes, open or close flows, route traffic through different devices, etc. These changes are often hard to get right and hard to validate. This notebook will show how Batfish can help validate changes to network forwarding _before_ you deploy them. We will do this using Batfish's *reachability* and *differentialReachability* questions that can provide guarantees that our changes are correct and have no unintended side-effects. As we will see, these analyses are a powerful way to understand, test, and validate changes to the network. Check out a video demo of this notebook [here](https://youtu.be/Yje70Q8R79w).
###Code
# Import packages and load questions
%run startup.py
load_questions()
###Output
_____no_output_____
###Markdown
In this notebook we will use the network shown in the diagram below. You can view and download the device configuration files [here](https://github.com/batfish/pybatfish/tree/master/jupyter_notebooks/networks/forwarding-change-validation/base).![example-network](https://raw.githubusercontent.com/batfish/pybatfish/master/jupyter_notebooks/networks/forwarding-change-validation/differential%20forwarding%20network.png) Change Scenario 1: Costing out a core routerThe network is overprovisioned with failover redundancy for the core routers. All traffic is normally routed through `core1` but will automatically switch to `core2` in case of a failure or during maintenance. In this scenario, we want to service `core1` and thus want to shift traffic to `core2`. We'll implement a change to cost out `core1`, and verify that it does not affect end-to-end reachability. In general, we care about three classes of end-to-end traffic: external-to-host, host-to-external, and host-to-host. For simplicity, we focus on the external-to-host traffic in this notebook but similar queries can cover other classes. Step 1: Test current behavior Before beginning, let's check that the network is working as expected (i.e., routing through `core1`). First we load our snapshot into Batfish.
###Code
NETWORK_NAME = "forwarding-change-validation"
BASE_NAME = "base"
BASE_PATH = "networks/forwarding-change-validation/base"
bf_set_network(NETWORK_NAME)
bf_init_snapshot(BASE_PATH, name=BASE_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
Batfish will automatically compute the RIBs and FIBs from the configuration files in the snapshot, allowing us to test the forwarding behavior offline. Let's do that now, by using the `traceroute` question to see how external-to-host traffic is routed. The parameter `startLocation="@enter(/border/[GigabitEthernet0/0])"` says to start the trace entering the external border router interfaces. The parameter `dstIps="/host/)"` indicates that the flow should be addressed to one of the internal hosts. These two parameters are using [specifier grammar](https://github.com/batfish/batfish/blob/master/questions/Parameters.md).
###Code
answer = bfq.traceroute(
startLocation="@enter(/border/[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="/host/")
).answer(snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
The `traceroute` results include a flow from each border router, and all possible paths of each flow. As we can see in the `Traces` column, both flows are routed through `core1`. For more detail on `traceroute` question, see the notebook [Introduction to Forwarding Analysis](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Introduction%20to%20Forwarding%20Analysis.ipynb). Next, we'll cost out `core1` and cause all traffic to start being routed through `core2`. Below you can see the configuration changes we're going to make. We add the command `ip ospf cost 500` to each interface on `core1`, increasing its OSPF cost from the previous value of `1`. This will cause the lower-cost routes through `core2` to be preferred.```$ diff -r base/ change1/diff -r base/configs/core1.cfg change1/configs/core1.cfg68c68< ip ospf cost 1---> ip ospf cost 50073c73< ip ospf cost 1---> ip ospf cost 50078c78< ip ospf cost 1---> ip ospf cost 50083c83< ip ospf cost 1---> ip ospf cost 500```We implemented this change offline in a new snapshot, and will validate that the change doesn't affect reachability. Having done so, we will be able to push the change to the network with complete confidence.We'll validate the change using a two-step process, verifying that it has the intended effect, and that it causes no collateral damage. More specifically, the change must:1. Ensure that no traffic is routed through `core1`.1. Have no effect on external-to-host traffic. Step 2: Ensure that no traffic is routed through `core1` The following commands will load our change snapshot into batfish:
###Code
CHANGE1_NAME = "change"
CHANGE1_PATH = "networks/forwarding-change-validation/change1"
bf_init_snapshot(CHANGE1_PATH, name=CHANGE1_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
To verify that **no** outside-to-host traffic is routed through `core1`, we need to search for counterexamples: outside-to-host traffic that *is* routed through `core1`. If no counterexamples are found, we have *proven* that `core1` is never used. We do this by running the `reachability` question with the `transitLocations` parameter to search for flows that transit `core1`. We set the `actions` parameter to `SUCCESS,FAILURE` to include dropped flows as well as those that are successfully delivered.
###Code
# Search for any traffic routed through core1
answer = bfq.reachability(
pathConstraints=PathConstraints(
startLocation="@enter(/border/[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="/host/"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good! Since we found no counterexamples, we are guaranteed that no outside-to-host traffic will be routed through `core1`. This verifies the first requirement of the change. Having done so, let's check our second requirement -- that end-to-end reachability is completely unchanged. Step 3: Outside-to-host traffic is unaffected. In this step, we'll compare the forwarding behavior of the candidate change snapshot against the original using the `differentialReachability` question. In particular, we'll use the question to search for flows that are successfully delivered in one snapshot but not the other. If the change is correct, no such flows will be found, because costing out `core1` should have no effect on end-to-end reachability.
###Code
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="/host/")
).answer(
snapshot=CHANGE1_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As we can see, moving traffic from `core1` to `core2` does affect reachability: some traffic that was being delivered in the reference snapshot (before the change) is being *null routed* in the change snapshot (after the change). This means if we deploy the change now, there will be a loss of connectivity. Fortunately the `differentialReachability` question was able to identify that bug before we deployed the change. The results include an example flow from each start location that has traffic affected by the change. Each flow comes with detailed traces of all the paths it can take through the network, which helps us diagnose the problem: `core2` has a rogue static route for `2.180.0.0/24` that should have been removed. A similar problem could occur with rogue ACLs along the backup path (which Batfish will find as well) Step 2 (again): Ensure that no traffic is routed through core1 We remove the bad static route and load the updated change snapshot into batfish. Then we perform the same validation steps again.
###Code
CHANGE1_FIXED_NAME = "change-fixed"
CHANGE1_FIXED_PATH = "networks/forwarding-change-validation/change1-fixed"
bf_init_snapshot(CHANGE1_FIXED_PATH, name=CHANGE1_FIXED_NAME, overwrite=True)
# Requirement 1: No traffic is routed through core1.
answer = bfq.reachability(
pathConstraints=PathConstraints(
startLocation="@enter(/border/[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="/host/"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_FIXED_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Again, we find no traffic being routed through `core1`, so it is still correctly costed-out. Step 3 (again): Outside-to-host traffic is unaffected. We now move on to check that after removing the bad null route, costing out `core1` has no impact on the reachability matrix:
###Code
# Requirement 2: Outside-to-host traffic is unaffected.
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="/host/")
).answer(
snapshot=CHANGE1_FIXED_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Introduction to Forwarding Change Validation Network engineers frequently have to make changes to the network that can impact forwarding behavior: add new routes, open or close flows, route traffic through different devices, etc. These changes are often hard to get right and hard to validate. This notebook will show how Batfish can help validate changes to network forwarding _before_ you deploy them. We will do this using Batfish's *reachability* and *differentialReachability* questions, which can provide guarantees that our changes are correct and have no unintended side-effects. As we will see, these analyses are a powerful way to understand, test, and validate changes to the network. Check out a video demo of this notebook [here](https://youtu.be/Yje70Q8R79w).
###Code
# Import packages and load questions
%run startup.py
load_questions()
###Output
_____no_output_____
###Markdown
In this notebook we will use the network shown in the diagram below. You can view and download the device configuration files [here](https://github.com/batfish/pybatfish/tree/master/jupyter_notebooks/networks/forwarding-change-validation/base).![example-network](https://raw.githubusercontent.com/batfish/pybatfish/master/jupyter_notebooks/networks/forwarding-change-validation/differential%20forwarding%20network.png) Change Scenario 1: Costing out a core routerThe network is overprovisioned with failover redundancy for the core routers. All traffic is normally routed through `core1` but will automatically switch to `core2` in case of a failure or during maintenance. In this scenario, we want to service `core1` and thus want to shift traffic to `core2`. We'll implement a change to cost out `core1`, and verify that it does not affect end-to-end reachability. In general, we care about three classes of end-to-end traffic: external-to-host, host-to-external, and host-to-host. For simplicity, we focus on the external-to-host traffic in this notebook but similar queries can cover other classes. Step 1: Test current behavior Before beginning, let's check that the network is working as expected (i.e., routing through `core1`). First we load our snapshot into Batfish.
###Code
NETWORK_NAME = "forwarding-change-validation"
BASE_NAME = "base"
BASE_PATH = "networks/forwarding-change-validation/base"
bf_set_network(NETWORK_NAME)
bf_init_snapshot(BASE_PATH, name=BASE_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
Batfish will automatically compute the RIBs and FIBs from the configuration files in the snapshot, allowing us to test the forwarding behavior offline. Let's do that now by using the `traceroute` question to see how external-to-host traffic is routed. The parameter `startLocation="@enter(/border/[GigabitEthernet0/0])"` says to start the trace entering the external border router interfaces. The parameter `dstIps="/host/"` indicates that the flow should be addressed to one of the internal hosts. These two parameters are using [specifier grammar](https://github.com/batfish/batfish/blob/master/questions/Parameters.md).
###Code
answer = bfq.traceroute(
startLocation="@enter(/border/[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="/host/")
).answer(snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
The `traceroute` results include a flow from each border router, and all possible paths of each flow. As we can see in the `Traces` column, both flows are routed through `core1`. For more detail on the `traceroute` question, see the notebook [Introduction to Forwarding Analysis](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Introduction%20to%20Forwarding%20Analysis.ipynb). Next, we'll cost out `core1` and cause all traffic to start being routed through `core2`. Below you can see the configuration changes we're going to make. We add the command `ip ospf cost 500` to each interface on `core1`, increasing its OSPF cost from the previous value of `1`. This will cause the lower-cost routes through `core2` to be preferred.
```
$ diff -r base/ change1/
diff -r base/configs/core1.cfg change1/configs/core1.cfg
68c68
< ip ospf cost 1
---
> ip ospf cost 500
73c73
< ip ospf cost 1
---
> ip ospf cost 500
78c78
< ip ospf cost 1
---
> ip ospf cost 500
83c83
< ip ospf cost 1
---
> ip ospf cost 500
```
We implemented this change offline in a new snapshot, and will validate that the change doesn't affect reachability. Having done so, we will be able to push the change to the network with complete confidence. We'll validate the change using a two-step process, verifying that it has the intended effect, and that it causes no collateral damage. More specifically, the change must:1. Ensure that no traffic is routed through `core1`.1. Have no effect on external-to-host traffic. Step 2: Ensure that no traffic is routed through `core1` The following commands will load our change snapshot into batfish:
###Code
CHANGE1_NAME = "change"
CHANGE1_PATH = "networks/forwarding-change-validation/change1"
bf_init_snapshot(CHANGE1_PATH, name=CHANGE1_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
To verify that **no** outside-to-host traffic is routed through `core1`, we need to search for counterexamples: outside-to-host traffic that *is* routed through `core1`. If no counterexamples are found, we have *proven* that `core1` is never used. We do this by running the `reachability` question with the `transitLocations` parameter to search for flows that transit `core1`. We set the `actions` parameter to `SUCCESS,FAILURE` to include dropped flows as well as those that are successfully delivered.
###Code
# Search for any traffic routed through core1
answer = bfq.reachability(
pathConstraints=PathConstraints(
startLocation="@enter(/border/[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="/host/"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good! Since we found no counter-examples, we are guaranteed that no outside-to-host traffic will be routed through `core1`. This verifies the first requirement of the change. Having done so, let's check our second requirement -- that end-to-end reachability is completely unchanged. Step 3: Outside-to-host traffic is unaffected. In this step, we'll compare the forwarding behavior of the candidate change snapshot against the original using the `differentialReachability` question. In particular, we'll use the question to search for flows that are successfully delivered in one snapshot but not the other. If the change is correct, no such flows will be found, because costing out `core1` should have no effect on end-to-end reachability.
###Code
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="/host/")
).answer(
snapshot=CHANGE1_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As we can see, moving traffic from `core1` to `core2` does affect reachability: some traffic that was being delivered in the reference snapshot (before the change) is being *null routed* in the change snapshot (after the change). This means that if we deploy the change now, there will be a loss of connectivity. Fortunately, the `differentialReachability` question was able to identify that bug before we deployed the change. The results include an example flow from each start location that has traffic affected by the change. Each flow comes with detailed traces of all the paths it can take through the network, which helps us diagnose the problem: `core2` has a rogue static route for `2.180.0.0/24` that should have been removed. A similar problem could occur with rogue ACLs along the backup path (which Batfish will find as well). Step 2 (again): Ensure that no traffic is routed through core1 We remove the bad static route and load the updated change snapshot into batfish. Then we perform the same validation steps again.
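To pinpoint the offending entry before touching the configuration, we could also inspect `core2`'s RIB directly. The sketch below uses pybatfish's `routes` question; the `nodes` and `network` parameters are assumptions based on that question's documentation rather than something demonstrated in this notebook.
```python
# Sketch: list core2's routes for the affected prefix in the change snapshot.
# If the rogue static route is present, it should appear here with its protocol and next hop.
answer = bfq.routes(nodes="core2", network="2.180.0.0/24").answer(snapshot=CHANGE1_NAME)
show(answer.frame())
```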
###Code
CHANGE1_FIXED_NAME = "change-fixed"
CHANGE1_FIXED_PATH = "networks/forwarding-change-validation/change1-fixed"
bf_init_snapshot(CHANGE1_FIXED_PATH, name=CHANGE1_FIXED_NAME, overwrite=True)
# Requirement 1: No traffic is routed through core1.
answer = bfq.reachability(
pathConstraints=PathConstraints(
startLocation="@enter(/border/[GigabitEthernet0/0])",
transitLocations="core1"),
headers=HeaderConstraints(dstIps="/host/"),
actions="SUCCESS,FAILURE"
).answer(snapshot=CHANGE1_FIXED_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Again, we find no traffic being routed through `core1`, so it is still correctly costed-out. Step 3 (again): Outside-to-host traffic is unaffected. We now move on to check that after removing the bad null route, costing out `core1` has no impact on the reachability matrix:
###Code
# Requirement 2: Outside-to-host traffic is unaffected.
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="/host/")
).answer(
snapshot=CHANGE1_FIXED_NAME,
reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Success! We have now verified that our change will correctly cost-out `core1` without affecting reachability. We are ready to deploy the change and do the maintenance work for `core1` with complete confidence. Summary Let's recap the steps we took to verify this change:1. First, we verified that the primary intent of the change is achieved: traffic is moved from `core1` to `core2`. We used the `reachability` query to search *all* outside-to-host flows in the network and verify that none will transit `core1` after the change.1. Second, we verified that moving the traffic did not affect reachability. For this, we used the `differentialReachability` query to compare the forwarding behavior of two snapshots. This verified that *no flow* is affected by the change. Change Scenario 2: Validating the end-to-end impact of an ACL change In the second part of this notebook, we'll validate another change to the same network. Unlike the previous scenario, this time we do want to alter end-to-end reachability, and we will verify that our change has the intended effect. As before, we will also verify that it has no *unintended* effects. In this scenario, we have developed and tested a new web service on host `host-www`, and are now ready to open it to HTTP traffic from the outside world. The service is running on the hosts behind `leaf1`, which has an ACL in place that filters traffic to each host. The change we'll make and validate will open traffic to the host subnet in the border router ACLs that filter traffic entering the network. Step 1: Test current behavior We start by using the `traceroute` question to verify that `host-www` is not accessible via HTTP from outside the network. The parameter `dstIps="host-www"` tells traceroute to pick any IP belonging to `host-www` as the destination IP.
###Code
answer = bfq.traceroute(
startLocation="enter(/border/[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="host-www", applications="HTTP")
).answer(snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
As you can see, the flow is dropped by the ingress ACL `OUTSIDE_TO_INSIDE` on each border router. This is where we'll make our change. The following snippet shows the original ACL definition:
```
ip access-list extended OUTSIDE_TO_INSIDE
 permit tcp any 2.128.0.0 0.0.1.255 eq ssh
 permit udp any 2.0.0.0 0.255.255.255
 deny ip any any
```
The first line permits SSH traffic to the host subnet. We'll create a similar rule for HTTP, since `leaf1` already does the required per-host filtering. Here's the updated version of the ACL:
```
ip access-list extended OUTSIDE_TO_INSIDE
 permit tcp any 2.128.0.0 0.0.1.255 eq ssh
 permit tcp any 2.128.0.0 0.0.1.255 eq www
 permit udp any 2.0.0.0 0.255.255.255
 deny ip any any
```
Next we load the snapshot with our change into batfish so we can validate it before deployment.
###Code
CHANGE2_NAME = "change2"
CHANGE2_PATH = "networks/forwarding-change-validation/change2"
bf_init_snapshot(CHANGE2_PATH, name=CHANGE2_NAME, overwrite=True)
###Output
_____no_output_____
###Markdown
We can test our change by running the above `traceroute` command on the change snapshot:
###Code
answer = bfq.traceroute(
startLocation="enter(/border/[GigabitEthernet0/0])",
headers=HeaderConstraints(dstIps="host-www", applications="HTTP")
).answer(snapshot=CHANGE2_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good. We now see that HTTP traffic can reach `host-www` from outside the network. We may be tempted to call it good and ship the change. However, batfish gives us the ability to do much more to ensure complete correctness. Following the steps outlined in the [Provably Safe ACL and Firewall Changes](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Provably%20Safe%20ACL%20and%20Firewall%20Changes.ipynb) notebook, we can independently validate the change to each border router ACL. We omit those steps from this notebook, and proceed to validating the end-to-end network behavior. As before, end-to-end validation has two requirements:1. The change has the intended effect: HTTP traffic from outside the network can reach `host-www`.1. The change has no unintended effects: No other traffic is affected. Step 2: External HTTP traffic can now reach `host-www` The `traceroute` results above show that *some* HTTP traffic can now reach `host-www` from outside the network. However, this doesn't ensure that *all* such traffic can reach `host-www`. For that, we use the `reachability` query to search for counterexamples of the requirement: HTTP flows from the outside that *cannot* reach `host-www`.
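As a brief aside, the per-ACL validation we just chose to omit could look roughly like the sketch below, which compares how the border routers' filters treat traffic in the two snapshots. The `compareFilters` question and its `nodes`/`filters` parameters are assumptions based on the referenced notebook, not something demonstrated here; the end-to-end counterexample search described above follows right after.
```python
# Sketch: compare how the OUTSIDE_TO_INSIDE filters on the border routers treat traffic
# in the change snapshot versus the base snapshot (question name and parameters assumed).
answer = bfq.compareFilters(nodes="/border/", filters="OUTSIDE_TO_INSIDE").answer(
    snapshot=CHANGE2_NAME, reference_snapshot=BASE_NAME)
show(answer.frame())
```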
###Code
answer = bfq.reachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(
dstIps="host-www",
srcIps="0.0.0.0/0",
applications="HTTP"),
actions="FAILURE"
).answer(snapshot=CHANGE2_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Good! Since batfish's comprehensive search found no counterexamples, we are guaranteed that none exist. In other words, the requirement is met. Step 3: No unintended consequences Next, we check the second requirement -- that the change has no unintended effects. As before, we'll use the `differentialReachability` question to compare the reachability of our change snapshot against the original network. We search all flows entering the border routers *that are not* HTTP traffic addressed to `host-www`. The `invertSearch=True` parameter causes batfish to search outside the specified header space instead of within it.
###Code
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="host-www", applications="HTTP"),
invertSearch=True
).answer(snapshot=CHANGE2_NAME, reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____
###Markdown
Unfortunately, our change had a broader impact than we intended. It turns out that `leaf1` was not properly filtering traffic to `host-db`: it permits HTTP to both hosts, rather than just `host-www`. Step 2 (again): Verify HTTP traffic can now reach `host-www` We fix the buggy ACL on `leaf1`, load the fixed change snapshot into batfish and begin the validation process again. Here is the difference relative to the first change attempt:
```
$ diff -r change2/ change2-fixed/
diff -r change2/configs/leaf1.cfg change2-fixed/configs/leaf1.cfg
119c119
< permit tcp any 2.128.0.0 0.0.255.255 eq www
---
> permit tcp any 2.128.1.0 0.0.0.255 eq www
```
###Code
CHANGE2_FIXED_NAME = "change2-fixed"
CHANGE2_FIXED_PATH = "networks/forwarding-change-validation/change2-fixed"
bf_init_snapshot(CHANGE2_FIXED_PATH, name=CHANGE2_FIXED_NAME, overwrite=True)
answer = bfq.reachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="host-www", applications="HTTP"),
actions="FAILURE"
).answer(snapshot=CHANGE2_FIXED_NAME)
show(answer.frame())
###Output
_____no_output_____
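In addition to confirming that all outside HTTP traffic reaches `host-www`, we could add a targeted check that HTTP to `host-db` is blocked again after the ACL fix. This sketch reuses only the question and parameters already used above; treating `host-db` as a destination specifier is an assumption that mirrors how `host-www` is used.
```python
# Sketch: search for outside HTTP flows that are *delivered* to host-db.
# An empty answer means leaf1's per-host filtering is effective again.
answer = bfq.reachability(
    pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
    headers=HeaderConstraints(dstIps="host-db", applications="HTTP"),
    actions="SUCCESS"
).answer(snapshot=CHANGE2_FIXED_NAME)
show(answer.frame())
```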
###Markdown
As before, since we did not find any dropped HTTP flows to `host-www`, we are guaranteed that all such flows will be delivered successfully: our first requirement is still met. Step 3 (again): No unintended consequences
###Code
answer = bfq.differentialReachability(
pathConstraints=PathConstraints(startLocation="@enter(/border/[GigabitEthernet0/0])"),
headers=HeaderConstraints(dstIps="host-www", applications="HTTP"),
invertSearch=True
).answer(snapshot=CHANGE2_FIXED_NAME, reference_snapshot=BASE_NAME)
show(answer.frame())
###Output
_____no_output_____ |
src/Third Party Modules/PyTesseract/pytesseract_handwritten_examples.ipynb | ###Markdown
OCR with PyTesseract Using Raw Image (No Preprocessing)
###Code
import cv2
import matplotlib.pyplot as plt
import pytesseract
%matplotlib inline
img = cv2.imread(filename='./handwritten_digits.jpg', flags=cv2.IMREAD_GRAYSCALE)
plt.imshow(X=img, aspect='equal', origin='upper', cmap='gray')
result = pytesseract.image_to_string(image=img)
print(result)
print('-' * 125)
print('Numerals Detected:')
numerals = [int(c) for c in list(result) if c.isdigit()]
for digit in numerals:
print(digit)
###Output
ry— AT
NH Worn
THM ws
9-4 Dd
WANK YT
-----------------------------------------------------------------------------------------------------------------------------
Numerals Detected:
9
4
###Markdown
Using Simple Thresholding Beforehand
###Code
import cv2
import matplotlib.pyplot as plt
import pytesseract
%matplotlib inline
img = cv2.imread(filename='./handwritten_digits.jpg', flags=cv2.IMREAD_GRAYSCALE)
# simple thresholding
res, img_bin = cv2.threshold(src=img, thresh=80, maxval=255, type=cv2.THRESH_BINARY)
# comparing original and thresholded image
plt.subplot(121)
plt.title(label='Original', size=12)
plt.imshow(X=img, aspect='equal', origin='upper', cmap='gray')
plt.subplot(122)
plt.title(label='Simple Thresholded (BINARY)', size=12)
plt.imshow(X=img_bin, aspect='equal', origin='upper', cmap='gray')
result = pytesseract.image_to_string(image=img_bin)
print(result)
print('-' * 125)
print('Numerals Detected:')
numerals = [int(c) for c in list(result) if c.isdigit()]
for digit in numerals:
print(digit)
print('-' * 125)
print('ACTUALLY BECAME LESS ACCURATE!')
import cv2
import matplotlib.pyplot as plt
from PIL import Image, ImageEnhance
import pytesseract
%matplotlib inline
path = './handwritten_digits.jpg'
img = Image.open(path)
enhancer = ImageEnhance.Contrast(image=img)
img_enhanced = enhancer.enhance(factor=4.0)
# plotting
plt.subplot(121)
plt.title(label='Original', size=12)
plt.imshow(X=img, aspect='equal', origin='upper', cmap='gray')
plt.subplot(122)
plt.title(label='Enhanced Contrast', size=12)
plt.imshow(X=img_enhanced, aspect='equal', origin='upper', cmap='gray')
result = pytesseract.image_to_string(image=img_enhanced)
print(result)
print('-' * 125)
print('Numerals Detected:')
numerals = [int(c) for c in list(result) if c.isdigit()]
for digit in numerals:
print(digit)
print('-' * 125)
print('ACTUALLY BECAME LESS ACCURATE!')
###Output
re — ATW
No BON
TM Mw &
S-—- 49
WANK KT
-----------------------------------------------------------------------------------------------------------------------------
Numerals Detected:
4
9
-----------------------------------------------------------------------------------------------------------------------------
ACTUALLY BECAME LESS ACCURATE!
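Tesseract is tuned for printed text, so thresholding or contrast enhancement alone often cannot rescue handwritten digits. One more lever worth trying (not covered above) is constraining the engine through its `config` string; `--psm` and the character whitelist below are standard Tesseract options, and the gains on handwriting will vary.
```python
# Sketch: restrict Tesseract to digits and a uniform-block page segmentation mode.
digits_only = pytesseract.image_to_string(
    image=img_bin,
    config='--psm 6 -c tessedit_char_whitelist=0123456789'
)
print(digits_only)
```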
|
2018/PAN_AA_2018-word.ipynb | ###Markdown
Notebook for PAN - Authorship Attribution - 2018
###Code
%matplotlib inline
#python basic libs
from __future__ import print_function
from tempfile import mkdtemp
from shutil import rmtree
import os;
from os.path import join as pathjoin;
import re;
import glob;
import json;
import codecs;
from collections import defaultdict;
import pprint;
import warnings;
from pprint import pprint
from time import time
import logging
#data analysis libs
import numpy as np;
import pandas as pd;
import seaborn as sn;
import matplotlib.pyplot as plt;
import random;
#machine learning libs
#feature extraction
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
#preprocessing and transformation
from sklearn.preprocessing import normalize, MaxAbsScaler, MinMaxScaler;
from sklearn.preprocessing import LabelBinarizer, LabelEncoder;
from sklearn.decomposition import PCA;
from sklearn.metrics.pairwise import cosine_similarity;
from sklearn.base import BaseEstimator, ClassifierMixin
#classifiers
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
#model valuation
from sklearn.model_selection import train_test_split;
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, accuracy_score;
import seaborn as sns;
sns.set(color_codes=True);
from pandas.plotting import scatter_matrix
import platform; print(platform.platform())
print("NumPy", np.__version__)
import scipy; print("SciPy", scipy.__version__)
import sklearn; print("Scikit-Learn", sklearn.__version__)
###Output
Darwin-17.5.0-x86_64-i386-64bit
NumPy 1.14.2
SciPy 1.0.1
Scikit-Learn 0.19.1
###Markdown
paths configuration
###Code
baseDir = '/Users/joseeleandrocustodio/Dropbox/mestrado/02 - Pesquisa/code';
inputDir= pathjoin(baseDir,'pan18aa');
outputDir= pathjoin(baseDir,'out',"oficial");
if not os.path.exists(outputDir):
os.mkdir(outputDir);
###Output
_____no_output_____
###Markdown
loading the dataset
###Code
def readCollectionsOfProblems(path):
# Reading information about the collection
infocollection = path+os.sep+'collection-info.json'
with open(infocollection, 'r') as f:
problems = [
{
'problem': attrib['problem-name'],
'language': attrib['language'],
'encoding': attrib['encoding'],
}
for attrib in json.load(f)
]
return problems;
def readProblem(path, problem):
# Reading information about the problem
infoproblem = path+os.sep+problem+os.sep+'problem-info.json'
candidates = []
with open(infoproblem, 'r') as f:
fj = json.load(f)
unk_folder = fj['unknown-folder']
for attrib in fj['candidate-authors']:
candidates.append(attrib['author-name'])
return unk_folder, candidates;
def read_files(path,label):
# Reads all text files located in the 'path' and assigns them to 'label' class
files = glob.glob(pathjoin(path,label,'*.txt'))
texts=[]
for i,v in enumerate(files):
f=codecs.open(v,'r',encoding='utf-8')
texts.append((f.read(),label, os.path.basename(v)))
f.close()
return texts
problems = readCollectionsOfProblems(inputDir);
for index,problem in enumerate(problems):
unk_folder, candidates_folder = readProblem(inputDir, problem['problem']);
problem['candidates_folder_count'] = len(candidates_folder);
problem['candidates'] = [];
for candidate in candidates_folder:
problem['candidates'].extend(read_files(pathjoin(inputDir, problem['problem']),candidate));
problem['unknown'] = read_files(pathjoin(inputDir, problem['problem']),unk_folder);
pd.DataFrame(problems)
#*******************************************************************************************************
def eval_measures(gt, pred):
"""Compute macro-averaged F1-scores, macro-averaged precision,
macro-averaged recall, and micro-averaged accuracy according the ad hoc
rules discussed at the top of this file.
Parameters
----------
gt : dict
Ground truth, where keys indicate text file names
(e.g. `unknown00002.txt`), and values represent
author labels (e.g. `candidate00003`)
pred : dict
Predicted attribution, where keys indicate text file names
(e.g. `unknown00002.txt`), and values represent
author labels (e.g. `candidate00003`)
Returns
-------
f1 : float
Macro-averaged F1-score
precision : float
Macro-averaged precision
recall : float
Macro-averaged recall
accuracy : float
Micro-averaged F1-score
"""
actual_authors = list(gt.values())
encoder = LabelEncoder().fit(['<UNK>'] + actual_authors)
text_ids, gold_authors, silver_authors = [], [], []
for text_id in sorted(gt):
text_ids.append(text_id)
gold_authors.append(gt[text_id])
try:
silver_authors.append(pred[text_id])
except KeyError:
# missing attributions get <UNK>:
silver_authors.append('<UNK>')
assert len(text_ids) == len(gold_authors)
assert len(text_ids) == len(silver_authors)
# replace non-existent silver authors with '<UNK>':
silver_authors = [a if a in encoder.classes_ else '<UNK>'
for a in silver_authors]
gold_author_ints = encoder.transform(gold_authors)
silver_author_ints = encoder.transform(silver_authors)
# get F1 for individual classes (and suppress warnings):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
f1 = f1_score(gold_author_ints,
silver_author_ints,
labels=list(set(gold_author_ints)),
average='macro')
precision = precision_score(gold_author_ints,
silver_author_ints,
labels=list(set(gold_author_ints)),
average='macro')
recall = recall_score(gold_author_ints,
silver_author_ints,
labels=list(set(gold_author_ints)),
average='macro')
accuracy = accuracy_score(gold_author_ints,
silver_author_ints)
return f1,precision,recall,accuracy
def evaluate(ground_truth_file,predictions_file):
# Calculates evaluation measures for a single attribution problem
gt = {}
with open(ground_truth_file, 'r') as f:
for attrib in json.load(f)['ground_truth']:
gt[attrib['unknown-text']] = attrib['true-author']
pred = {}
with open(predictions_file, 'r') as f:
for attrib in json.load(f):
if attrib['unknown-text'] not in pred:
pred[attrib['unknown-text']] = attrib['predicted-author']
f1,precision,recall,accuracy = eval_measures(gt,pred)
return f1, precision, recall, accuracy
from sklearn.base import BaseEstimator
from scipy.sparse import issparse
class DenseTransformer(BaseEstimator):
"""Convert a sparse array into a dense array."""
def __init__(self, return_copy=True):
self.return_copy = return_copy
self.is_fitted = False
def transform(self, X, y=None):
if issparse(X):
return X.toarray()
elif self.return_copy:
return X.copy()
else:
return X
def fit(self, X, y=None):
self.is_fitted = True
return self
def fit_transform(self, X, y=None):
return self.transform(X=X, y=y)
def runML(problem):
print ("\nProblem: %s, language: %s, " %(problem['problem'],problem['language']))
train_docs, train_labels, _ = zip(*problem['candidates'])
problem['training_docs_size'] = len(train_docs);
test_docs, _, test_filename = zip(*problem['unknown'])
pipeline = Pipeline([
('vect', TfidfVectorizer(analyzer='word',
norm='l1',
max_df=1.0,
ngram_range=(1,3),
lowercase =True,
sublinear_tf=True)),
('dense', DenseTransformer()),
('scaler', MaxAbsScaler()),
('transf', PCA(0.9999)),
('clf', LogisticRegression(random_state=0,multi_class='multinomial', solver='newton-cg')),
])
# uncommenting more parameters will give better exploring power but will
# increase processing time in a combinatorial way
parameters = {
'vect__min_df':(2,0.01,0.05,0.1)
}
grid_search = GridSearchCV(pipeline,
parameters,
cv=5,
scoring='f1_macro',
n_jobs=-1,
verbose=False
)
print("Performing grid search...")
t0 = time()
grid_search.fit(train_docs, train_labels)
print("done in %0.3fs" % (time() - t0))
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
train_pred=grid_search.predict(train_docs);
test_pred=grid_search.predict(test_docs);
# Writing output file
out_data=[]
for i,v in enumerate(test_pred):
out_data.append({'unknown-text': test_filename[i],'predicted-author': v})
answerFile = pathjoin(outputDir,'answers-'+problem['problem']+'.json');
with open(answerFile, 'w') as f:
json.dump(out_data, f, indent=4)
#allProblems.extend(out_data)
#evaluation train
f1,precision,recall,accuracy=evaluate(
pathjoin(inputDir, problem['problem'], 'ground-truth.json'),
answerFile)
return {
'problem-name' : problem['problem'],
"language" : problem['language'],
'AuthorCount' : len(set(train_labels)),
"train_doc_size": len(train_docs),
"train_caract_per_doc": sum([len(l) for l in train_docs])/len(train_docs),
"test_doc_size" : len(test_docs),
"test_caract_per_doc": sum([len(l) for l in test_docs])/len(test_docs),
'macro-f1' : round(f1,3),
'macro-precision': round(precision,3),
'macro-recall' : round(recall,3),
'micro-accuracy' : round(accuracy,3),
}, grid_search.cv_results_, best_parameters;
###Output
_____no_output_____
###Markdown
examining the min_df parameter in isolation
###Code
result = [];
cv_result = [];
best_parameters = [];
for problem in problems:
r, c, b = runML(problem);
result.append(r);
cv_result.append(c);
b['problem'] = problem['problem'];
best_parameters.append(b);
pd.DataFrame(best_parameters)[['problem','vect__min_df']]
###Output
_____no_output_____
###Markdown
analyzing the remaining parameters
###Code
def runML(problem):
print ("\nProblem: %s, language: %s, " %(problem['problem'],problem['language']))
train_docs, train_labels, _ = zip(*problem['candidates'])
problem['training_docs_size'] = len(train_docs);
test_docs, _, test_filename = zip(*problem['unknown'])
pipeline = Pipeline([
('vect', TfidfVectorizer(analyzer='word',
norm='l1',
min_df=2,
max_df=1.0,
smooth_idf=True,
lowercase =True,
sublinear_tf=True)),
('dense', DenseTransformer()),
('scaler', MaxAbsScaler()),
('transf', PCA()),
('clf', LogisticRegression(random_state=0,multi_class='multinomial', solver='newton-cg')),
])
# uncommenting more parameters will give better exploring power but will
# increase processing time in a combinatorial way
parameters = {
'vect__ngram_range':((1,1),(1,2),(1,3)),
'vect__sublinear_tf':(True, False),
'vect__norm':('l1','l2',None),
'transf__n_components': (0.1,0.25,0.5,0.75,0.9,0.999),
}
grid_search = GridSearchCV(pipeline,
parameters,
cv=5,
scoring='f1_macro',
n_jobs=-1,
verbose=False
)
print("Performing grid search...")
t0 = time()
grid_search.fit(train_docs, train_labels)
print("done in %0.3fs" % (time() - t0))
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
train_pred=grid_search.predict(train_docs);
test_pred=grid_search.predict(test_docs);
# Writing output file
out_data=[]
for i,v in enumerate(test_pred):
out_data.append({'unknown-text': test_filename[i],'predicted-author': v})
answerFile = pathjoin(outputDir,'answers-'+problem['problem']+'.json');
with open(answerFile, 'w') as f:
json.dump(out_data, f, indent=4)
#allProblems.extend(out_data)
#evaluation train
f1,precision,recall,accuracy=evaluate(
pathjoin(inputDir, problem['problem'], 'ground-truth.json'),
answerFile)
return {
'problem-name' : problem['problem'],
"language" : problem['language'],
'AuthorCount' : len(set(train_labels)),
"train_doc_size": len(train_docs),
"train_caract_per_doc": sum([len(l) for l in train_docs])/len(train_docs),
"test_doc_size" : len(test_docs),
"test_caract_per_doc": sum([len(l) for l in test_docs])/len(test_docs),
'macro-f1' : round(f1,3),
'macro-precision': round(precision,3),
'macro-recall' : round(recall,3),
'micro-accuracy' : round(accuracy,3),
}, grid_search.cv_results_, best_parameters;
result = [];
cv_result = [];
best_parameters = [];
for problem in problems:
r, c, b = runML(problem);
result.append(r);
cv_result.append(c);
b['problem'] = problem['problem'];
best_parameters.append(b);
df=pd.DataFrame(result)[['problem-name',
"language",
'AuthorCount',
"train_doc_size","train_caract_per_doc",
"test_doc_size", "test_caract_per_doc",
'macro-f1','macro-precision','macro-recall' ,'micro-accuracy']]
df
print(df[["macro-f1"]].reset_index().to_latex(index=False).replace(" "," "))
pd.DataFrame(result)[['macro-f1']].describe()
languages={
'en':'inglesa',
'sp':'espanhola',
'it':'italiana',
'pl':'polonesa',
'fr':'francesa'
}
cv_result2 = [];
dfCV = pd.DataFrame();
for i, c in enumerate(cv_result):
temp = pd.DataFrame(c);
temp['problem'] = i+1;
temp['language'] = languages[problems[i]['language']]
dfCV = dfCV.append(temp);
for p in ['param_transf__n_components',
'mean_test_score','std_test_score','mean_train_score',
'split0_test_score','split0_train_score',
'split1_test_score','split1_train_score',
'split2_test_score','split2_train_score',
'split3_test_score','split3_train_score',
'split4_test_score','split4_train_score']:
dfCV[p]=dfCV[p].astype(np.float32);
dfCV =dfCV[[
'problem',
'language',
'rank_test_score',
'param_transf__n_components',
'param_vect__ngram_range',
'param_vect__sublinear_tf',
'param_vect__norm',
'mean_test_score',
'std_test_score',
'mean_train_score',
'split0_test_score','split0_train_score',
'split1_test_score','split1_train_score',
'split2_test_score','split2_train_score',
'split3_test_score','split3_train_score',
'split4_test_score','split4_train_score',
'mean_score_time',
'mean_fit_time',
'std_fit_time',
'std_score_time',
'std_train_score',
]];
dfCV.rename(columns={
'param_transf__n_components':'PCA_componentes',
'param_vect__ngram_range':'ngram_range',
'param_vect__sublinear_tf':'sublinear_tf',
'param_vect__smooth_idf':'smooth_idf',
'param_vect__norm':'norm'
},inplace=True);
#print('\',\n\''.join(dfCV.columns))
dfCV.to_csv('PANAA2018_WORD.csv', index=False)
dfCV = pd.read_csv('PANAA2018_WORD.csv')
dfCV.head()
(dfCV[dfCV.rank_test_score == 1])[
['problem',
'language',
'rank_test_score',
'mean_test_score',
'std_test_score',
'ngram_range',
'sublinear_tf',
'PCA_componentes']
].sort_values(by=[
'problem',
'mean_test_score',
'ngram_range',
'sublinear_tf',
'PCA_componentes'
], ascending=[True, False,False,False,False])
dfCV.pivot_table(
index=['problem','language','PCA_componentes'],
columns=['norm','sublinear_tf', 'ngram_range'],
values='mean_test_score'
)
pd.options.display.precision = 3
print(u"\\begin{table}[h]\n\\centering\n\\caption{Medida F1 para os parâmetros }")
print(re.sub(r'[ ]{2,}',' ',dfCV[dfCV.PCA_componentes >= 0.999].pivot_table(
index=['problem','language','sublinear_tf','norm'],
columns=['ngram_range'],
values='mean_test_score'
).to_latex()))
print ("\label{tab:modelocaracter}")
print(r"\end{table}")
d = dfCV[dfCV.PCA_componentes > 0.9].rename(columns={'language':u'Língua', 'sublinear_tf':'TF Sublinear'})
d = d [ d.norm.isna() == False]
d['autorNumber'] = d.problem.map(lambda x: 20 if x % 2==0 else 5)
d.problem = d.apply(lambda x: x[u'Língua'] +" "+ str(x[u'problem']), axis=1)
d.std_test_score =d.std_test_score / d.std_test_score.quantile(0.95) *500;
d.std_test_score +=1;
d.std_test_score = d.std_test_score.astype(np.int64)
g = sns.FacetGrid(d, row='problem', hue='TF Sublinear', col="norm", size=3,palette="Set1")
g.map(plt.scatter, "ngram_range", "mean_test_score", alpha=0.5, s=d.std_test_score.values).add_legend();
g = sns.FacetGrid(d, row='autorNumber', hue='TF Sublinear', col=u"Língua", size=3,palette="Set1")
g.map(plt.scatter, "ngram_range", "mean_test_score", alpha=0.5, s=d.std_test_score.values).add_legend();
import statsmodels.api as sm
d = dfCV[['mean_test_score','problem', 'language','sublinear_tf','norm','ngram_range','PCA_componentes']].copy();
d.sublinear_tf=d.sublinear_tf.apply(lambda x: 1 if x else 0)
d['autorNumber'] = d.problem.map(lambda x: 20 if x % 2==0 else 5)
d.norm.fillna(value='None', inplace=True);
d.PCA_componentes = np.log(d.PCA_componentes);
_, d['ngram_max'] = zip(*d.ngram_range.str.replace(r'[^\d,]','').str.split(',').values.tolist())
#d.ngram_min = d.ngram_min.astype(np.uint8);
d.ngram_max = d.ngram_max.astype(np.uint8);
d.drop(columns=['ngram_range','problem'], inplace=True)
#d['intercept'] = 1;
d=pd.get_dummies(d, columns=['language', 'norm','ngram_max'])
d.describe()
mod = sm.OLS( d.iloc[:,0], d.iloc[:,1:])
res = mod.fit()
res.summary()
sns.distplot(res.predict()-d.iloc[:,0].values, bins=25)
sns.jointplot(x='F1',y='F1-estimated',data=pd.DataFrame({'F1':d.iloc[:,0].values, 'F1-estimated':res.predict()}));
###Output
_____no_output_____
###Markdown
Challenger approach 1
###Code
from gensim.models import Word2Vec;
class NgramSplitter(object):
def __init__(self, text, ngram=(3,3), vocabulary=None):
self.text = text
self.ngram_min = ngram[0]
self.ngram_max = ngram[1];
self.vocabulary = vocabulary;
def text2ngrams(self,text):
vect = [
text[t:t+j]
for t in xrange(len(text)-self.ngram_max+1)
for j in xrange(self.ngram_min, self.ngram_max+1)
]
if self.vocabulary is not None:
return [word for word in vect if word in self.vocabulary];
else:
return [word for word in vect if word]
def __iter__(self):
if isinstance(self.text,list):
for s in self.text:
yield self.text2ngrams(s);
elif isinstance(self.text,str) or isinstance(self.text,unicode):
yield self.text2ngrams(self.text);
class Word2VecClassifier(BaseEstimator, ClassifierMixin):
"""A classifier that uses classes embeddings to classify instances"""
def __init__(
self,
ngram = (3,4),
analyzer = 'char',
min_df = 0.3,
max_df = 1.0,
min_count =2,
embeddingSize =750,
window=10,
algorithm = 0,
iter =10
):
"""
Called when initializing the classifier
"""
self.algorithm = algorithm
self.min_count = min_count
self.embeddingSize = embeddingSize
self.window = window
self.iter = iter
self.analyzer = analyzer
self.vocabulary_ = {}
self.ngram = ngram
self.min_df = min_df
self.max_df = max_df
def _buildVectorModel(self, document):
sentenseGenerator = NgramSplitter(document,self.ngram, self.vocabulary_);
model = Word2Vec(
sentenseGenerator,
sg = self.algorithm,
iter = self.iter,
min_count= self.min_count,
window = self.window,
size = self.embeddingSize,
seed=0
);
return model.wv;
def fit(self, X, y=None):
"""
        Summarize one text per label and transform the text into word vectors
"""
#creating author profile
profile = defaultdict(unicode);
for text, label in zip(X,y):
profile[label]+=text;
#build a global vocaculary / Using count vectorizer to create a fixed vocabulary
vectorizer = CountVectorizer(
analyzer=self.analyzer,
ngram_range=self.ngram,
min_df=self.min_df,
max_df=self.max_df,
lowercase=False
)
vectorizer.fit(X);
self.vocabulary_ = vectorizer.vocabulary_
# profile vector represent each author in the embedding space
self.profileVectors_ = {y: self._buildVectorModel(profile[y]) for y in y};
return self
def _minmax(self, a):
a = (a - a.min())/(a.max() - a.min());
return a;
def _simpleCosine(self,a, b):
'''
        Calculates the cosine between arrays a and b.
        This function is used because sklearn's similarity function compares all elements against all elements,
        which is not needed here, so this helper becomes handy.
'''
a = a / np.sqrt(np.sum(a **2));
b = b / np.sqrt(np.sum(b **2));
cos = np.sum(np.array(a) * np.array(b));
return cos;
def _KLD(self,p, q):
p = self._minmax(p); p = p/p.sum();
q = self._minmax(q); q = q/q.sum();
cond = ((q != 0)&(p != 0));
k1 = np.sum(np.where(cond, p * np.log(p / q), 0));
return k1;
def _manhattan(self,p, q):
p = self._minmax(p); p = p/p.sum();
q = self._minmax(q); q = q/q.sum();
return np.mean(np.abs(p-q));
def _guassian(self, C,D):
cond = C-D !=0;
bc = np.where(cond,(C-D+1)**2/(2*np.maximum(C,D+1)),1);
return np.sum(-np.log(bc));
def score(self, X, y=None):
# counts number of values bigger than mean
return(sum(self.predict(X)))
def _softMax(self,a):
a = self._minmax(a);
a = np.exp(a)/np.sum(np.exp(a))
return a;
def _predict1Doc(self, docVect):
vocabDoc = set(docVect.vocab.keys());
metrics = [];
def c(aa,bb, funct):
voc = set(aa.vocab.keys()) & set(bb.vocab.keys())
f = np.array([
funct(aa[v], bb[v])
for v in voc
]);
f = np.sum(f)
return f;
for label in self.profileVectors_:
labelVocab = set(self.profileVectors_[label].vocab.keys());
intersect = vocabDoc & labelVocab;
union = len(vocabDoc | labelVocab);
jaccard = 1.0*len(intersect) / union;
metrics.append({
'label' : label,
'jaccard' : jaccard,
'lenIntersect': len(intersect),
'lenUnion' : union,
'lenMax' : max(len(labelVocab), len(vocabDoc)),
'similarity' : c(docVect, self.profileVectors_[label], self._simpleCosine),
'KLD' : c(docVect, self.profileVectors_[label], self._KLD),
'manhattan' : c(docVect, self.profileVectors_[label], self._manhattan),
'guassian' : c(docVect, self.profileVectors_[label], self._guassian),
})
#softmax norm
similarity = self._softMax(np.array([c['similarity'] for c in metrics ]));
guassian = self._softMax(np.array([c['guassian'] for c in metrics ]));
manhattan = self._softMax(np.array([c['manhattan'] for c in metrics ]));
#appending normalized sum of distance
for i,c in enumerate(metrics):
c.update({
'similarityNorm': similarity[i],
'guassianNorm': guassian[i],
'manhattanNorm': manhattan[i]
})
return metrics;
def predict(self, X, y=None):
try:
getattr(self, "profileVectors_")
except AttributeError:
raise RuntimeError("You must train classifer before predicting data!")
docVectors = [self._buildVectorModel(x) for x in X];
self.metrics_ = [self._predict1Doc(v) for v in docVectors];
result = [];
for r in self.metrics_:
best = r[0];
best['bestMatch'] = True;
for rr in r:
if rr != best:
rr['bestMatch'] = False;
if rr['similarityNorm'] > best['similarityNorm'] :
best['bestMatch'] = False;
best = rr;
best['bestMatch'] = True;
result.append(best);
self.predited_ = result;
return([r['label'] for r in result])
problem = problems[8];
print ("Problem: %s, language: %s, " %(problem['problem'],problem['language']))
model = Word2VecClassifier();
train_docs, train_labels,_ = zip(*problem['candidates']);
model.fit(train_docs,train_labels);
trainPred = model.predict(train_docs);
trainMetrics = model.metrics_;
df=pd.DataFrame(zip(train_labels,trainPred), columns=['label','pred'])
df.label = df.label.apply(lambda x: int(re.sub(r'\D','',x)));
df.pred = df.pred.apply(lambda x: int(re.sub(r'\D','',x)));
df.plot.scatter(x='label',y='pred');
m = trainMetrics
df = pd.DataFrame([item for s in m for item in s])
df['doc'] = [i for i,s in enumerate(m) for item in s]
df['solution'] = [train_labels[i] for i,s in enumerate(m) for item in s]
df.sort_values(by=['doc','similarityNorm', 'manhattan'], ascending=[True,False,True], inplace=True)
df['distance'] = [i for i in range(len(set(train_labels)))]* len(trainMetrics)
df[df.doc == 55]
df2 = df[df.bestMatch].copy();
df2['correct'] = df2.apply(lambda x: x['label'] == x['solution'], axis=1)
df2[['correct','doc']].groupby(by='correct').count()
model.get_params()
df[df.solution == df.label].plot.scatter(x='distance', y='manhattanNorm')
df[df.solution == df.label].plot.scatter(x='distance', y='guassianNorm')
df[df.solution == df.label].plot.scatter(x='distance', y='similarityNorm')
df[df.solution == df.label].plot.scatter(x='manhattanNorm', y='guassianNorm', c='distance',colormap='Reds')
###Output
_____no_output_____
###Markdown
test
###Code
#code from baseline
gt = {}
with open(pathjoin(inputDir, problem['problem'], 'ground-truth.json'), 'r') as f:
for attrib in json.load(f)['ground_truth']:
gt[attrib['unknown-text']] = attrib['true-author']
test_docs, _, test_filename = zip(*problem['unknown'])
test_labels = [gt[v] for v in test_filename]
testPred = model.predict(test_docs);
testMetrics = model.metrics_;
m = testMetrics
df = pd.DataFrame([item for s in m for item in s])
df['doc'] = [i for i,s in enumerate(m) for item in s]
df['solution'] = [train_labels[i] for i,s in enumerate(m) for item in s]
df.sort_values(by=['doc','similarityNorm', 'KLD'], ascending=[True,False,True], inplace=True)
df['distance'] = [i for i in range(len(set(train_labels)))]* len(testMetrics)
df[df.doc == 55]
f1,precision,recall,accuracy = eval_measures(gt,{k: v for k,v in zip(test_filename, testPred) })
pd.DataFrame([{
'macro-f1' : round(f1,3),
'macro-precision': round(precision,3),
'macro-recall' : round(recall,3),
'micro-accuracy' : round(accuracy,3)
}])
df2 = df[df.bestMatch].copy();
df2['correct'] = df2.apply(lambda x: x['label'] == x['solution'], axis=1)
df2[['correct','doc']].groupby(by='correct').count()
df[df.solution == df.label].plot.scatter(x='distance', y='guassianNorm')
df[df.solution == df.label].plot.scatter(x='distance', y='manhattanNorm')
df[df.solution == df.label].plot.scatter(x='distance', y='similarityNorm')
df[df.solution == df.label]\
.plot\
.scatter(
x='guassianNorm',
y='similarityNorm',
c='distance',
colormap='Reds',
figsize=(20,5));
###Output
_____no_output_____ |
experiments/VR-inserts/notebooks/.ipynb_checkpoints/7-29-21-pcr-rxns-checkpoint.ipynb | ###Markdown
PCR reaction protocols for pFC8, pFC9 and pFC8tacThese reactions amplifiy regions of pFC8, 9, and 8tac (previously labeled53 tac) for sanger sequencing.
###Code
import pandas as pd
from pydna.amplify import pcr
from pydna.dseqrecord import Dseqrecord
from pydna.readers import read
from pydna.tm import program # pcr program for tac pol
from pydna.primer import Primer
import sys
sys.path.append('experiments/VR-inserts/notebooks')
###Output
_____no_output_____
###Markdown
Read primers.
###Code
primers = pd.read_csv('data_files/Ethan_Oligos-dsDNA_oligos.tsv', sep='\t').set_index('Name', drop=False)
primers
###Output
_____no_output_____
###Markdown
Read template DNA for reactions.
###Code
pFC8 = 'data_files/resources/files/genbank/pFC8.gb'
pFC9 = 'data_files/resources/files/genbank/pFC9.gb'
pFC8tac = 'data_files/resources/files/genbank/pFC8tacT1T2.gb'
template_paths = (pFC8, pFC9, pFC8tac)
templates = [Dseqrecord(read(tp)) for tp in template_paths]
###Output
_____no_output_____
###Markdown
Setup helper PCR functions.
###Code
def pcr_program(primers, template):
product = pcr(primers, template)
return program(product)
def get_primer_seq(name, primer_table):
return Primer(primer_table.loc[name].values[1])
###Output
_____no_output_____
###Markdown
pFC8
###Code
pFC8 = templates[0]
pfc8_primers = get_primer_seq('pFC9_t7_primer_1', primers), get_primer_seq('pFC8_t7_primer_2', primers)
pcr_program(pfc8_primers, pFC8)
###Output
_____no_output_____
###Markdown
pFC9
###Code
pFC9 = templates[1]
pfc9_primers = get_primer_seq('pFC9_t7_primer_1', primers), get_primer_seq('pFC9_t7_primer_2', primers)
pcr_program(pfc9_primers, pFC9)
###Output
_____no_output_____
###Markdown
pFC8tac
###Code
pFC9 = templates[2]
pfc8tac_primers = get_primer_seq('pFC8tac_tac_promoter_Primer_1', primers), get_primer_seq('pFC8tac_tac_promoter_Primer_2', primers)
pcr_program(pfc8tac_primers, pFC8)
###Output
_____no_output_____ |
GNSS_Averaging/.ipynb_checkpoints/GNSS_averaging-checkpoint.ipynb | ###Markdown
Single GNSS position from multiple points**Script prepared by A. Rovere - MARUM, University of Bremen**This script uses a Monte-Carlo approach to calculate the average position (with positioning uncertainties) given a series of GNSS points collected at the same location. It can be used, for example, when several processing options are available for a base station point.
###Code
import geopandas as gpd
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from math import pi
###Output
_____no_output_____
###Markdown
Import csvImport the CSV file containing the different points. See example file for the formatting. Coordinate system for the import file should be EPSG 4326.
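For reference, these are the column headers the code below expects; the data row is an illustrative placeholder, not real measurements, and the `Processing type` column is inferred from how the averaged row is constructed further down:
```
Processing type,Latitude (dec degrees),Longitude (dec degrees),Height above ellipsoid (m),Latitude 2-sigma (m),Longitude 2-sigma (m),Elevation 2-sigma (m)
PPP run 1,53.104000,8.849000,45.300,0.015,0.012,0.030
```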
###Code
df = pd.read_csv('Example_data.csv')
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['Latitude (dec degrees)'], df['Longitude (dec degrees)']))
gdf.crs = 'epsg:4326'
gdf = gdf.to_crs('epsg:3857')
gdf['X (m)']=gdf.geometry.x
gdf['Y (m)']=gdf.geometry.y
gdf
###Output
_____no_output_____
###Markdown
Monte Carlo processOne line from the dataframe above is selected randomly, then a latitude, longitude, and elevation are sampled from normal distributions centred on that point's values. This process is repeated 10,000 times.
###Code
lat=[]
lon=[]
elev=[]
val = np.linspace(0, 10000, num=10001)
#Builds the Monte Carlo sample by randomly drawing from the GNSS data points
for x in val:
#Select a random row
rnd = gdf.sample(n=1)
    #Draw latitude, longitude and elevation from normal distributions defined by each point's value and 2-sigma
lat.append(np.random.normal(rnd['Y (m)'], rnd['Latitude 2-sigma (m)']/2, 1))
lon.append(np.random.normal(rnd['X (m)'], rnd['Longitude 2-sigma (m)']/2, 1))
elev.append(np.random.normal(rnd['Height above ellipsoid (m)'], rnd['Elevation 2-sigma (m)']/2, 1));
#Create the dataframe
rand_coord = pd.DataFrame({'Latitude (EPSG 3857, m)':lat, 'Longitude (EPSG 3857, m)':lon,'Elevation (HAE, m)':elev})
rand_coord['Latitude (EPSG 3857, m)'] = rand_coord['Latitude (EPSG 3857, m)'].astype(float)
rand_coord['Longitude (EPSG 3857, m)'] = rand_coord['Longitude (EPSG 3857, m)'].astype(float)
rand_coord['Elevation (HAE, m)'] = rand_coord['Elevation (HAE, m)'].astype(float)
###Output
_____no_output_____
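The loop above is easy to follow but slow in pure Python. An equivalent, fully vectorized draw could look like the sketch below (same column names as above; the result is statistically equivalent rather than byte-identical):
```python
# Sketch: vectorized Monte Carlo draw, assuming the same gdf as above.
n = 10000
rows = gdf.sample(n=n, replace=True)  # one source point per draw
rand_coord_vec = pd.DataFrame({
    'Latitude (EPSG 3857, m)': np.random.normal(rows['Y (m)'], rows['Latitude 2-sigma (m)'] / 2),
    'Longitude (EPSG 3857, m)': np.random.normal(rows['X (m)'], rows['Longitude 2-sigma (m)'] / 2),
    'Elevation (HAE, m)': np.random.normal(rows['Height above ellipsoid (m)'], rows['Elevation 2-sigma (m)'] / 2),
})
```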
###Markdown
Calculate average coordinates and elevationWith associated 2-sigma uncertainties, and create geodataframe of results.
###Code
Latavg=np.mean(rand_coord['Latitude (EPSG 3857, m)'])
Lat2sd=np.std(rand_coord['Latitude (EPSG 3857, m)'])*2
Lonavg=np.mean(rand_coord['Longitude (EPSG 3857, m)'])
Lon2sd=np.std(rand_coord['Longitude (EPSG 3857, m)'])*2
Havg=np.mean(rand_coord['Elevation (HAE, m)'])
H2sd=np.std(rand_coord['Elevation (HAE, m)'])*2 #2-sigma, consistent with Lat2sd and Lon2sd
# Create geodataframe with average point values
d={'Processing type': ['Average'],
'Latitude (dec degrees)': [np.nan],
'Longitude (dec degrees)':[np.nan],
'Height above ellipsoid (m)':[Havg],
'Latitude 2-sigma (m)':[Lat2sd],
'Longitude 2-sigma (m)':[Lon2sd],
'Elevation 2-sigma (m)':[H2sd],
'X (m)':[Lonavg],
'Y (m)':[Latavg]}
df1 = pd.DataFrame(data=d)
gdf1 = gpd.GeoDataFrame(df1, geometry=gpd.points_from_xy(df1['X (m)'], df1['Y (m)']))
gdf1.crs='epsg:3857'
gdf1 = gdf1.to_crs('epsg:4326')
gdf1['Latitude (dec degrees)']=gdf1.geometry.x
gdf1['Longitude (dec degrees)']=gdf1.geometry.y
gdf1 = gdf1.to_crs('epsg:3857')
gdf1
f = plt.figure(figsize=(20,10))
ax1= f.add_subplot(121)
ax2 = f.add_subplot(122)
plt.rcParams["axes.labelsize"] = 15
f.suptitle('Average Latitude: {:.9f} decimal degrees +/- {:.3f} m\nAverage Longitude: {:.9f} decimal degrees +/- {:.3f} m\nAverage elevation: : {:.3f} m +/- {:.3f} m'.format(gdf1['Latitude (dec degrees)'][0],gdf1['Latitude 2-sigma (m)'][0],gdf1['Longitude (dec degrees)'][0],gdf1['Longitude 2-sigma (m)'][0],gdf1['Height above ellipsoid (m)'][0],gdf1['Elevation 2-sigma (m)'][0]), fontsize=20)
# Plot the lat/Lon comparison
graph=sns.kdeplot(rand_coord['Longitude (EPSG 3857, m)'], rand_coord['Latitude (EPSG 3857, m)'], kind="kde",fill=True,ax=ax1)
f = np.linspace(0, 2*pi, 100)
for index, row in gdf.iterrows():
Lon=row['X (m)']
Lon_unc=row['Longitude 2-sigma (m)']
Lat=row['Y (m)']
Lat_unc=row['Latitude 2-sigma (m)']
ax1.plot(Lon+Lon_unc*np.cos(f) , Lat+Lat_unc*np.sin(f),color='k')
#Plot the elevation comparison
sns.distplot(rand_coord["Elevation (HAE, m)"], ax=ax2,hist=False)
for index, row in gdf.iterrows():
elev_min=row['Height above ellipsoid (m)']-row['Elevation 2-sigma (m)']
elev_max=row['Height above ellipsoid (m)']+row['Elevation 2-sigma (m)']
ax2.axvspan(xmin=elev_min, xmax=elev_max, alpha=0.1, color='k')
plt.savefig('GNSS_averaged.svg')
print('Average Latitude: {:.9f} decimal degrees +/- {:.3f} m\nAverage Longitude: {:.9f} decimal degrees +/- {:.3f} m\nAverage elevation: : {:.3f} m +/- {:.3f} m'.format(gdf1['Latitude (dec degrees)'][0],gdf1['Latitude 2-sigma (m)'][0],gdf1['Longitude (dec degrees)'][0],gdf1['Longitude 2-sigma (m)'][0],gdf1['Height above ellipsoid (m)'][0],gdf1['Elevation 2-sigma (m)'][0]))
###Output
C:\Users\arovere\AppData\Local\Continuum\anaconda3\lib\site-packages\seaborn\_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
C:\Users\arovere\AppData\Local\Continuum\anaconda3\lib\site-packages\seaborn\distributions.py:1184: UserWarning: The following kwargs were not used by contour: 'kind'
**contour_kws,
C:\Users\arovere\AppData\Local\Continuum\anaconda3\lib\site-packages\seaborn\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
|
osmUtils_prototype.ipynb | ###Markdown
OSMUtils ExamplesWhen developing, to test newly saved code:- restart kernel- after each save in the command line use: `!pip install -e .` to install osmUtils from local directory- if this imports with no issues, the code is good!- then re-import osmUtils-examples here: https://wiki.openstreetmap.org/wiki/Map_features Leisure
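A common alternative to restarting the kernel after every change (an extra option, not part of the workflow above) is IPython's autoreload extension:
```python
# Reload edited modules automatically before each cell executes.
%load_ext autoreload
%autoreload 2
```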
###Code
!pip install -e .
from IPython.display import clear_output
clear_output()
import osmUtils as osmu
from osmUtils import utils_geo, utils_osm, utils_map
print(f'osmUtils ver. {osmu.__version__} ready!')
###Output
osmUtils ver. 0.0.1 ready!
###Markdown
Instantiate osmCol Object
###Code
import LMIPy as lmi #Docs https://lmipy.readthedocs.io/en/latest/quickstart.html#From-Political-Boundaries
params={
'iso': 'USA',
'adm1': 33,
'adm2': 32
}
ny_geom = lmi.Geometry(parameters=params)
ny_geom.map()
geom = ny_geom.shape()
col = osmu.CollectionOsm(geometry=geom[0], zoom=5, crs=None, geom_tiles=False)
col
manifest = col.manifest
manifest
len(manifest)
geometry = manifest.iloc[0].geometry
osm_Data_roads = osmu.OsmDownload(
geometry,
osm_type='all_roads'
)
road_df = osm_Data_roads.osm_gdf
road_df.head()
osmu.OsmVisualize(road_df, color='#a7a9ab')
osm_Data_water = osmu.OsmDownload(geometry, osm_type='water_features')
water_df = osm_Data_water.get_osm_gdf()
water_df.head()
viz = osmu.OsmVisualize(water_df, color = '#4287f5', basemap='cartodbdark_matter')
viz
osm_Data_buildings = osmu.OsmDownload(geometry, osm_type='buildings')
buildings_df = osm_Data_buildings.osm_gdf
osm_Data_buildings.save_gdf_to_file(
filename='osm_data',
driver='ESRI Shapefile',
)
viz = osmu.OsmVisualize(buildings_df, color = '#f5a442')
viz
osm_Data_green = osmu.OsmDownload(geometry, osm_type='parks' )
green_df = osm_Data_green.osm_gdf
viz = osmu.OsmVisualize(green_df, color = '#3e964e')
viz
osmu.SavetoFile(osm_df)
###Output
_____no_output_____
###Markdown
osmVisualise
###Code
viz = osmu.OsmVisualize(osm_df, color='#8c9191')
viz
###Output
_____no_output_____
###Markdown
Library Development
###Code
osm_Data.save_gdf_to_file(filename='osm_data', driver='ESRI Shapefile')
###Output
_____no_output_____
###Markdown
Library Development
###Code
### Simplification and Precision functions for GeodataFrame
import geopandas as gpd
from shapely.wkt import loads, dumps
def point_round(point, precision):
return loads(dumps(point, rounding_precision=precision))
def reduce_precision(df, precision=5):
    _df = df.copy()
    _df.geometry = _df.geometry.apply(lambda x: point_round(x, precision))  # assign the rounded geometries back
    return _df
def simplify(df, tolerance=0.001):
    _df = df.copy()
    _df.geometry = _df.geometry.simplify(tolerance=tolerance, preserve_topology=True)  # assign the simplified geometries back
    return _df
_osm_df = simplify(osm_df)
_osm_df.head()
def embed_map(m):
"""Resolves Folium rendering in chrome+Jupyter issue"""
from IPython.display import IFrame
m.save('index.html')
return IFrame('index.html', width='100%', height='750px')
# NOTES: https://python-visualization.github.io/folium/quickstart.html
## Example of taking a geodataframe and visualising via folium.
## Note the optional max_features to slice no of features (subset)
from shapely.geometry import box
import folium
import json
gdf_projected = osm_df.set_crs('EPSG:3857')
gjson_str = gdf_projected.to_json()
gjson = json.loads(gjson_str)
max_features = None
max_index = max_features if max_features and max_features <= len(gjson['features']) else len(gjson['features'])
features = gjson['features'][:max_index]
bounds = list(osm_df.bounds.iloc[0])
geom = box(bounds[0], bounds[1], bounds[2], bounds[3])
zoom_start=12
basemap='cartodbpositron'
color='#f69'
m = folium.Map(
location=[geom.centroid.y, geom.centroid.x],
zoom_start=zoom_start,
tiles=basemap
)
style_function = lambda x: {'color': color, 'weight':1, 'opacity':1}
folium.GeoJson({
"type": "FeatureCollection",
"features": features
}, style_function=style_function).add_to(m)
embed_map(m)
###Output
_____no_output_____
###Markdown
Example clean OSM pipeline
###Code
# import libraries
import requests
import geopandas as gpd
import shapely.wkb
queryUrl = 'https://api.resourcewatch.org/v1/query/Politcial-Boundaries-GADM-adminitrative-level-1-1490086842541'
#queryParams = {'sql': "select the_geom, name_1 from gadm28_adm1 where iso='USA'"}
queryParams = {'sql': "select * from gadm28_adm1 where name_1='New York'"}
resp = requests.get(queryUrl, queryParams)
data = resp.json()['data']  # the API response wraps the returned rows in a 'data' key
for el in data:
geometry = shapely.wkb.loads(el['the_geom'], hex=True)
name = el['name_1']
el['geometry']=geometry
gdf = gpd.GeoDataFrame(data)
gdf.head()
###Output
_____no_output_____
###Markdown
Validate incomming geometry and tiles for manifest:
###Code
# WE'RE GOING TO WORK WITH THE GEOMETRY OF THE STATE OF NEW YORK
geometry = gdf['geometry'][0]
geometry
geometry.to_wkt()
gdf['geojson'].iloc[0]
###Output
_____no_output_____
###Markdown
LIB:
###Code
geometry_df = gpd.GeoDataFrame(geometry)
geometry_df = geometry_df.set_geometry(0)
geometry_df = geometry_df.rename(columns={0:'geometry'})
geometry_df.head()
#import libraries for the generation of the manifest
import mercantile as mt
from shapely.geometry import shape
import pandas as pd
zoom_levels = 6
## Create tiles df
tiles = []
for tile in mt.tiles(-180, -85, 180, 85, zoom_levels, truncate=False):
tile_id = f"{tile.z}_{tile.x}_{tile.y}"
geom = mt.feature(tile)['geometry']
polygon = shape(geom)
tiles.append({
'tile_id': tile_id,
'geometry': polygon
})
# generate geodataframe with tiles
tiles_df = gpd.GeoDataFrame(tiles)
print(f'There are {len(tiles_df)} tiles originally')
#check projection tiles and geometry
default_crs = "EPSG:4326"
if geometry_df.crs is None:
#set crs
geometry_df = geometry_df.set_crs(default_crs)
#check projection of tiles
if tiles_df.crs is None:
tiles_df = tiles_df.set_crs(default_crs)
# keep tiles that intersect with input geometry
# Spatial join land and tiles, then remove rows without intersect
geom_tiles = gpd.sjoin(tiles_df, geometry_df, how='left', op='intersects', lsuffix='tiles', rsuffix='geom')
## Keep only intersecting tile geoms
manifest = geom_tiles[pd.notna(geom_tiles.geometry_geom)]
#add the tracking information
manifest['exclude'] = 0
manifest['exported'] = 0
manifest['uploaded'] = 0
manifest.head()
###Output
_____no_output_____
###Markdown
Work in the lib:
###Code
import mercantile as mt
import geopandas as gpd
import pandas as pd
from shapely.geometry import shape, MultiPolygon, Polygon
#class OsmCollection(object):
# def __init__(self, geometry=None, zoom=[5,7], **kwargs):
# self.geometry = geometry
# self.min_zoom, self.max_zoom = sorted(zoom)
# self.tiles = self.generate_tiles()
# ### Methods
# def generate_tiles(self):
# """Generates tiles and ids"""
# return gdf
# def stage_requests(self, osm_request_config)
# """
# - Iterate through self.tiles
# - optionally is_intersect? operation to remove unneeded tiles
# - intatiate OsmObj class fro each row in self.tiles
# """
# ## obj = OsmObj(geom, tile, osm_request_config)
# self.something = "List of OsmObj objects"
class CollectionOsm:
"""
This is the main CollectionOsm class. This collection class will produce a tile manifest at a specific zoom level
to keep track of the OSM retrieving process.
Parameters
----------
geometry: shapely.geometry.Polygon or shapely.geometry.MultiPolygon
geographic boundaries to fetch geometries within
zoom: int
zoom level used to generate the tiles
crs: str
the starting CRS of the passed-in geometry. If None, it will be set to "EPSG:4326"
tile_geom: bool
if True, the manifest keeps the tile geometries that intersect the input geometry;
if False, the manifest is built for the input geometry itself.
"""
def __init__(self, geometry=None, zoom=5, crs=None, tile_geom=True, **kwargs):
self.zoom = zoom
if crs is None:
self.crs = default_crs
else:
self.crs = crs
self.tiles = None
#generate geometry gdf
self.geometry = self.geometry_to_gdf(geometry=geometry, crs=self.crs)
if tile_geom:
self.tiles = self.generate_tiles(crs=self.crs)
#generate manifest
self.manifest = self.generate_manifest(geometry=self.geometry, tiles=self.tiles)
#else:
# #generate manifest for inserted geom
# print('todo')
#def __repr__(self):
# return 'yes'
#
#methods
def generate_tiles(self, crs):
"""
Generate tiles for the manifest.
"""
#generate tiles
tiles = []
for tile in mt.tiles(-180, -85, 180, 85, self.zoom, truncate=False):
tile_id = f"{tile.z}_{tile.x}_{tile.y}"
geom = mt.feature(tile)['geometry']
polygon = shape(geom)
tiles.append({
'tile_id': tile_id,
'geometry': polygon
})
# generate geodataframe with tiles
gdf = gpd.GeoDataFrame(tiles)
#check projection
if gdf.crs is None:
gdf = self.set_crs(gdf, crs)
elif gdf.crs != self.crs:
gdf = self.reproject_gdf(gdf,crs)
return gdf
def geometry_to_gdf(self,geometry, crs):
"""
Create GeoDataFrame from a (multi)polygon.
Parameters
----------
geometry : shapely.geometry.Polygon or shapely.geometry.MultiPolygon
geographic boundaries to fetch geometries within
Returns
-------
gdf : geopandas.GeoDataFrame
"""
#check that the incoming geometry is valid
if not geometry.is_valid:
print('The geometry is invalid')
#check that the geometry is a polygon or multipolygon
if not isinstance(geometry, (Polygon, MultiPolygon)):
print('The geometry must be a shapely.geometry.Polygon or shapely.geometry.MultiPolygon')
#create gdf from the incoming geometry
gdf = gpd.GeoDataFrame(geometry)
gdf = gdf.set_geometry(0)
gdf = gdf.rename(columns={0:'geometry'})
if gdf.crs is None:
gdf = self.set_crs(gdf, crs)
elif gdf.crs != self.crs:
gdf = self.reproject_gdf(gdf,crs)
return gdf
def set_crs(self, gdf, crs):
"""
Set CRS in GeoDataFrame when current projection is not defined.
Parameters
----------
gdf : geopandas.GeoDataFrame
the geodataframe to set the projection
Returns
-------
gdf : geopandas.GeoDataFrame
the geodataframe with the projection defined """
gdf = gdf.set_crs(crs)
return gdf
def reproject_gdf(self, gdf,to_crs):
"""Project a GeoDataFrame from its current CRS to another.
Parameters
----------
gdf : geopandas.GeoDataFrame
the GeoDataFrame to be projected
to_crs : string
CRS to project the geodataframe
Returns
----------
gdf_proj : geopandas.GeoDataFrame
the projected GeoDataFrame"""
gdf_proj = gdf.to_crs(to_crs)
return gdf_proj
def generate_manifest(self, geometry, tiles):
"""Generates a gedodataframe manifest to keep track of the osm retrieving process.
Parameters
----------
geometry : geopandas.GeoDataFrame
the GeoDataFrame to be projected
tiles : geopandas.GeoDataFrame
tiles geodataframe to be intersected with the incoming geometry.
if None, it will produce a manifest just for the incoming geometry.
Returns
----------
manifest : geopandas.GeoDataFrame
manifest geodataframe"""
if tiles is None:
#return manifest for geometry
geom_tiles = geometry
else:
geom_tiles = gpd.sjoin(tiles, geometry, how='left', op='intersects', lsuffix='tiles', rsuffix='geom')
## Keep only intersecting tile geoms
manifest = geom_tiles[pd.notna(geom_tiles.geometry_geom)]
#add the tracking information
manifest['exclude'] = 0
manifest['exported'] = 0
manifest['uploaded'] = 0
return manifest
osm_col = CollectionOsm(
geometry=geometry,
zoom=6,
crs='EPSG:4326',
tile_geom=True)
osm_col
osm_col.manifest
###Output
_____no_output_____ |
notebooks/LEP_Custom_Denom.ipynb | ###Markdown
February 8th Standup
Custom denominator for LEP
- Denominator columns in output
- No more "not_lep" or "not_fb" (foreign born), just affirmative
- Custom denominator of just people 5 and over for LEP
I chose to put these numbers in a jupyter notebook to walk through as for some reason it's brain melting. I will open the pull request after standup
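To make the denominator logic concrete, here is a minimal sketch (illustrative only, plain pandas, with numbers copied from the outputs further down rather than the real aggregation code) of how a count column, its custom denominator, and the resulting fraction relate:
```python
import pandas as pd

# Illustrative PUMA-level numbers taken from the outputs shown below
toy = pd.DataFrame({
    "lep-count": [31081.0, 23174.0],             # people with limited English proficiency
    "lep-fraction-denom": [149397.0, 102309.0],  # custom denominator: people age 5 and over
}, index=[4001, 3701])

# The reported fraction is simply the count divided by its custom denominator
toy["lep-fraction"] = toy["lep-count"] / toy["lep-fraction-denom"]
print(toy)
```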
###Code
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../utils')
import wd_management
wd_management.set_wd_root()
from aggregate.PUMS.count_PUMS_demographics import PUMSCountDemographics
aggregator = PUMSCountDemographics(limited_PUMA=True)
df = aggregator.aggregated
###Output
_____no_output_____
###Markdown
Start with total_pop. It's easiest
###Code
print(df['total_pop-count'])
print()
print(df['total_pop-fraction'])
print()
print(df['total_pop-fraction-denom'])
###Output
4001 162630.0
3701 110196.0
4101 163980.0
3801 216605.0
3901 164675.0
Name: total_pop-count, dtype: float64
4001 1.0
3701 1.0
4101 1.0
3801 1.0
3901 1.0
Name: total_pop-fraction, dtype: float64
4001 162630.0
3701 110196.0
4101 163980.0
3801 216605.0
3901 164675.0
Name: total_pop-fraction-denom, dtype: float64
###Markdown
Ok that all looks good
Foreign Born
This indicator is next easiest as there is only one category and the denom should be total pop
###Code
print(df['fb-count'])
print()
print(df['fb-fraction'])
assert (df['fb-fraction-denom'] == df['total_pop-count']).all()
assert (df['fb-count']/df['fb-fraction-denom'] == df['fb-fraction']).all()
###Output
4001 38266.0
3701 36435.0
4101 62053.0
3801 100086.0
3901 26725.0
Name: fb-count, dtype: float64
4001 0.235295
3701 0.330638
4101 0.378418
3801 0.462067
3901 0.162289
Name: fb-fraction, dtype: float64
###Markdown
Ok great. What about foreign born by race? What should that denom be? Let's take foreign born asian as an example
###Code
print(df['fb-anh-count'])
print()
print(df['fb-anh-fraction'])
print()
print(df['fb-anh-fraction-denom'])
###Output
4001 7615.0
3701 3404.0
4101 17213.0
3801 4375.0
3901 5059.0
Name: fb-anh-count, dtype: float64
4001 0.693597
3701 0.727817
4101 0.673540
3801 0.692905
3901 0.638199
Name: fb-anh-fraction, dtype: float64
4001 10979.0
3701 4677.0
4101 25556.0
3801 6314.0
3901 7927.0
Name: fb-anh-fraction-denom, dtype: float64
###Markdown
The denominator here is the total number of asian non-hispanic people in PUMA 4001 (greenpoint). fb-anh-pop/anh-total pop - 7615/10979 = 69% of the asian non hispanic population in greenpoint is foreign born
###Code
assert (df['fb-anh-fraction-denom'] == df['total_pop-anh-count']).all()
###Output
_____no_output_____
###Markdown
Limited English proficiency
This is a little more complex as our denominator is smaller than all people
###Code
print(df['lep-count'])
print()
print(df['lep-fraction'])
print()
print(df['lep-fraction-denom'])
assert (df['lep-count']/df['lep-fraction-denom'] == df['lep-fraction']).all()
###Output
4001 31081.0
3701 23174.0
4101 34740.0
3801 74470.0
3901 10719.0
Name: lep-count, dtype: float64
4001 0.208043
3701 0.226510
4101 0.223391
3801 0.361995
3901 0.068847
Name: lep-fraction, dtype: float64
4001 149397.0
3701 102309.0
4101 155512.0
3801 205721.0
3901 155694.0
Name: lep-fraction-denom, dtype: float64
###Markdown
How do the denominators for LEP and total pop compare?
###Code
df['lep-fraction-denom']/df['total_pop-count']
###Output
_____no_output_____
###Markdown
91-94% of people are over age 5, that passes the smell test
Similar question as above, what is the denominator for LEP black non-hispanic?
###Code
print(df['lep-bnh-fraction-denom'])
print()
print(df['lep-bnh-fraction-denom'])
assert (df['lep-bnh-count']/df['lep-bnh-fraction-denom'] == df['lep-bnh-fraction']).all()
###Output
4001 5894.0
3701 12946.0
4101 10317.0
3801 15800.0
3901 1343.0
Name: lep-bnh-fraction-denom, dtype: float64
4001 5894.0
3701 12946.0
4101 10317.0
3801 15800.0
3901 1343.0
Name: lep-bnh-fraction-denom, dtype: float64
###Markdown
That looks good to me
Age buckets
Finally look at age buckets, should be all the same pattern but doesn't hurt to take a look. Denominator is supposed to be all people
###Code
print(df['P16t64-count'])
print()
print(df['P16t64-fraction'])
print()
print(df['P16t64-fraction-denom'])
assert (df['P16t64-count']/df['P16t64-fraction-denom'] == df['P16t64-fraction']).all()
###Output
4001 114273.0
3701 66563.0
4101 119440.0
3801 151546.0
3901 104964.0
Name: P16t64-count, dtype: float64
4001 0.702656
3701 0.604042
4101 0.728382
3801 0.699642
3901 0.637401
Name: P16t64-fraction, dtype: float64
4001 162630.0
3701 110196.0
4101 163980.0
3801 216605.0
3901 164675.0
Name: P16t64-fraction-denom, dtype: float64
|
sigmoid.ipynb | ###Markdown
Sigmoid
The sigmoid takes any real value and returns an output in the range (0, 1). In notation form:
$$f(x) = \frac{1}{1+e^{-x}}$$
The sigmoid function creates an 'S'-shaped graph. It is generally used for binary classification in logistic regression, and in artificial neural networks it is used as an activation function.
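One identity worth keeping in mind (not shown in the original cell, but standard calculus) is that the derivative of the sigmoid can be written in terms of its own output, which is part of why it is convenient as an activation function:
$$f'(x) = \frac{e^{-x}}{\left(1+e^{-x}\right)^{2}} = f(x)\,\bigl(1 - f(x)\bigr)$$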
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def sigmoid(data):
sigmoid_score =[1/ float(1+ np.exp(-x)) for x in data]
return sigmoid_score
sigmoid_input =[1.1,-1,1,2,3,4,5,6,1,99,1000]
sigmoid(sigmoid_input)
###Output
_____no_output_____
###Markdown
Graph Plot
###Code
np.sum(sigmoid(sigmoid_input))
def sigmoid_plotter(x,y,x_title,y_title):
plt.figure(figsize=(10,8))
plt.plot(x,y)
plt.xlabel(x_title)
plt.ylabel(y_title)
plt.show()
graph_x = range(0,20)
graph_y = sigmoid(graph_x)
print("graph_x reading {}".format(graph_x))
print("graph_y reading {}".format(graph_y))
sigmoid_plotter(graph_x, graph_y, 'input', 'sigmoid_score')
###Output
_____no_output_____ |
04B - Working with Datasets.ipynb | ###Markdown
Working with Datasets
In the previous labs, you used a *datastore* to provide centralized, cloud-based data access. In this lab, you'll explore *datasets*, a further abstraction that makes it easier to work with specific data for experiments and training.
Connect to Your Workspace
The first thing you need to do is to connect to your workspace using the Azure ML SDK.
> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare Data
In the previous lab, you created a datastore. Datasets are usually (though not always) based on data in datastores.
If you did not complete the previous lab, run the following code to upload two local CSV files to the default datastore in your workspace (if you *did* complete the previous lab, this will just overwrite the same files).
###Code
ws.get_default_datastore().upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
###Output
_____no_output_____
###Markdown
Create a Tabular Dataset
A dataset is an object that encapsulates a specific data source. Let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *Tabular* dataset.
###Code
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common python techniques.
Create a File Dataset
The dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files.
###Code
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
print(file_path)
###Output
_____no_output_____
###Markdown
Register Datasets
Now that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.
We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes files**.
###Code
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
# Register the file dataset
try:
file_data_set = file_data_set.register(workspace=ws,
name='diabetes file dataset',
description='diabetes files',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
print('Datasets registered')
###Output
_____no_output_____
###Markdown
You can view and manage datasets on the **Datasets** page for your workspace in [Azure ML Studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
###Code
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
dataset = Dataset.get_by_name(ws, dataset_name)
print("\t", dataset.name, 'version', dataset.version)
###Output
_____no_output_____
###Markdown
If you completed Labs 2A and 2B, you will see that registered datasets include transformations created using the visual Designer tool. You may also notice that in registering **diabetes dataset** with the same name as the dataset you created using the *Studio* interface in a previous exercise, you are creating a new *version* of the dataset. The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:
```python
dataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)
```
Train a Model from a Tabular Dataset
Now that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as *inputs* in the estimator being used to run the script.
Run the following two code cells to create:
1. A folder named **diabetes_training_from_tab_dataset**
2. A script that trains a classification model by using a tabular dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['diabetes'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can create an estimator to run the script, and define a named *input* for the training dataset, which is read by the script.
> **Note**: The **Dataset** class is defined in the **azureml-dataprep** package (which is installed with the SDK), and this package includes optional support for **pandas** (which is used by the **to_pandas_dataframe()** method), so you need to include this package in the environment where the training experiment will be run.
###Code
from azureml.train.sklearn import SKLearn
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an estimator
estimator = SKLearn(source_directory=experiment_folder,
entry_script='diabetes_training.py',
script_params=script_params,
compute_target = 'local',
inputs=[diabetes_ds.as_named_input('diabetes')], # Pass the Dataset object as an input...
pip_packages=['azureml-dataprep[pandas]'] # ...so you need the dataprep package
)
# Create an experiment
experiment_name = 'diabetes-training'
experiment = Experiment(workspace = ws, name = experiment_name)
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.
As with all experiments, you can view the details of the experiment run in [Azure ML Studio](https://ml.azure.com), and you can write code to retrieve the metrics and files generated:
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
The model we trained is saved as the **diabetes_model.pkl** file in the **outputs** folder, so you can register it.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'SKLearn Estimator (tabular dataset)'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Train a Model from a File Dataset
You've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?
When you're using a file dataset, the dataset input passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.
Run the following two code cells to create:
1. A folder named **diabetes_training_from_file_dataset**
2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Workspace, Dataset, Experiment, Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['diabetes'] # Get the training data from the estimator input
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files))
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Next we need to change the way we pass the dataset to the estimator - it needs to define a mount point from which the script can read the files. For large volumes of data, you'd generally use the **as_mount** method to stream the files directly from the dataset source; but when running on local compute (as we are in this example), you need to use the **as_download** option to download the dataset files to a local folder.
Also, since the **Dataset** class is defined in the **azureml-dataprep** package, we need to include that in the experiment environment.
###Code
from azureml.train.sklearn import SKLearn
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")
# Create an estimator
estimator = SKLearn(source_directory=experiment_folder,
entry_script='diabetes_training.py',
script_params=script_params,
compute_target = 'local',
inputs=[diabetes_ds.as_named_input('diabetes').as_download(path_on_compute='diabetes_data')], # Pass the Dataset object as an input
pip_packages=['azureml-dataprep[pandas]'] # so we need the dataprep package
)
# Create an experiment
experiment_name = 'diabetes-training'
experiment = Experiment(workspace = ws, name = experiment_name)
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the file dataset was processed and the data files downloaded.
As with all experiments, you can view the details of the experiment run in [Azure ML Studio](https://ml.azure.com), and you can write code to retrieve the metrics and files generated:
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Once again, let's register the model that we trained.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'SKLearn Estimator (file dataset)'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Data
Data is the foundation on which machine learning models are built. Managing data centrally in the cloud, and making it accessible to teams of data scientists who are running experiments and training models on multiple workstations and compute targets, is an important part of any professional data science solution.
In this notebook, you'll explore two Azure Machine Learning objects for working with data: *datastores* and *datasets*.
Install the Azure Machine Learning SDK
The Azure Machine Learning SDK is updated frequently. Run the following cell to upgrade to the latest release, along with the additional package to support notebook widgets.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
_____no_output_____
###Markdown
Connect to your workspace
With the latest version of the SDK installed, you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Work with datasets
Azure Machine Learning provides an abstraction for data in the form of *datasets*. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be *tabular* or *file*-based.
Upload data to a datastore
Most datasets are based on data in a datastore, so let's upload some data on which to base our datasets.
###Code
# Get the default datastore
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
###Output
_____no_output_____
###Markdown
Create a tabular dataset
Next, let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *tabular* dataset.
###Code
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common python techniques.
Create a file dataset
The dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files.
###Code
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
print(file_path)
###Output
_____no_output_____
###Markdown
Register datasets
Now that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.
We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes files**.
###Code
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
# Register the file dataset
try:
file_data_set = file_data_set.register(workspace=ws,
name='diabetes file dataset',
description='diabetes files',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
print('Datasets registered')
###Output
_____no_output_____
###Markdown
You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
###Code
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
dataset = Dataset.get_by_name(ws, dataset_name)
print("\t", dataset.name, 'version', dataset.version)
###Output
_____no_output_____
###Markdown
The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:
```python
dataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)
```
Train a model from a tabular dataset
Now that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as *inputs* in the estimator being used to run the script.
Run the following two code cells to create:
1. A folder named **diabetes_training_from_tab_dataset**
2. A script that trains a classification model by using a tabular dataset that is passed to it as an argument.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Run, Dataset
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Get the script arguments (regularization rate and training dataset ID)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Get the training dataset
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
> **Note**: In the script, the dataset is passed as a parameter (or argument). In the case of a tabular dataset, this argument will contain the ID of the registered dataset, so you could write code in the script to get the experiment's workspace from the run context and then get the dataset using its ID, like this:
>
> ```
> run = Run.get_context()
> ws = run.experiment.workspace
> dataset = Dataset.get_by_id(ws, id=args.training_dataset_id)
> diabetes = dataset.to_pandas_dataframe()
> ```
>
> However, Azure Machine Learning runs automatically identify arguments that reference named datasets and add them to the run's **input_datasets** collection, so you can also retrieve the dataset from this collection by specifying its "friendly name" (which, as you'll see shortly, is specified in the argument definition in the script run configuration for the experiment). This is the approach taken in the script above.
Now you can run the script as an experiment, defining an argument for the training dataset, which is read by the script.
> **Note**: The **Dataset** class depends on some components in the **azureml-dataprep** package, which includes optional support for **pandas** that is used by the **to_pandas_dataframe()** method. So you need to include this package in the environment where the training experiment will be run.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Create a Python environment for the experiment
sklearn_env = Environment("sklearn-env")
# Ensure the required packages are installed (we need scikit-learn, Azure ML defaults, and Azure ML dataprep)
packages = CondaDependencies.create(conda_packages=['scikit-learn','pip'],
pip_packages=['azureml-defaults','azureml-dataprep[pandas]'])
sklearn_env.python.conda_dependencies = packages
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=sklearn_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
> **Note:** The **--input-data** argument passes the dataset as a *named input* that includes a *friendly name* for the dataset, which is used by the script to read it from the **input_datasets** collection in the experiment run. The string value in the **--input-data** argument is actually the registered dataset's ID. As an alternative approach, you could simply pass `diabetes_ds.id`, in which case the script can access the dataset ID from the script arguments and use it to get the dataset from the workspace, but not from the **input_datasets** collection.
The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.
Register the trained model
As with any training experiment, you can retrieve the trained model and register it in your Azure Machine Learning workspace.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Tabular dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Train a model from a file dataset
You've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?
When you're using a file dataset, the dataset argument passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.
Run the following two code cells to create:
1. A folder named **diabetes_training_from_file_dataset**
2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Dataset, Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob
# Get script arguments (regularization rate and file dataset mount point)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument('--input-data', type=str, dest='dataset_folder', help='data mount point')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['training_files'] # Get the training data path from the input
# (You could also just use args.dataset_folder if you don't want to rely on a hard-coded friendly name)
# Read the files
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files), sort=False)
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Just as with tabular datasets, you can retrieve a file dataset from the **input_datasets** collection by using its friendly name. You can also retrieve it from the script argument, which in the case of a file dataset contains a mount path to the files (rather than the dataset ID passed for a tabular dataset).
Next we need to change the way we pass the dataset to the script - it needs to define a path from which the script can read the files. You can use either the **as_download** or **as_mount** method to do this. Using **as_download** causes the files in the file dataset to be downloaded to a temporary location on the compute where the script is being run, while **as_mount** creates a mount point from which the files can be streamed directly from the datastore.
You can combine the access method with the **as_named_input** method to include the dataset in the **input_datasets** collection in the experiment run (if you omit this, for example by setting the argument to `diabetes_ds.as_mount()`, the script will be able to access the dataset mount point from the script arguments, but not from the **input_datasets** collection).
###Code
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_files').as_download()], # Reference to dataset location
environment=sklearn_env) # Use the environment created previously
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the files in the file dataset were downloaded to a temporary folder so the script could read them.
Register the trained model
Once again, you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'File dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Data
Data is the foundation on which machine learning models are built. Managing data centrally in the cloud, and making it accessible to teams of data scientists who are running experiments and training models on multiple workstations and compute targets is an important part of any professional data science solution.
In this notebook, you'll explore two Azure Machine Learning objects for working with data: *datastores*, and *datasets*.
Install the Azure Machine Learning SDK
The Azure Machine Learning SDK is updated frequently. Run the following cell to upgrade to the latest release, along with the additional package to support notebook widgets.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
_____no_output_____
###Markdown
Connect to your workspace
With the latest version of the SDK installed, now you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Work with datasets
Azure Machine Learning provides an abstraction for data in the form of *datasets*. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be *tabular* or *file*-based.
Upload data to a datastore
Most datasets are based on data in a datastore, so let's upload some data on which to base our datasets.
###Code
# Get the default datastore
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
###Output
_____no_output_____
###Markdown
Create a tabular dataset
Let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *tabular* dataset.
###Code
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common python techniques.
Create a file dataset
The dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files.
###Code
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
print(file_path)
###Output
_____no_output_____
###Markdown
Register datasets
Now that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.
We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes files**.
###Code
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
# Register the file dataset
try:
file_data_set = file_data_set.register(workspace=ws,
name='diabetes file dataset',
description='diabetes files',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
print('Datasets registered')
###Output
_____no_output_____
###Markdown
You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
###Code
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
dataset = Dataset.get_by_name(ws, dataset_name)
print("\t", dataset.name, 'version', dataset.version)
###Output
_____no_output_____
###Markdown
The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:
```python
dataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)
```
Train a model from a tabular dataset
Now that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as *inputs* in the estimator being used to run the script.
Run the following two code cells to create:
1. A folder named **diabetes_training_from_tab_dataset**
2. A script that trains a classification model by using a tabular dataset that is passed to it as an argument.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Run, Dataset
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Get the script arguments (regularization rate and training dataset ID)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Get the training dataset
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
> **Note**: In the script, the dataset is passed as a parameter (or argument). In the case of a tabular dataset, this argument will contain the ID of the registered dataset; so you could write code in the script to get the experiment's workspace from the run context, and then get the dataset using its ID, like this:
>
> ```
> run = Run.get_context()
> ws = run.experiment.workspace
> dataset = Dataset.get_by_id(ws, id=args.training_dataset_id)
> diabetes = dataset.to_pandas_dataframe()
> ```
>
> However, Azure Machine Learning runs automatically identify arguments that reference named datasets and add them to the run's **input_datasets** collection, so you can also retrieve the dataset from this collection by specifying its "friendly name" (which, as you'll see shortly, is specified in the argument definition in the script run configuration for the experiment). This is the approach taken in the script above.
Now you can run a script as an experiment, defining an argument for the training dataset, which is read by the script.
> **Note**: The **Dataset** class depends on some components in the **azureml-dataprep** package, which includes optional support for **pandas** that is used by the **to_pandas_dataframe()** method. So you need to include this package in the environment where the training experiment will be run.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Create a Python environment for the experiment
sklearn_env = Environment("sklearn-env")
# Ensure the required packages are installed (we need scikit-learn, Azure ML defaults, and Azure ML dataprep)
packages = CondaDependencies.create(conda_packages=['scikit-learn','pip'],
pip_packages=['azureml-defaults','azureml-dataprep[pandas]'])
sklearn_env.python.conda_dependencies = packages
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularizaton rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=sklearn_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
> **Note:** The **--input-data** argument passes the dataset as a *named input* that includes a *friendly name* for the dataset, which is used by the script to read it from the **input_datasets** collection in the experiment run. The string value in the **--input-data** argument is actually the registered dataset's ID. As an alternative approach, you could simply pass `diabetes_ds.id`, in which case the script can access the dataset ID from the script arguments and use it to get the dataset from the workspace, but not from the **input_datasets** collection.
The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.
Register the trained model
As with any training experiment, you can retrieve the trained model and register it in your Azure Machine Learning workspace.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Tabular dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Train a model from a file dataset
You've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?
When you're using a file dataset, the dataset argument passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.
Run the following two code cells to create:
1. A folder named **diabetes_training_from_file_dataset**
2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Dataset, Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob
# Get script arguments (regularization rate and file dataset mount point)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument('--input-data', type=str, dest='dataset_folder', help='data mount point')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['training_files'] # Get the training data path from the input
# (You could also just use args.dataset_folder if you don't want to rely on a hard-coded friendly name)
# Read the files
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files), sort=False)
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Just as with tabular datasets, you can retrieve a file dataset from the **input_datasets** collection by using its friendly name. You can also retrieve it from the script argument, which in the case of a file dataset contains a mount path to the files (rather than the dataset ID passed for a tabular dataset).Next we need to change the way we pass the dataset to the script - it needs to define a path from which the script can read the files. You can use either the **as_download** or **as_mount** method to do this. Using **as_download** causes the files in the file dataset to be downloaded to a temporary location on the compute where the script is being run, while **as_mount** creates a mount point from which the files can be streamed directly from the datastore.You can combine the access method with the **as_named_input** method to include the dataset in the **input_datasets** collection in the experiment run (if you omit this, for example by setting the argument to `diabetes_ds.as_mount()`, the script will be able to access the dataset mount point from the script arguments, but not from the **input_datasets** collection).
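For the file dataset used in the next cell, the two access modes differ only in the method chained onto the named input when building the script arguments - a quick sketch, not a complete configuration:

```python
# Copy the files onto the compute target before the script starts
download_input = diabetes_ds.as_named_input('training_files').as_download()

# Or stream the files from the datastore through a mount point instead
mount_input = diabetes_ds.as_named_input('training_files').as_mount()

# Either object can then be passed in the script arguments, for example:
arguments = ['--regularization', 0.1, '--input-data', download_input]
```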
###Code
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_files').as_download()], # Reference to dataset location
environment=sklearn_env) # Use the environment created previously
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the files in the file dataset were downloaded to a temporary folder to enable the script to read the files. Register the trained modelOnce again, you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'File dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Working with DatasetsIn the previous labs, you used a *datastore* to provide centralized, cloud-based data access. In this lab, you'll explore *datasets*, a further abstraction that makes it easier to work with specific data for experiments and training. Connect to Your WorkspaceThe first thing you need to do is to connect to your workspace using the Azure ML SDK.> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.9.0 to work with Lab01A
###Markdown
Prepare DataIn the previous lab, you created a datastore. Datasets are usually (though not always) based on data in datastores.If you did not complete the previous lab, run the following code to upload two local CSV files to the default datastore in your workspace (if you *did* complete the previous lab, this will just overwrite the same files).
###Code
ws.get_default_datastore().upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
###Output
Uploading an estimated of 2 files
Uploading ./data/diabetes.csv
Uploading ./data/diabetes2.csv
Uploaded ./data/diabetes2.csv, 1 files out of an estimated total of 2
Uploaded ./data/diabetes.csv, 2 files out of an estimated total of 2
Uploaded 2 files
###Markdown
Create a Tabular DatasetA dataset is an object that encapsulates a specific data source. Let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *Tabular* dataset.
###Code
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common python techniques. Create a File DatasetThe dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files.
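Once the file dataset has been created (as in the next cell), you could also bring the files down to the local machine and read them with pandas - a minimal sketch, assuming an arbitrary local target folder:

```python
import pandas as pd

# Download the files referenced by the dataset and keep the list of local paths
local_paths = file_data_set.download(target_path='./diabetes-file-data', overwrite=True)

# Read every downloaded CSV into a single dataframe
diabetes_local = pd.concat(pd.read_csv(f) for f in local_paths)
```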
###Code
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
print(file_path)
###Output
/diabetes.csv
/diabetes2.csv
###Markdown
Register DatasetsNow that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes file dataset**.
###Code
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
# Register the file dataset
try:
file_data_set = file_data_set.register(workspace=ws,
name='diabetes file dataset',
description='diabetes files',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
print('Datasets registered')
###Output
Datasets registered
###Markdown
You can view and manage datasets on the **Datasets** page for your workspace in [Azure ML Studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
###Code
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
dataset = Dataset.get_by_name(ws, dataset_name)
print("\t", dataset.name, 'version', dataset.version)
###Output
Datasets:
diabetes file dataset version 1
diabetes dataset version 1
TD-LAB02A-Visual_Diabetes_Training-Normalize_Data-Transformation_function-300e87f7 version 1
MD-LAB02A-Visual_Diabetes_Training-Train_Model-Trained_model-acaadfdc version 1
LAB01A-DiabetesDataset version 1
###Markdown
If you completed Labs 2A and 2B, you will see that registered datasets include transformations created using the visual Designer tool. You may also notice that in registering **diabetes dataset** with the same name as the dataset you created using the *Studio* interface in a previous exercise, you are creating a new *version* of the dataset. The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:```pythondataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)``` Train a Model from a Tabular DatasetNow that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as *inputs* in the estimator being used to run the script.Run the following two code cells to create:1. A folder named **diabetes_training_from_tab_dataset**2. A script that trains a classification model by using a tabular dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['diabetes'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_from_tab_dataset/diabetes_training.py
###Markdown
Now you can create an estimator to run the script, and define a named *input* for the training dataset, which is read by the script.> **Note**: The **Dataset** class is defined in the **azureml-dataprep** package (which is installed with the SDK), and this package includes optional support for **pandas** (which is used by the **to_pandas_dataframe()** method), so you need to include this package in the environment where the training experiment will be run.
###Code
from azureml.train.sklearn import SKLearn
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an estimator
estimator = SKLearn(source_directory=experiment_folder,
entry_script='diabetes_training.py',
script_params=script_params,
compute_target = 'local',
inputs=[diabetes_ds.as_named_input('diabetes')], # Pass the Dataset object as an input...
pip_packages=['azureml-dataprep[pandas]'] # ...so you need the dataprep package
)
# Create an experiment
experiment_name = 'diabetes-training'
experiment = Experiment(workspace = ws, name = experiment_name)
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.As with all experiments, you can view the details of the experiment run in [Azure ML Studio](https://ml.azure.com), and you can write code to retrieve the metrics and files generated:
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7893333333333333
AUC 0.8568632924585982
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/8_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/dataprep/engine_spans_l_ee0060f7-5745-46c6-aa60-1e81fc561958.jsonl
logs/azureml/dataprep/python_span_l_ee0060f7-5745-46c6-aa60-1e81fc561958.jsonl
outputs/diabetes_model.pkl
###Markdown
The model we trained is saved as the **diabetes_model.pkl** file in the **outputs** folder, so you can register it.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'SKLearn Estimator (tabular dataset)'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 4
Training context : SKLearn Estimator (tabular dataset)
AUC : 0.8568632924585982
Accuracy : 0.7893333333333333
diabetes_model version: 3
Training context : Using Datastore
AUC : 0.846851712258014
Accuracy : 0.7788888888888889
diabetes_model version: 2
Training context : Parameterized SKLearn Estimator
AUC : 0.8483904671874223
Accuracy : 0.7736666666666666
diabetes_model version: 1
Training context : Estimator
AUC : 0.8483377282451863
Accuracy : 0.774
amlstudio-lab02b-predict-diabe version: 1
CreatedByAMLStudio : true
###Markdown
Train a Model from a File DatasetYou've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?When you're using a file dataset, the dataset input passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.Run the following two code cells to create:1. A folder named **diabetes_training_from_file_dataset**2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Workspace, Dataset, Experiment, Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['diabetes'] # Get the training data from the estimator input
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files))
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_from_file_dataset/diabetes_training.py
###Markdown
Next we need to change the way we pass the dataset to the estimator - it needs to define a mount point from which the script can read the files. For large volumes of data, you'd generally use the **as_mount** method to stream the files directly from the dataset source; but when running on local compute (as we are in this example), you need to use the **as_download** option to download the dataset files to a local folder.Also, since the **Dataset** class is defined in the **azureml-dataprep** package, we need to include that in the experiment environment.
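For comparison, the mount-based variant of the same estimator input would look like this (a sketch only; as noted above, mounting is generally for remote compute rather than the local run used here):

```python
# Stream the files from the datastore instead of downloading them to the compute
inputs = [diabetes_ds.as_named_input('diabetes').as_mount()]
```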
###Code
from azureml.train.sklearn import SKLearn
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")
# Create an estimator
estimator = SKLearn(source_directory=experiment_folder,
entry_script='diabetes_training.py',
script_params=script_params,
compute_target = 'local',
inputs=[diabetes_ds.as_named_input('diabetes').as_download(path_on_compute='diabetes_data')], # Pass the Dataset object as an input
pip_packages=['azureml-dataprep[pandas]'] # so we need the dataprep package
)
# Create an experiment
experiment_name = 'diabetes-training'
experiment = Experiment(workspace = ws, name = experiment_name)
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the file dataset was processed and the data files downloaded.As with all experiments, you can view the details of the experiment run in [Azure ML Studio](https://ml.azure.com), and you can write code to retrieve the metrics and files generated:
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7788888888888889
AUC 0.846851712258014
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/8_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/dataprep/engine_spans_l_849f00d8-4045-4f8c-8e1d-8c878b556946.jsonl
logs/azureml/dataprep/python_span_l_849f00d8-4045-4f8c-8e1d-8c878b556946.jsonl
outputs/diabetes_model.pkl
###Markdown
Once again, let's register the model that we trained.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'SKLearn Estimator (file dataset)'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 5
Training context : SKLearn Estimator (file dataset)
AUC : 0.846851712258014
Accuracy : 0.7788888888888889
diabetes_model version: 4
Training context : SKLearn Estimator (tabular dataset)
AUC : 0.8568632924585982
Accuracy : 0.7893333333333333
diabetes_model version: 3
Training context : Using Datastore
AUC : 0.846851712258014
Accuracy : 0.7788888888888889
diabetes_model version: 2
Training context : Parameterized SKLearn Estimator
AUC : 0.8483904671874223
Accuracy : 0.7736666666666666
diabetes_model version: 1
Training context : Estimator
AUC : 0.8483377282451863
Accuracy : 0.774
amlstudio-lab02b-predict-diabe version: 1
CreatedByAMLStudio : true
###Markdown
Work with DataData is the foundation on which machine learning models are built. Managing data centrally in the cloud and making it accessible to teams of data scientists who are running experiments and training models on multiple workstations and compute targets is important in a professional data science solution.In this notebook, you'll learn about two Azure Machine Learning objects for working with data: *datastores* and *datasets*. Install the Azure Machine Learning SDKThe Azure Machine Learning SDK is updated frequently. Run the cell below to upgrade to the latest release, along with the additional package to support notebook widgets.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
_____no_output_____
###Markdown
Connect to your workspaceWith the latest version of the SDK installed, you can now connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to sign in to Azure by clicking a link, entering an authentication code, and authenticating.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Work with datasetsAzure Machine Learning provides an abstraction for data in the form of *datasets*. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be either *tabular* or *file*-based. Upload data to a datastoreMost datasets are based on data in a datastore, so let's upload some data on which to base our datasets.
###Code
# Get the default datastore
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
                        target_path='diabetes-data/', # Put it in a folder path in the datastore
                        overwrite=True, # Replace existing files of the same name
show_progress=True)
###Output
_____no_output_____
###Markdown
Create a tabular datasetLet's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *tabular* dataset.
###Code
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common Python techniques. Create a file datasetThe dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured, or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point that you can use to read the data in the files.
###Code
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
print(file_path)
###Output
_____no_output_____
###Markdown
Register datasetsNow that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes file dataset**.
###Code
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
# Register the file dataset
try:
file_data_set = file_data_set.register(workspace=ws,
name='diabetes file dataset',
description='diabetes files',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
print('Datasets registered')
###Output
_____no_output_____
###Markdown
You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning Studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
###Code
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
dataset = Dataset.get_by_name(ws, dataset_name)
print("\t", dataset.name, 'version', dataset.version)
###Output
_____no_output_____
###Markdown
Because datasets can be versioned, you can redefine a dataset without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:```pythondataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)``` Train a model from a tabular datasetNow that you have datasets, you're ready to start training models from them. Datasets can be passed to scripts as *inputs* when the script is run.Run the following two code cells to create:1. A folder named **diabetes_training_from_tab_dataset**2. A script that trains a classification model by using a tabular dataset that is passed to it as an argument.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Run, Dataset
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Get script arguments (regularization rate and training dataset ID)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Get the training dataset
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# Files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
> **Note**: In the script, the dataset is passed as a parameter (or argument). For a tabular dataset, this argument contains the ID of the registered dataset, so you could write code in the script to get the experiment's workspace from the run context and then get the dataset using its ID, like this:>> ```> run = Run.get_context()> ws = run.experiment.workspace> dataset = Dataset.get_by_id(ws, id=args.training_dataset_id)> diabetes = dataset.to_pandas_dataframe()> ```>> However, Azure Machine Learning runs automatically identify arguments that reference named datasets and add them to the run's **input_datasets** collection, so you can also retrieve the dataset from this collection by specifying its *friendly name* (which, as you'll see shortly, is specified in the argument definition in the script run configuration for the experiment). This is the approach used in the script above.Now you can run the script as an experiment, defining an argument for the training dataset, which is read by the script.> **Note**: The **Dataset** class depends on some components in the **azureml-dataprep** package, which includes optional support for **pandas** (used by the **to_pandas_dataframe()** method), so you need to include this package in the environment where the training experiment will be run.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Create a Python environment for the experiment
sklearn_env = Environment("sklearn-env")
# Ensure the required packages are installed (we need scikit-learn, Azure ML defaults, and Azure ML dataprep)
packages = CondaDependencies.create(conda_packages=['scikit-learn','pip'],
pip_packages=['azureml-defaults','azureml-dataprep[pandas]'])
sklearn_env.python.conda_dependencies = packages
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
                                             '--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=sklearn_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
> **Note:** The **--input-data** argument passes the dataset as a *named input* that includes a *friendly name* for the dataset, which is used by the script to read it from the **input_datasets** collection in the experiment run. The string value in the **--input-data** argument is actually the registered dataset's ID. As an alternative approach, you could simply pass `diabetes_ds.id`, in which case the script can access the dataset ID from the script arguments and use it to get the dataset from the workspace, but not from the **input_datasets** collection.The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run. Register the trained modelAs with any training experiment, you can retrieve the trained model and register it in your Azure Machine Learning workspace.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Tabular dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Train a model from a file datasetYou've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?When you're using a file dataset, the dataset argument passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.Run the following two code cells to create:1. A folder named **diabetes_training_from_file_dataset**2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Dataset, Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob
# Get script arguments (regularization rate and file dataset mount point)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument('--input-data', type=str, dest='dataset_folder', help='data mount point')
args = parser.parse_args()
# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['training_files'] # Get the training data path from the input
# (You could also just use args.dataset_folder if you don't want to rely on a hard-coded friendly name)
# Read the files
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files), sort=False)
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# Files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Just as with tabular datasets, you can retrieve a file dataset from the **input_datasets** collection by using its friendly name. You can also retrieve it from the script argument, which in the case of a file dataset contains a mount path to the files (rather than the dataset ID passed for a tabular dataset).Next we need to change the way we pass the dataset to the script - it needs to define a path from which the script can read the files. You can use either the **as_download** or **as_mount** method to do this. Using **as_download** causes the files in the file dataset to be downloaded to a temporary location on the compute where the script is being run, while **as_mount** creates a mount point from which the files can be streamed directly from the datastore.You can combine the access method with the **as_named_input** method to include the dataset in the **input_datasets** collection in the experiment run (if you omit this, for example by setting the argument to `diabetes_ds.as_mount()`, the script will be able to access the dataset mount point from the script arguments, but not from the **input_datasets** collection).
###Code
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
                                             '--input-data', diabetes_ds.as_named_input('training_files').as_download()], # Reference to dataset location
                                environment=sklearn_env) # Use the environment created previously
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the files in the file dataset were downloaded to a temporary folder to enable the script to read the files. Register the trained modelOnce again, you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'File dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Working with DatasetsIn the previous labs, you used a *datastore* to provide centralized, cloud-based data access. In this lab, you'll explore *datasets*, a further abstraction that makes it easier to work with specific data for experiments and training. Connect to Your WorkspaceThe first thing you need to do is to connect to your workspace using the Azure ML SDK.> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare DataIn the previous lab, you created a datastore. Datasets are usually (though not always) based on data in datastores.If you did not complete the previous lab, run the following code to upload two local CSV files to the default datastore in your workspace (if you *did* complete the previous lab, this will just overwrite the same files).
###Code
ws.get_default_datastore().upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
###Output
_____no_output_____
###Markdown
Create a Tabular DatasetA dataset is an object that encapsulates a specific data source. Let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *Tabular* dataset.
###Code
from azureml.core import Dataset
# Get the default datastore
default_ds = ws.get_default_datastore()
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common python techniques. Create a File DatasetThe dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a *file* dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files.
###Code
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))
# Get the files in the dataset
for file_path in file_data_set.to_path():
print(file_path)
###Output
_____no_output_____
###Markdown
Register DatasetsNow that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes file dataset**.
###Code
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
# Register the file dataset
try:
file_data_set = file_data_set.register(workspace=ws,
name='diabetes file dataset',
description='diabetes files',
tags = {'format':'CSV'},
create_new_version=True)
except Exception as ex:
print(ex)
print('Datasets registered')
###Output
_____no_output_____
###Markdown
You can view and manage datasets on the **Datasets** page for your workspace in [Azure ML Studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:
###Code
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
dataset = Dataset.get_by_name(ws, dataset_name)
print("\t", dataset.name, 'version', dataset.version)
###Output
_____no_output_____
###Markdown
If you completed Labs 2A and 2B, you will see that registered datasets include transformations created using the visual Designer tool. You may also notice that in registering **diabetes dataset** with the same name as the dataset you created using the *Studio* interface in a previous exercise, you are creating a new *version* of the dataset. The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:```pythondataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)``` Train a Model from a Tabular DatasetNow that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as *inputs* in the estimator being used to run the script.Run the following two code cells to create:1. A folder named **diabetes_training_from_tab_dataset**2. A script that trains a classification model by using a tabular dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['diabetes'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can create an estimator to run the script, and define a named *input* for the training dataset, which is read by the script.> **Note**: The **Dataset** class is defined in the **azureml-dataprep** package (which is installed with the SDK), and this package includes optional support for **pandas** (which is used by the **to_pandas_dataframe()** method), so you need to include this package in the environment where the training experiment will be run.
###Code
from azureml.train.sklearn import SKLearn
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an estimator
estimator = SKLearn(source_directory=experiment_folder,
entry_script='diabetes_training.py',
script_params=script_params,
compute_target = 'local',
inputs=[diabetes_ds.as_named_input('diabetes')], # Pass the Dataset object as an input...
pip_packages=['azureml-dataprep[pandas]'] # ...so you need the dataprep package
)
# Create an experiment
experiment_name = 'diabetes-training'
experiment = Experiment(workspace = ws, name = experiment_name)
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.As with all experiments, you can view the details of the experiment run in [Azure ML Studio](https://ml.azure.com), and you can write code to retrieve the metrics and files generated:
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
The model we trained is saved as the **diabetes_model.pkl** file in the **outputs** folder, so you can register it.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'SKLearn Estimator (tabular dataset)'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Train a Model from a File DatasetYou've seen how to train a model using training data in a *tabular* dataset; but what about a *file* dataset?When you're using a file dataset, the dataset input passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.Run the following two code cells to create:1. A folder named **diabetes_training_from_file_dataset**2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Workspace, Dataset, Experiment, Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['diabetes'] # Get the training data from the estimator input
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files))
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Next we need to change the way we pass the dataset to the estimator - it needs to define a mount point from which the script can read the files. For large volumes of data, you'd generally use the **as_mount** method to stream the files directly from the dataset source; but when running on local compute (as we are in this example), you need to use the **as_download** option to download the dataset files to a local folder.Also, since the **Dataset** class is defined in the **azureml-dataprep** package, we need to include that in the experiment environment.
###Code
from azureml.train.sklearn import SKLearn
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")
# Create an estimator
estimator = SKLearn(source_directory=experiment_folder,
entry_script='diabetes_training.py',
script_params=script_params,
compute_target = 'local',
inputs=[diabetes_ds.as_named_input('diabetes').as_download(path_on_compute='diabetes_data')], # Pass the Dataset object as an input
pip_packages=['azureml-dataprep[pandas]'] # so we need the dataprep package
)
# Create an experiment
experiment_name = 'diabetes-training'
experiment = Experiment(workspace = ws, name = experiment_name)
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the file dataset was processed and the data files downloaded.As with all experiments, you can view the details of the experiment run in [Azure ML Studio](https://ml.azure.com), and you can write code to retrieve the metrics and files generated:
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Once again, let's register the model that we trained.
###Code
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'SKLearn Estimator (file dataset)'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____ |
nbs/02-02-recherche-documents/02-02-A1-solution.ipynb | ###Markdown
**420-A58-SF - Unsupervised Learning Algorithms - Summer 2021 - Technical Specialization in Artificial Intelligence**MIT License - Copyright (c) 2021 Mikaël Swawola![Practical Work - Document Search](static/02-02-A1-banner.png)**Objective: When exploring a dataset made up of text documents - such as Wikipedia pages, news articles, StackOverflow posts, etc. - it is common to want to find which documents are similar. The goal of this exercise is to apply search techniques suited to this type of data (here, nearest neighbors). The documents used are the Wikipedia pages of notable people.**
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import pandas as pd
# The remaining modules will be imported as we go through the exercises ...
###Output
_____no_output_____
###Markdown
The `people.zip` archive contains 4 files:* **people_wiki.csv**: dataset made up of the Wikipedia pages of notable people* **people_wiki_map_index_to_word.json**: mapping between words and indices* **people_wiki_word_count.npz**: word count vectors (bags of words) for each document* **people_wiki_tf_idf.npz**: TF-IDF vectors for each documentIn this lab, the words "article" and "document" are interchangeable. 1 - Loading the dataset **Exercise 1-1 - Using the Pandas library, read the data file `people/people_wiki.csv`. To allow the `join`-type operations performed later in the lab, name the dataframe index `id`**
###Code
# Complete this cell ~ 2 lines of code
wiki = pd.read_csv('../../data/people/people_wiki.csv')
wiki.index.name = 'id'
###Output
_____no_output_____
###Markdown
**Exercise 1-2 - Display the first 5 rows of the dataframe. What information do the columns contain?**
###Code
# Complete this cell ~ 1 line of code
wiki.head()
###Output
_____no_output_____
###Markdown
2 - Extracting the word counts The word-count (**word count**) vectors for the dataset have already been extracted into the file `people/people_wiki_word_count.npz`. These vectors are gathered in a sparse matrix, where the i-th row gives the word-count vector for the i-th document. Each column corresponds to a unique word appearing in the dataset. The mapping between words and indices is given in `people/people_wiki_map_index_to_word.json`. The following function loads the word-count vectors:
###Code
from scipy.sparse import csr_matrix
def load_sparse_csr(filename):
loader = np.load(filename)
data = loader['data']
indices = loader['indices']
indptr = loader['indptr']
shape = loader['shape']
return csr_matrix( (data, indices, indptr), shape)
###Output
_____no_output_____
###Markdown
The function above uses `csr_matrix` from the SciPy library: [class scipy.sparse.csr_matrix(arg1, shape=None, dtype=None, copy=False)](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html) **Exercise 2-1 - Using the function above, load the file containing the word-count vectors**
###Code
# Complete this cell ~ 1 line of code
from scipy.sparse import csr_matrix
word_count = load_sparse_csr('../../data/people/people_wiki_word_count.npz')
###Output
_____no_output_____
###Markdown
**Exercise 2-2 - Referring to the documentation of `csr_matrix`, convert the `word_count` matrix to a NumPy array. What do you observe?**
###Code
# Complete this cell ~ 1 line of code
word_count
59071*547979
word_count.toarray()
###Output
_____no_output_____
###Markdown
**Exercise 2-3 - Using the json module or the Pandas library, load the file containing the mapping between words and indices. How many words are in the dictionary?**
###Code
# Complete this cell ~ 2-3 lines of code
import json
with open('../../data/people/people_wiki_map_index_to_word.json') as f:
map_index_to_word = json.load(f)
len(map_index_to_word)
###Output
_____no_output_____
###Markdown
**Exercise 2-4 (optional) - Extract the word-count vectors yourself. A good starting point is `sklearn.CountVectorizer`**
###Code
# Complete this cell
###Output
_____no_output_____
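###Markdown
One possible way to approach the optional exercise, sketched below: this assumes the raw page text lives in the `text` column of `wiki` (an assumption, not checked above) and simply re-derives a word-count matrix with scikit-learn's `CountVectorizer`; the resulting vocabulary ordering will not necessarily match `people_wiki_map_index_to_word.json`.
###Code
# Hedged sketch for exercise 2-4 (not part of the original solution)
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer()                     # default tokenisation; the precomputed file may use different rules
word_count_diy = vectorizer.fit_transform(wiki['text'])  # sparse (n_documents x n_words) matrix
word_count_diy
###Output
_____no_output_____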
###Markdown
3 - Nearest-neighbour search with the word-count representation Let's start by finding the nearest neighbours of **Barack Obama**'s Wikipedia page. The word-count (**word count**) vectors will be used to represent the articles, and the **Euclidean distance** to measure similarity. [class sklearn.neighbors.NearestNeighbors(*, n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=None)](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html#sklearn.neighbors.NearestNeighbors) **Exercise 3-1 - What is the id corresponding to Barack Obama's Wikipedia page?**
###Code
# Complete this cell ~ 1 line of code
wiki['name'] == 'Barack Obama'
#wiki[wiki['name'] == 'Barack Obama']
###Output
_____no_output_____
###Markdown
**Exercise 3-2 - Using scikit-learn, find the 10 Wikipedia pages most similar to Barack Obama's page. Display the distances and the names in a single dataframe**
###Code
# Complete this cell ~ 5-6 lines of code
from sklearn.neighbors import NearestNeighbors
model = NearestNeighbors(metric='euclidean', algorithm='brute').fit(word_count)
distances, indices = model.kneighbors(word_count[35817], n_neighbors=10)
indices
neighbors = pd.DataFrame({'distance':distances.flatten(), 'id':indices.flatten()}).set_index('id')
wiki.join(neighbors, on='id', how="right").sort_values(by='distance')[['name','distance']]
###Output
_____no_output_____
###Markdown
**Exercise 3-3 - Interpret the results above**
###Code
# Complete this cell
###Output
_____no_output_____
###Markdown
All 10 people are politicians, but roughly half of them have fairly tenuous ties to Obama beyond the fact that they are politicians.
* Francisco Barrio is a Mexican politician and former governor of Chihuahua.
* Walter Mondale and Don Bonker are Democrats whose careers peaked in the late 1970s.
* Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
* Andy Anstett is a former politician in Manitoba, Canada.
**Exercise 3-4 - Display the most frequent words of the Barack Obama and Francisco Barrio pages** To make it easy to spot the most important words, the following function, which builds the `word_count` column, is provided.
###Code
def unpack_dict(matrix, map_index_to_word):
table = sorted(map_index_to_word, key=map_index_to_word.get)
data = matrix.data
indices = matrix.indices
indptr = matrix.indptr
num_doc = matrix.shape[0]
return [{k:v for k,v in zip([table[word_id] for word_id in indices[indptr[i]:indptr[i+1]] ],
data[indptr[i]:indptr[i+1]].tolist())} for i in range(num_doc) ]
# Complete this cell ~ 2 lines of code
wiki['word_count'] = unpack_dict(word_count, map_index_to_word)
wiki['word_count']
###Output
_____no_output_____
###Markdown
**Exercise 3-5 - Create a function `top_words` that displays the most frequent words of a given page**
###Code
# Complete this cell ~ 10 lines of code
def top_words(name):
"""
    Return the table of the most frequent words for a given Wikipedia page in the dataset.
"""
row = wiki[wiki['name'] == name]
word_count_df = pd.DataFrame(row['word_count'].apply(pd.Series).stack(), columns=["count"]).droplevel(0)
word_count_df.index.name = 'word'
return word_count_df.sort_values(by='count', ascending=False)
obama_words = top_words('Barack Obama')
barrio_words = top_words('Francisco Barrio')
combined_words = obama_words.join(barrio_words, on='word', how="inner", lsuffix='_obama', rsuffix='_barrio')
combined_words.head(10)
###Output
_____no_output_____
###Markdown
4 - Nearest-neighbour search with the TF-IDF representation **Exercise 4 - Repeat the steps of the Part 3 exercises, this time using the TF-IDF representation. Compare with the results obtained with the word-count representation**
###Code
# Complete this cell ~ 14-20 lines of code
# Load the TF-IDF representations
tf_idf = load_sparse_csr('../../data/people/people_wiki_tf_idf.npz')
# Find the 10 nearest neighbours
model_tf_idf = NearestNeighbors(metric='euclidean', algorithm='brute').fit(tf_idf)
distances, indices = model_tf_idf.kneighbors(tf_idf[35817], n_neighbors=10)
# Build the results dataframe
neighbors = pd.DataFrame({'distance':distances.flatten(), 'id':indices.flatten()}).set_index('id')
wiki.join(neighbors, on='id', how='right').sort_values(by='distance')[['name','distance']]
# Display the most significant words of the two pages
wiki['tf_idf'] = unpack_dict(tf_idf, map_index_to_word)
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
tf_idf_df = pd.DataFrame(row['tf_idf'].apply(pd.Series).stack(), columns=["weight"]).droplevel(0)
tf_idf_df.index.name = 'word'
return tf_idf_df.sort_values(by='weight', ascending=False)
obama_words = top_words_tf_idf('Barack Obama')
barrio_words = top_words_tf_idf('Francisco Barrio')
combined_words = obama_words.join(barrio_words, on='word', how="inner", lsuffix='_obama', rsuffix='_barrio')
combined_words.head(10)
###Output
_____no_output_____ |
CS/CS_Unit_1_S3.ipynb | ###Markdown
Mod 1 balancedBinaryTree
You are given a binary tree and you need to write a function that can determine if it is height-balanced.
A height-balanced tree can be defined as a binary tree in which the left and right subtrees of every node differ in height by a maximum of 1.
Example 1:
Given the following tree [5,10,25,None,None,12,3]:
5
/ \
10 25
/ \
12 3
return True.
Example 2:
Given the following tree [5,6,6,7,7,None,None,8,8]:
5
/ \
6 6
/ \
7 7
/ \
8 8
return False.
[execution time limit] 4 seconds (py3)
[input] tree.integer root
[output] boolean
###Code
# Binary trees are already defined with this interface:
# class Tree(object):
# def __init__(self, x):
# self.value = x
# self.left = None
# self.right = None
def depth(root):
if not root:
return 0
return max(depth(root.left), depth(root.right)) + 1
def balancedBinaryTree(root):
if not root:
return True
left_depth = depth(root.left)
right_depth = depth(root.right)
return (abs(left_depth - right_depth) <= 1) and balancedBinaryTree(root.left) and balancedBinaryTree(root.right)
###Output
_____no_output_____
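###Markdown
A quick sanity check of the function above, using a minimal stand-in for the `Tree` class described in the commented interface (this cell is illustrative and not part of the original solution):
###Code
class Tree(object):  # minimal stand-in matching the interface assumed above
    def __init__(self, x):
        self.value = x
        self.left = None
        self.right = None

# Example 1 from the problem statement: [5,10,25,None,None,12,3] -> balanced
root = Tree(5)
root.left, root.right = Tree(10), Tree(25)
root.right.left, root.right.right = Tree(12), Tree(3)
print(balancedBinaryTree(root))  # expected: True
###Output
_____no_output_____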
###Markdown
minimumDepthBinaryTree
You are given a binary tree and you are asked to write a function that finds its minimum depth. The minimum depth can be defined as the number of nodes along the shortest path from the root down to the nearest leaf node. As a reminder, a leaf node is a node with no children.
Example:
Given the binary tree [5,7,22,None,None,17,9],
5
/ \
7 22
/ \
17 9
your function should return its minimum depth = 2.
[execution time limit] 4 seconds (py3)
[input] tree.integer root
[output] integer
###Code
# Binary trees are already defined with this interface:
# class Tree(object):
# def __init__(self, x):
# self.value = x
# self.left = None
# self.right = None
def minimumDepthBinaryTree(root):
if root is None:
return 0
if not root.left and not root.right:
return 1
if not root.left: #call again on right
return minimumDepthBinaryTree(root.right)+1
if not root.right: #call again on left
        return minimumDepthBinaryTree(root.left) + 1
# if none of these are true anymore, exit recursion & return the minimum
return min(minimumDepthBinaryTree(root.left), minimumDepthBinaryTree(root.right))+1
###Output
_____no_output_____
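###Markdown
Illustrative check for the example in the prompt, reusing the minimal `Tree` class defined above (not part of the original solution):
###Code
root = Tree(5)
root.left, root.right = Tree(7), Tree(22)
root.right.left, root.right.right = Tree(17), Tree(9)
print(minimumDepthBinaryTree(root))  # expected: 2 (shortest root-to-leaf path is 5 -> 7)
###Output
_____no_output_____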
###Markdown
Mod 2
traverseTree
Given a binary tree of integers t, return its node values in the following format:
The first element should be the value of the tree root;
The next elements should be the values of the nodes at height 1 (i.e. the root children), ordered from the leftmost to the rightmost one;
The elements after that should be the values of the nodes at height 2 (i.e. the children of the nodes at height 1) ordered in the same way;
Etc.
Example
For
t = {
"value": 1,
"left": {
"value": 2,
"left": null,
"right": {
"value": 3,
"left": null,
"right": null
}
},
"right": {
"value": 4,
"left": {
"value": 5,
"left": null,
"right": null
},
"right": null
}
}
the output should be
traverseTree(t) = [1, 2, 4, 3, 5].
This t looks like this:
1
/ \
2 4
\ /
3 5
###Code
def traverseTree(t):
if not t:
return []
result = []
queue = []
queue.append(t)
while len(queue) != 0:
node = queue.pop(0)
result.append(node.value)
if node.left:
queue.append(node.left)
if node.right:
queue.append(node.right)
return result
###Output
_____no_output_____
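###Markdown
The list-based queue above works, but `queue.pop(0)` is O(n) per pop; a `collections.deque` gives O(1) pops from the left. A sketch of the same level-order traversal with that change (illustrative, not part of the original solution):
###Code
from collections import deque

def traverseTree_deque(t):
    if not t:
        return []
    result, queue = [], deque([t])
    while queue:
        node = queue.popleft()        # O(1) instead of list.pop(0)
        result.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return result
###Output
_____no_output_____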
###Markdown
binaryTreeInOrderTraversal
You are given a binary tree. Write a function that returns the binary tree's node values using an in-order traversal.
Example:
Input: [2,None,3,4]
2
\
3
/
4
Output: [2,4,3]
###Code
def helper(root, res):
if not root:
return
helper(root.left, res)
res.append(root.value)
helper(root.right, res)
def binaryTreeInOrderTraversal(root):
result = []
helper(root, result)
return result
###Output
_____no_output_____
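###Markdown
For reference, the same in-order traversal can be written iteratively with an explicit stack, which avoids deep recursion on heavily skewed trees (illustrative sketch, not part of the original solution):
###Code
def binaryTreeInOrderTraversal_iterative(root):
    result, stack, node = [], [], root
    while node or stack:
        while node:               # walk as far left as possible
            stack.append(node)
            node = node.left
        node = stack.pop()        # visit the node
        result.append(node.value)
        node = node.right         # then traverse its right subtree
    return result
###Output
_____no_output_____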
###Markdown
treePaths
Given a binary tree of integers, return all the paths from the tree's root to its leaves as an array of strings. The strings should have the following format:
"root->node1->node2->...->noden", representing the path from root to noden, where root is the value stored in the root and node1,node2,...,noden are the values stored in the 1st, 2nd,..., and nth nodes in the path respectively (noden representing the leaf).
Example
For
t = {
"value": 5,
"left": {
"value": 2,
"left": {
"value": 10,
"left": null,
"right": null
},
"right": {
"value": 4,
"left": null,
"right": null
}
},
"right": {
"value": -3,
"left": null,
"right": null
}
}
The given tree looks like this:
5
/ \
2 -3
/ \
10 4
###Code
# Binary trees are already defined with this interface:
# class Tree(object):
# def __init__(self, x):
# self.value = x
# self.left = None
# self.right = None
def treePaths(t):
# list to store path
path = []
result = []
getPath(t, path, 0, result)
return result
def getPath(t, path, pathLen, result):
if t is None:
return
if(len(path) > pathLen):
# replace element in list
path[pathLen] = t.value
else:
#add to end of list
path.append(t.value)
pathLen = pathLen + 1
if t.left is None and t.right is None:
addString(path, result, pathLen)
else:
getPath(t.left, path, pathLen, result)
getPath(t.right, path, pathLen, result)
def addString(ints,res, pathLen):
s = ""
for i in range(pathLen):
s+=(str(ints[i])+"->")
res.append(s[:-2])
return res
###Output
_____no_output_____
###Markdown
Lecture
Recursive Max Depth
###Code
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, val=0, left=None, right=None):
# self.val = val
# self.left = left
# self.right = right
### https://leetcode.com/problems/maximum-depth-of-binary-tree/
class Solution:
def maxDepth(self, root: TreeNode) -> int:
self.maxDepth = 0
self.maxDepthHelper(root, 1)
return self.maxDepth
def maxDepthHelper(self, root, currDepth):
if root.left == None and root.right == None:
if currDepth > self.maxDepth:
self.maxDepth = currDepth
return
self.maxDepthHelper(root.left, currDepth+1)
self.maxDepthHelper(root.right, currDepth+1)
###Output
_____no_output_____
###Markdown
Iterative Max Depth
###Code
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, val=0, left=None, right=None):
# self.val = val
# self.left = left
# self.right = right
from collections import deque
class Solution:
def maxDepth(self, root: TreeNode) -> int:
if root == None:
return 0
stack = deque()
stack.append((root, 1))
maxDepthFound = 1
while len(stack) > 0:
curr = stack.pop()
currNode, currDepth = curr[0], curr[1]
if currNode.left == None and currNode.right == None:
if currDepth > maxDepthFound:
maxDepthFound = currDepth
if currNode.left != None:
stack.append((currNode.left, currDepth + 1))
if currNode.right != None:
stack.append((currNode.right, currDepth + 1))
return maxDepthFound
###Output
_____no_output_____
###Markdown
Mod 3
Graph Class code
###Code
class Vertex:
def __init__(self, value):
self.value = value
self.connections = {}
def __str__(self):
return str(self.value) + ' connections: '+str([x.value for x in self.connections])
def add_connection(self, vert, weight = 0):
self.connections[vert] = weight
def get_connections(self):
return self.connections.keys()
def get_value(self):
return self.value
def get_weight(self, vert):
return self.connections[vert]
class Graph:
def __init__(self):
self.vertices = {}
self.count = 0
def __contains__(self, vert):
return vert in self.vertices
def __iter__(self):
return iter(self.vertices.values())
def add_vertex(self, value):
self.count += 1
new_vert = Vertex(value)
self.vertices[value] = new_vert
return new_vert
def add_edge(self, v1, v2, weight = 0):
if v1 not in self.vertices:
self.add_vertex(v1)
if v2 not in self.vertices:
self.add_vertex(v2)
self.vertices[v1].add_connection(self.vertices[v2], weight)
def get_vertices(self):
return self.vertices.keys()
if 5 not in [1,3]:
print('not')
else : print('in')
###Output
not
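###Markdown
Small usage example for the `Graph` class above, building the DAG from the next exercise ([[1, 2],[3],[3],[4],[]]) so the adjacency structure is easy to inspect (illustrative, not part of the original notes):
###Code
g = Graph()
for a, bs in enumerate([[1, 2], [3], [3], [4], []]):
    for b in bs:
        g.add_edge(a, b)   # add_edge creates any missing vertices automatically

for v in g:
    print(v)
###Output
_____no_output_____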
###Markdown
You are given a directed acyclic graph (DAG) that contains N nodes.
Write a function that can find all the possible paths from node 0 to node N - 1.
graph[a] is a list of all nodes b for which the edge a -> b exists.
Example:
Input: graph = [[1, 2],[3],[3],[4],[]]
Output: [[0,1,3,4], [0,2,3,4]]
Note: The results must be returned in sorted order. You can use any built-in sort method on the results array at the end of your function before returning.
###Code
from collections import defaultdict, deque # defaultdict was my downfall.
import random # lessons learned, more practice
# needed with graphs and deque
# def append_value(dict_obj, key, value):
# # Check if key exist in dict or not
# if key in dict_obj:
# # Key exist in dict.
# # Check if type of value of key is list or not
# if not isinstance(dict_obj[key], list):
# # If type is not list then make it list
# dict_obj[key] = [dict_obj[key]]
# # Append the value in list
# dict_obj[key].append(value)
# else:
# # As key is not in dict,
# # so, add key-value pair
# dict_obj[key] = value
# def convert(a):
# '''
# converts a (oddly formatted) graph into an adjancency matrix
# '''
# adjList = defaultdict(set)
# for i in range(len(a)):
# for j in a[i]:
# adjList[j].add(i)
# return adjList
# visited = set()
# '''
# initializes global var visited
# '''
# def dftRecursive(start, graph, result):
# visited.add(start)
# for path in start:
# if path not in visited:
# dftRecursive(path, graph, result)
# result.append(visited)
# def csFindAllPathsFromAToB(graph):
# result = []
# aList = convert(graph)
# start = aList[0]
# dftRecursive(start, aList, result)
# return result
def csFindAllPathsFromAToB(graph):
stack = deque()
stack.append((0, [0])) #Starts stack with the starting node and no path
res = []
destinationNode = len(graph) - 1 #the index of the last element
while len(stack) > 0:
curr = stack.pop() #rmoves/assigns most recent node added to curr
currNode = curr[0] #assigns 1st element in node (value)
currPath = curr[1] #assigns 2nd element in node (path)
for neighbor in graph[currNode]: #iterates over list of neighboring nodes
newPath = currPath.copy() # makes a copy of the path so additional
newPath.append(neighbor) # neighbors can be added for each path
# while not changing the path for the other
# neighbors... so [0,1,2] can become
# [0,1,2,3] or [0,1,2,4] for the next
# iteration
if neighbor == destinationNode: # when reaching the emd
res.append(newPath) #add path constructed to resulting array
else:
stack.append((neighbor, newPath)) # continue looping by pushing new
# path additions and neighbor
# value to the stack
res.sort()
return res
###Output
_____no_output_____
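###Markdown
Quick check of `csFindAllPathsFromAToB` against the example in the prompt (illustrative, not part of the original solution):
###Code
print(csFindAllPathsFromAToB([[1, 2], [3], [3], [4], []]))
# expected: [[0, 1, 3, 4], [0, 2, 3, 4]]
###Output
_____no_output_____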
###Markdown
Mod 4 csFriendCircles
There are N students in a baking class together. Some of them are friends, while some are not friends. The students' friendship can be considered transitive. This means that if Ami is a direct friend of Bill, and Bill is a direct friend of Casey, Ami is an indirect friend of Casey. A friend circle is a group of students who are either direct or indirect friends.
Given a N*N matrix M representing the friend relationships between students in the class. If M[i][j] = 1, then the ith and jth students are direct friends with each other, otherwise not.
You need to write a function that can output the total number of friend circles among all the students.
Example 1:
Input:
[[1,1,0],
[1,1,0],
[0,0,1]]
Output: 2
Explanation: The 0th and 1st students are direct friends, so they are in a friend circle.
The 2nd student himself is in a friend circle. So return 2.
Input:
[[1,1,0],
[1,1,0],
[0,0,1]]
Output: 2
Explanation: The 0th and 1st students are direct friends, so they are in a friend circle.
The 2nd student himself is in a friend circle. So return 2.
Example 2:
Input:
[[1,1,0],
[1,1,1],
[0,1,1]]
Output: 1
Explanation: The 0th and 1st students are direct friends, the 1st and 2nd students are direct friends,
so the 0th and 2nd students are indirect friends. All of them are in the same friend circle, so return 1.
working solution
###Code
def adjList(matrix):
cur = 0
res = []
adjList = []
for i in range(len(matrix)):
res.append([] * len(matrix))
#creates indexes to fill later with values instead of 1's
while cur < len(matrix):
for i in range(len(matrix[cur])):
if matrix[cur][i] == 1:
res[cur].append(i)
cur += 1
return res
def hlp(aList, n, visited, i):
    for x in aList[i]:
if x not in visited:
visited.append(x)
hlp(aList, n, visited, x)
def csFriendCircles(friendships):
n = len(friendships)
aList = adjList(friendships)
visited = []
res = 0
for i in range(n): # the outer loop ensures unconnected nodes get traversed
if i not in visited:
visited.append(i)
hlp(aList, n, visited, i)
res += 1
return res
#whew.. completed
###Output
_____no_output_____
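###Markdown
Checking the two examples from the prompt against the solution above (illustrative, not part of the original solution):
###Code
print(csFriendCircles([[1,1,0],
                       [1,1,0],
                       [0,0,1]]))  # expected: 2
print(csFriendCircles([[1,1,0],
                       [1,1,1],
                       [0,1,1]]))  # expected: 1
###Output
_____no_output_____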
###Markdown
without conversion to adjList
###Code
def hlp(friendships, visited, i):
for x in range(len(friendships[i])):
if x not in visited and friendships[i][x] == 1:
visited.append(x)
hlp(friendships, visited, x)
def csFriendCircles(friendships):
n = len(friendships)
visited = []
res = 0
for i in range(n):
if i not in visited:
visited.append(i)
hlp(friendships, visited, i)
res += 1
return res
sum([10, 5, 15])
###Output
_____no_output_____
###Code
# Dictionary of strings and ints
word_freq = {
"Hello": 56,
"at": 23,
"test": {43},
"this": 43
}
word_freq.update({'before': 23})
word_freq
word_freq.update({'test': 23})
word_freq
from collections import defaultdict, deque # defaultdict was my downfall.
import random # lessons learned, more practice
# needed with graphs and deque
# def append_value(dict_obj, key, value):
# # Check if key exist in dict or not
# if key in dict_obj:
# # Key exist in dict.
# # Check if type of value of key is list or not
# if not isinstance(dict_obj[key], list):
# # If type is not list then make it list
# dict_obj[key] = [dict_obj[key]]
# # Append the value in list
# dict_obj[key].append(value)
# else:
# # As key is not in dict,
# # so, add key-value pair
# dict_obj[key] = value
# def convert(a):
# '''
# converts a (oddly formatted) graph into an adjancency matrix
# '''
# adjList = defaultdict(set)
# for i in range(len(a)):
# for j in a[i]:
# adjList[j].add(i)
# return adjList
# visited = set()
# '''
# initializes global var visited
# '''
# def dftRecursive(start, graph, result):
# visited.add(start)
# for path in start:
# if path not in visited:
# dftRecursive(path, graph, result)
# result.append(visited)
# def csFindAllPathsFromAToB(graph):
# result = []
# aList = convert(graph)
# start = aList[0]
# dftRecursive(start, aList, result)
# return result
def csFindAllPathsFromAToB(graph):
stack = deque() #never used in an assignment, but I will have to get
stack.append((0, [0])) #2 Dummy node... also never occured to me to try
res = []
destinationNode = len(graph) - 1 #1 This part I would not have found alone
while len(stack) > 0:
curr = stack.pop()
currNode, currPath = curr[0], curr[1]
for neighbor in graph[currNode]:
newPath = currPath.copy()
newPath.append(neighbor)
if neighbor == destinationNode:
res.append(newPath)
else:
stack.append((neighbor, newPath))
res.sort()
return res
###Output
_____no_output_____ |
notebooks/Module3-Pyhton-Programming-Fundamentals/PY0101EN-3.1_notebook_quizz_Conditions_and_Branching.ipynb | ###Markdown
Comparison operations Find the value of `i` that produces a `True` result
###Code
i = 5
i!=0
###Output
_____no_output_____
###Markdown
Click here for the solution
```python
i = 1  # any value other than 0 will produce output as True
```
Branching Find the value of `x` that prints the statement: "this is a"
###Code
x = 'a'
if(x=='a'):
print("this is a")
else:
print("this is not a")
###Output
this is a
###Markdown
Click here for the solution
```python
x = 'a'
```
Logic Operators Find the value of `y` that produces a `True` statement
###Code
y = 8
x=1
x>0 and y<10
###Output
_____no_output_____ |
AGU_and_EM6/Electric Dipole WholeSpace.ipynb | ###Markdown
Compare against an electric dipole in a wholespace
###Code
from SimPEG.EM import Analytics
csx, ncx, npadx = 0.4, 20, 42
csz, ncz, npadz = 0.4, 6, 40
hx = Utils.meshTensor([(csx, ncx), (csx, npadx, 1.2)])
hz = Utils.meshTensor([(csz, npadz, -1.2), (csz, ncz), (csz, npadz, 1.2)])
mesh = Mesh.CylMesh([hx, 1., hz], x0='00C')
mesh.plotGrid()
src_ind = (
(mesh.gridFz[:,0] < csx) &
(mesh.gridFz[:,2] <= csz*3) &
(mesh.gridFz[:,2] >= -csz*3)
)
src_vecz = np.zeros(mesh.vnF[2], dtype=complex)
src_vecz[src_ind] = 1.
src_vec = np.hstack([
np.zeros(mesh.vnF[0], dtype=complex),
np.zeros(mesh.vnF[1], dtype=complex),
src_vecz
])
fig, ax = plt.subplots(1,1)
mesh.plotGrid(ax=ax)
ax.plot(mesh.gridFz[src_ind, 0], mesh.gridFz[src_ind, 2], 'rd')
ax.set_xlim([0., 5.])
ax.set_ylim([-20, 20.])
freq = 1.
# mesh.getFaceInnerProduct(invMat=True) * src_vec
# src_vec / mesh.area
# src = FDEM.Src.RawVec_e([], freq, mesh.getFaceInnerProduct(invMat=True) * src_vec)
src = FDEM.Src.RawVec_e([], freq, (src_vec / mesh.area))
prob = FDEM.Problem3D_h(mesh, sigmaMap=Maps.IdentityMap(mesh), mu=mu_0)
prob.solver = Solver
survey = FDEM.Survey([src])
prob.pair(survey)
sigma = 0.6
print('skin depth {}'.format(500/np.sqrt(sigma*freq)))
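# (added note) The 500/sqrt(sigma*f) rule of thumb above approximates the skin depth
# delta = sqrt(2 / (omega * mu_0 * sigma)) ~ 503.3 / sqrt(sigma * f) in metres (assuming SI units);
# the exact value is computed below for comparison, reusing the mu_0 already available above.
skin_depth_exact = np.sqrt(2. / (2. * np.pi * freq * mu_0 * sigma))
print('skin depth (exact) {}'.format(skin_depth_exact))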
fields = prob.fields(sigma*np.ones(mesh.nC))
plotCurrentDensity(mesh, fields[src, 'j'], xmax = 15., zmin=10, zmax=-10, csz=0.5, csx=0.5)
# pick a line and compare to electric dipole analytic
jx = fields[src, 'j'][:mesh.nFx].reshape(mesh.vnFx[0], mesh.vnFx[2], order='F')
jz = fields[src, 'j'][mesh.nFx:].reshape(mesh.vnFz[0], mesh.vnFz[2], order='F')
length = mesh.gridFz[src_ind,2]
length = length.max() - length.min() + mesh.hz.min()
# Look at Jz
x_ind = 40
XYZ = Utils.ndgrid([np.r_[mesh.vectorCCx[x_ind]], np.r_[1], mesh.vectorNz])
print(XYZ.shape)
# solve the analytic
jana_x, jana_y, jana_z = Analytics.E_from_ElectricDipoleWholeSpace(XYZ, np.r_[0., 0., 0.], sig=sigma, f=np.r_[freq], current=1., length=length, orientation='Z')
jana_x, jana_y, jana_z = sigma*jana_x, sigma*jana_y, sigma*jana_z,
# plt.plot()
fig, ax = plt.subplots(3, 1, figsize=(10,8))
ax[0].plot(mesh.vectorNz, jana_z.real)
ax[0].plot(mesh.vectorNz, jz[x_ind, :].real)
ax[0].legend(['ana', 'numeric'])
ax[1].plot(mesh.vectorNz, jz[x_ind, :].real - jana_z.real)
ax[2].plot(mesh.vectorNz, jz[x_ind, :].real / jana_z.real)
ax[2].set_ylim([0, 2])
print(np.linalg.norm(jz[x_ind, :].real - jana_z.real)/np.linalg.norm(jana_z.real))
print(np.linalg.norm(jz[x_ind, :].real)/np.linalg.norm(jana_z.real))
fig, ax = plt.subplots(3, 1, figsize=(10,8))
ax[0].plot(mesh.vectorNz, jana_z.imag)
ax[0].plot(mesh.vectorNz, jz[x_ind, :].imag)
ax[0].legend(['ana', 'numeric'])
ax[1].plot(mesh.vectorNz, jz[x_ind, :].imag - jana_z.imag)
ax[2].plot(mesh.vectorNz, jz[x_ind, :].imag / jana_z.imag)
ax[2].set_ylim([0, 2])
print(np.linalg.norm(jz[x_ind, :].imag - jana_z.imag)/np.linalg.norm(jana_z.imag))
print(np.linalg.norm(jz[x_ind, :].imag)/np.linalg.norm(jana_z.imag))
# Look at Jx
z_ind = 15
print(np.r_[mesh.vectorCCz[z_ind]])
XYZ = Utils.ndgrid([mesh.vectorNx, np.r_[1], np.r_[mesh.vectorCCz[z_ind]]])
print(XYZ.shape)
# solve the analytic
jana_x, jana_y, jana_z = Analytics.E_from_ElectricDipoleWholeSpace(
XYZ, np.r_[0., 0., 0.], sig=sigma, f=np.r_[freq], current=1., length=length, orientation='Z'
)
jana_x, jana_y, jana_z = sigma*jana_x, sigma*jana_y, sigma*jana_z,
# plt.plot()
fig, ax = plt.subplots(3, 1, figsize=(10,8))
ax[0].plot(mesh.vectorNx, jana_x.real)
ax[0].plot(mesh.vectorNx, jx[:, z_ind].real)
ax[0].legend(['ana', 'numeric'])
ax[1].plot(mesh.vectorNx, jx[:, z_ind].real - jana_x.real)
ax[2].plot(mesh.vectorNx, jx[:, z_ind].real / jana_x.real)
ax[2].set_ylim([0, 2])
print(np.linalg.norm(jx[:, z_ind].real - jana_x.real)/np.linalg.norm(jana_x.real))
print(np.linalg.norm(jx[:, z_ind].real)/np.linalg.norm(jana_x.real))
fig, ax = plt.subplots(3, 1, figsize=(10,8))
ax[0].plot(mesh.vectorNx, jana_x.imag)
ax[0].plot(mesh.vectorNx, jx[:, z_ind].imag)
ax[0].legend(['ana', 'numeric'])
ax[1].plot(mesh.vectorNx, jx[:, z_ind].imag - jana_x.imag)
ax[2].plot(mesh.vectorNx, jx[:, z_ind].imag / jana_x.imag)
ax[2].set_ylim([0, 2])
print(np.linalg.norm(jx[:, z_ind].imag - jana_x.imag)/np.linalg.norm(jana_x.imag))
print(np.linalg.norm(jx[:, z_ind].imag)/np.linalg.norm(jana_x.imag))
###Output
0.0958413824666
1.09544912566
|
nlp_sentiment-svm.ipynb | ###Markdown
Sentiment analysis with SVM(support vector machines)In this notebook, we will revisit a learning task that we encountered earlier in the course: predicting the *sentiment* (positive or negative) of a single sentence taken from a review of a movie, restaurant, or product. The data set consists of 3000 labeled sentences, which we divide into a training set of size 2500 and a test set of size 500. Previously we found a logistic regression classifier. Today we will use a support vector machine.Before starting on this notebook, make sure the folder `sentiment_labelled_sentences` (containing the data file `full_set.txt`) is in the same directory. Recall that the data can be downloaded from https://archive.ics.uci.edu/ml/datasets/Sentiment+Labelled+Sentences. 1. Loading and preprocessing the data Here we follow exactly the same steps as we did earlier.
###Code
%matplotlib inline
import string
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
# Find file
filename = 'full_set.txt'
!find . | grep '$filename'
###Output
_____no_output_____
###Markdown
Import data
###Code
## Read in the data set.
with open("../../data/sentiment_labelled_sentences/full_set.txt") as f:
content = f.readlines()
###Output
_____no_output_____
###Markdown
Split data and label
###Code
## Remove leading and trailing white space
content = [x.strip() for x in content]
## Separate the sentences from the labels
sentences = [x.split("\t")[0] for x in content]
labels = [x.split("\t")[1] for x in content]
## Transform the labels from '0 v.s. 1' to '-1 v.s. 1'
y = np.array(labels, dtype='int8')
y = 2*y - 1
###Output
_____no_output_____
###Markdown
Clean and preprocess data
###Code
## full_remove takes a string x and a list of characters removal_list
## returns x with all the characters in removal_list replaced by ' '
def full_remove(x, removal_list):
for w in removal_list:
x = x.replace(w, ' ')
return x
## Remove digits
digits = [str(x) for x in range(10)]
digit_less = [full_remove(x, digits) for x in sentences]
## Remove punctuation
punc_less = [full_remove(x, list(string.punctuation)) for x in digit_less]
## Make everything lower-case
sents_lower = [x.lower() for x in punc_less]
## Define our stop words
stop_set = set(['the', 'a', 'an', 'i', 'he', 'she', 'they', 'to', 'of', 'it', 'from'])
## Remove stop words
sents_split = [x.split() for x in sents_lower]
sents_processed = [" ".join(list(filter(lambda a: a not in stop_set, x))) for x in sents_split]
###Output
_____no_output_____
###Markdown
CountVectorizer
###Code
## Transform to bag of words representation.
vectorizer = CountVectorizer(analyzer = "word", tokenizer=None, preprocessor=None, stop_words=None, max_features=4500)
data_features = vectorizer.fit_transform(sents_processed)
## Append '1' to the end of each vector (constant bias feature), as described above.
data_mat = np.concatenate((data_features.toarray(), np.ones((data_features.shape[0], 1))), axis=1)
###Output
_____no_output_____
###Markdown
Split train and test sets
###Code
## Split the data into testing and training sets
np.random.seed(0)
test_inds = np.append(np.random.choice((np.where(y==-1))[0], 250, replace=False), np.random.choice((np.where(y==1))[0], 250, replace=False))
train_inds = list(set(range(len(labels))) - set(test_inds))
train_data = data_mat[train_inds,]
train_labels = y[train_inds]
test_data = data_mat[test_inds,]
test_labels = y[test_inds]
print("train data: ", train_data.shape)
print("test data: ", test_data.shape)
###Output
_____no_output_____
###Markdown
2. Fitting SVM to the dataIn support vector machines, we are given a set of examples $(x_1, y_1), \ldots, (x_n, y_n)$ and we want to find a weight vector $w \in \mathbb{R}^d$ that solves the following optimization problem:$$ \min_{w \in \mathbb{R}^d} \| w \|^2 + C \sum_{i=1}^n \xi_i $$$$ \text{subject to } y_i \langle w, x_i \rangle \geq 1 - \xi_i \text{ for all } i=1,\ldots, n$$`scikit-learn` provides an SVM solver that we will use. The following routine takes as input the constant `C` (from the above optimization problem) and returns the training and test error of the resulting SVM model. It is invoked as follows:* `training_error, test_error = fit_classifier(C)`The default value for parameter `C` is 1.0.
###Code
def fit_classifier(C_value=1.0):
clf = svm.LinearSVC(C=C_value, loss='hinge').fit(train_data,train_labels)
## Get predictions on training data
train_preds = clf.predict(train_data)
train_error = float(np.sum((train_preds > 0.0) != (train_labels > 0.0)))/len(train_labels)
## Get predictions on test data
test_preds = clf.predict(test_data)
test_error = float(np.sum((test_preds > 0.0) != (test_labels > 0.0)))/len(test_labels)
return train_error, test_error
cvals = [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0, 10000.0]
for c in cvals:
train_error, test_error = fit_classifier(c)
print("Error rate for C = %0.2f: train %0.3f test %0.3f" % (c, train_error, test_error))
###Output
_____no_output_____
###Markdown
3. Evaluating C by k-fold cross-validationAs we can see, the choice of `C` has a very significant effect on the performance of the SVM classifier. We were able to assess this because we have a separate test set. In general, however, this is a luxury we won't possess. How can we choose `C` based only on the training set?A reasonable way to estimate the error associated with a specific value of `C` is by **`k-fold cross validation`**:* Partition the training set `S` into `k` equal-sized sized subsets `S_1, S_2, ..., S_k`.* For `i=1,2,...,k`, train a classifier with parameter `C` on `S - S_i` (all the training data except `S_i`) and test it on `S_i` to get error estimate `e_i`.* Average the errors: `(e_1 + ... + e_k)/k`The following procedure, **cross_validation_error**, does exactly this. It takes as input:* the training set `x,y`* the value of `C` to be evaluated* the integer `k`and it returns the estimated error of the classifier for that particular setting of `C`. Look over the code carefully to understand exactly what it is doing.
###Code
def cross_validation_error(x, y, C_value, k):
n = len(y)
## Randomly shuffle indices
indices = np.random.permutation(n)
## Initialize error
err = 0.0
## Iterate over partitions
for i in range(k):
## Partition indices
        test_indices = indices[int(i*(n/k)):int((i+1)*(n/k))]
train_indices = np.setdiff1d(indices, test_indices)
## Train classifier with parameter c
clf = svm.LinearSVC(C=C_value, loss='hinge')
clf.fit(x[train_indices], y[train_indices])
## Get predictions on test partition
preds = clf.predict(x[test_indices])
## Compute error
err += float(np.sum((preds > 0.0) != (y[test_indices] > 0.0)))/len(test_indices)
return err/k
###Output
_____no_output_____
###Markdown
4. Picking a value of C The procedure **cross_validation_error** (above) evaluates a single candidate value of `C`. We need to use it repeatedly to identify a good `C`. **For you to do:** Write a function to choose `C`. It will be invoked as follows:* `c, err = choose_parameter(x,y,k)`where* `x,y` is the training data* `k` is the number of folds of cross-validation* `c` is chosen value of the parameter `C`* `err` is the cross-validation error estimate at `c`Note: This is a tricky business because a priori, even the order of magnitude of `C` is unknown. Should it be 0.0001 or 10000? You might want to think about trying multiple values that are arranged in a geometric progression (such as powers of ten). *In addition to returning a specific value of `C`, your function should **plot** the cross-validation errors for all the values of `C` it tried out (possibly using a log-scale for the `C`-axis).*
###Code
plot_data = []
def zoom_range(c, err, low, hi, x, y, k):
if hi - low < 0.05:
# print('found in: [{:.3f} < {:.3f} < {:.3f}]'.format(low, err, hi))
fig, ax = plt.figure(), plt.gca()
ax.scatter([x for x, y in plot_data], [y for x, y in plot_data], linewidth=2, color='green')
ax.set_xscale('log')
plt.xlabel('C')
plt.ylabel('Error')
plt.show()
return (c, err)
c_space = np.linspace(low, hi, 5)
err_space = np.zeros(5)
for i, c in enumerate(c_space):
err_space[i] = cross_validation_error(x, y, c, k)
plot_data.append([c, err_space[i]])
# print('index: {}, error: {:.3f}, C: {:.3f} [{:.3f} - {:.3f}]'.format(i, err_space[i], c, low, hi))
if np.argmin(err_space) == 0:
return zoom_range(c_space[0], err_space[0], c_space[0]/4, c_space[0]*2, x, y, k)
elif np.argmin(err_space) == 4:
return zoom_range(c_space[4], err_space[4], c_space[4]/2, c_space[4]*4, x, y, k)
else:
return zoom_range(c_space[np.argmin(err_space)], err_space[np.argmin(err_space)],
c_space[np.argmin(err_space)-1], c_space[np.argmin(err_space)+1], x, y, k)
def choose_parameter(x, y, k):
return zoom_range(0, 1, 0.1, 10, x, y, k)
###Output
_____no_output_____
###Markdown
Now let's try out your routine!
###Code
c, err = choose_parameter(train_data, train_labels, 10)
print("Choice of C: ", c)
print("Cross-validation error estimate: ", err)
## Train it and test it
clf = svm.LinearSVC(C=c, loss='hinge')
clf.fit(train_data, train_labels)
preds = clf.predict(test_data)
error = float(np.sum((preds > 0.0) != (test_labels > 0.0)))/len(test_labels)
print("Test error: ", error)
###Output
_____no_output_____ |
notebooks/velMap_nestedSampler.ipynb | ###Markdown
Sample galaxy properties
###Code
gal_ID = '7443-12705'
#gal_ID = '8486-12701'
manga_plate, manga_IFU = gal_ID.split('-')
gal_filename = VEL_MAP_FOLDER + manga_plate + '/' + manga_IFU + '/manga-' + gal_ID + '-MAPS-HYB10-GAU-MILESHC.fits.gz'
Ha_vel, Ha_vel_ivar, Ha_vel_mask, r_band, r_band_ivar = extract_data(gal_filename)
mr_band = ma.array(r_band, mask=Ha_vel_mask)
mHa_vel = ma.array(Ha_vel, mask=Ha_vel_mask)
mHa_vel_ivar = ma.array(Ha_vel_ivar, mask=Ha_vel_mask)
oneD_fit_file = '../spirals/DRPall-master_file_30.txt'
oneD_fit_parameters = QTable.read(oneD_fit_file, format='ascii.ecsv')
gal_oneD_fit_parameters_boolean = np.logical_and(oneD_fit_parameters['MaNGA_plate'] == int(manga_plate),
oneD_fit_parameters['MaNGA_IFU'] == int(manga_IFU))
gal_oneD_fit_parameters_row = oneD_fit_parameters[gal_oneD_fit_parameters_boolean]
i_angle = np.arccos(gal_oneD_fit_parameters_row['ba'][0])
center = np.unravel_index(ma.argmax(mr_band), mr_band.shape)
v_sys = mHa_vel[center]
phi = gal_oneD_fit_parameters_row['phi'][0].value*np.pi/180
v_max = gal_oneD_fit_parameters_row['avg_v_max'][0].value
r_turn = gal_oneD_fit_parameters_row['avg_r_turn'][0].value
alpha = gal_oneD_fit_parameters_row['avg_alpha'][0]
# Find spaxel along semi-major axis
delta_x = int(center[1]*0.5)
delta_y = int(delta_x/np.tan(phi))
semi_major_axis_spaxel = tuple(np.subtract(center, (-delta_y, delta_x)))
# Check value along semi-major axis
if mHa_vel[semi_major_axis_spaxel] < 0:
phi_guess = phi + np.pi
else:
phi_guess = phi
pos_params = [v_sys, i_angle, center[0], center[1], phi_guess]
vel_params = [v_max, r_turn, alpha]
best_fit_params = pos_params + vel_params
'''
best_fit_values = {'v_sys':v_sys,
'ba':gal_oneD_fit_parameters_row['ba'][0],
'x0':center[0],
'y0':center[1],
'phi':phi_guess,
'r_turn':r_turn,
'v_max':v_max,
'alpha':alpha}
''';
print(best_fit_params)
map_shape = mHa_vel.shape
H_0 = 100 # Hubble's Constant in units of h km/s/Mpc
c = 299792.458 # Speed of light in units of km/s
MANGA_FIBER_DIAMETER = 2*(1/60)*(1/60)*(np.pi/180) # angular fiber diameter (2") in radians
MANGA_SPAXEL_SIZE = 0.5*(1/60)*(1/60)*(np.pi/180) # spaxel size (0.5") in radians
dist_to_galaxy_Mpc = c*gal_oneD_fit_parameters_row['redshift'][0]/H_0
dist_to_galaxy_kpc = dist_to_galaxy_Mpc*1000
pix_scale_factor = dist_to_galaxy_kpc*np.tan(MANGA_SPAXEL_SIZE)
###Output
_____no_output_____
###Markdown
Functions for dynesty sampler
###Code
def uniform(a, b, u):
"""Given u in [0,1], return a uniform number in [a,b]."""
return a + (b-a)*u
def jeffreys(a, b, u):
"""Given u in [0,1], return a Jeffreys random number in [a,b]."""
return a**(1-u) * b**u
def prior_xforBB(u):
"""
Priors for the parameters of the BB velocity curve model.
Required by the dynesty sampler.
Parameters
----------
u : ndarray
Array of uniform random numbers between 0 and 1.
Returns
-------
priors : ndarray
Transformed random numbers giving prior ranges on model parameters.
"""
v_sys = uniform(-300, 300, u[0])
i_angle = uniform(0, np.pi, u[1])
i_center = jeffreys(0, 74, u[2])
j_center = jeffreys(0, 74, u[3])
phi = uniform(-np.pi, np.pi, u[4])
v_max = uniform(1., 1e5, u[5])
r_turn = uniform(0.1, 100., u[6])
alpha = uniform(np.nextafter(0, 1), 100., u[7])
return v_sys, i_angle, i_center, j_center, phi, v_max, r_turn, alpha
def prior_xforBB_vel(u):
"""
Priors for the parameters of the BB velocity curve model.
Required by the dynesty sampler.
Parameters
----------
u : ndarray
Array of uniform random numbers between 0 and 1.
Returns
-------
priors : ndarray
Transformed random numbers giving prior ranges on model parameters.
"""
v_max = uniform(1., 1e5, u[0])
r_turn = uniform(0.1, 100., u[1])
alpha = uniform(np.nextafter(0, 1), 100., u[2])
return v_max, r_turn, alpha
###Output
_____no_output_____
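###Markdown
For intuition, the prior-transform functions above map dynesty's unit-cube samples onto the physical parameter ranges; a couple of spot checks (illustrative only, not part of the original analysis):
###Code
print(uniform(-300, 300, 0.5))   # midpoint of the uniform range -> 0.0
print(jeffreys(1., 100., 0.5))   # log-midpoint of [1, 100] -> 10.0
###Output
_____no_output_____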
###Markdown
Nested sampler
###Code
dsampler = dynesty.DynamicNestedSampler(vel_logL_BB, prior_xforBB_vel, ndim=3,
logl_args=(pos_params, pix_scale_factor, mHa_vel, mHa_vel_ivar),
nlive=2000,
bound='multi',
sample='auto')
dsampler.run_nested()
dres1 = dsampler.results
###Output
0it [00:00, ?it/s]Traceback (most recent call last):
File "/Users/kellydouglass/opt/anaconda3/lib/python3.8/site-packages/dynesty/dynesty.py", line 939, in __call__
return self.func(x, *self.args, **self.kwargs)
File "/Users/kellydouglass/Documents/Research/Rotation_curves/RotationCurves/spirals/DRP_vel_map_functions.py", line 539, in vel_logL_BB
lambda1 = model_vel_map(params, vel_map.shape, pix_scale, 'BB')
File "/Users/kellydouglass/Documents/Research/Rotation_curves/RotationCurves/spirals/DRP_vel_map_functions.py", line 234, in model_vel_map
r, theta[i,j] = deproject_spaxel((i,j), center, phi, i_angle)
KeyboardInterrupt
0it [00:23, ?it/s] |
RateRegression.ipynb | ###Markdown
Loading Packages
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import matplotlib.dates as mdates
import matplotlib.ticker
%matplotlib notebook
#Linear Regression
from sklearn import linear_model
#used for 3D plot
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
#igore warning
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Reading data and preprocessing
###Code
df = pd.read_csv('swapLiborData.csv')
#convert number to datatime format
for i in range(df.shape[0]):
df.loc[i,'Date'] = pd.to_datetime('1899-12-30') + pd.to_timedelta(df.loc[i,'Date'],'D')
df.head(5)
###Output
_____no_output_____
###Markdown
regress 5-yr against 2-yr (first half of the sample)
###Code
len1 = len(df)
t1 = 0
t2 = int(np.ceil(len1/2))
xX = df.iloc[t1:t2,6:7]
yY = df.iloc[t1:t2,8:9]
regr = linear_model.LinearRegression()
regr.fit(xX, yY)
B = regr.coef_
b_0 = regr.intercept_
rSquared = regr.score(xX, yY)
print(b_0, B)
print(rSquared)
yHat = b_0 + df.iloc[t1:t2,6:7] @ B.T
#plot data
plt.figure(figsize=(8,5)) # set the figure size
plt.plot(df.iloc[t1:t2,0], yHat)
plt.plot(df.iloc[t1:t2,0], df.iloc[t1:t2,8:9])
#adjust display setting
#plt.figure(figsize=(8,5)) # set the figure size
plt.autoscale(enable=True, axis='x', tight=True)
plt.autoscale(enable=True, axis='y', tight=True)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=6))
plt.xticks(rotation='horizontal',horizontalalignment='center')
plt.xlabel('date')
plt.ylabel('5-yr swap rate')
plt.title('original vs. constructed')
plt.legend(labels = ['constructed 5-yr','original 5-yr'], loc='best')
plt.show()
###Output
[1.8079227] [[-0.2480387]]
0.04318013965625678
###Markdown
regress 5-yr against 2-yr (second half of the sample)
###Code
len1 = len(df)
t1 = int(np.ceil(len1/2))
t2 = len1
xX = df.iloc[t1:t2,6:7]
yY = df.iloc[t1:t2,8:9]
regr = linear_model.LinearRegression()
regr.fit(xX, yY)
B = regr.coef_
b_0 = regr.intercept_
rSquared = regr.score(xX, yY)
print(b_0, B)
print(rSquared)
yHat = b_0 + df.iloc[t1:t2,6:7] @ B.T
#plot data
plt.figure(figsize=(8,5)) # set the figure size
plt.plot(df.iloc[t1:t2,0], yHat)
plt.plot(df.iloc[t1:t2,0], df.iloc[t1:t2,8:9])
#adjust display setting
plt.autoscale(enable=True, axis='x', tight=True)
plt.autoscale(enable=True, axis='y', tight=True)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=6))
plt.xticks(rotation='horizontal',horizontalalignment='center')
plt.xlabel('date')
plt.ylabel('5-yr swap rate')
plt.title('original vs. constructed')
plt.legend(labels = ['constructed 5-yr','original 5-yr'], loc='best')
plt.show()
###Output
[ 0.43706762] [[ 0.90311227]]
0.96822828184
###Markdown
regress 5-yr against 2-yr (full sample)
###Code
len1 = len(df)
t1 = 0
t2 = len1
xX = df.iloc[t1:t2,6:7]
yY = df.iloc[t1:t2,8:9]
regr = linear_model.LinearRegression()
regr.fit(xX, yY)
B = regr.coef_
b_0 = regr.intercept_
rSquared = regr.score(xX, yY)
print(b_0, B)
print(rSquared)
yHat = b_0 + df.iloc[t1:t2,6:7] @ B.T
#plot data
plt.figure(figsize=(8,5)) # set the figure size
plt.plot(df.iloc[t1:t2,0], yHat)
plt.plot(df.iloc[t1:t2,0], df.iloc[t1:t2,8:9])
#adjust display setting
plt.autoscale(enable=True, axis='x', tight=True)
plt.autoscale(enable=True, axis='y', tight=True)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y'))
plt.gca().xaxis.set_major_locator(mdates.YearLocator(base=1))
plt.xticks(rotation='horizontal',horizontalalignment='center')
plt.xlabel('date')
plt.ylabel('5-yr swap rate')
plt.title('original vs. constructed')
plt.legend(labels = ['constructed 5-yr','original 5-yr'], loc='best')
plt.show()
###Output
[ 1.03876557] [[ 0.62619905]]
0.768824964502
###Markdown
regress 30-yr against 15-yr (first half of the sample)
###Code
len1 = len(df)
t1 = 0
t2 = int(np.ceil(len1/2))
xX = df.iloc[t1:t2,11:12]
yY = df.iloc[t1:t2,12:13]
regr = linear_model.LinearRegression()
regr.fit(xX, yY)
B = regr.coef_
b_0 = regr.intercept_
rSquared = regr.score(xX, yY)
print(b_0, B)
print(rSquared)
yHat = b_0 + df.iloc[t1:t2,11:12] @ B.T
#plot data
plt.figure(figsize=(8,5)) # set the figure size
plt.plot(df.iloc[t1:t2,0], yHat)
plt.plot(df.iloc[t1:t2,0], df.iloc[t1:t2,12:13])
#adjust display setting
plt.autoscale(enable=True, axis='x', tight=True)
plt.autoscale(enable=True, axis='y', tight=True)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=6))
plt.xticks(rotation='horizontal',horizontalalignment='center')
plt.xlabel('date')
plt.ylabel('30-yr swap rate')
plt.title('original vs. constructed')
plt.legend(labels = ['constructed 30-yr','original 30-yr'], loc='best')
plt.show()
###Output
[ 0.02430841] [[ 1.08162197]]
0.994786634158
###Markdown
regress 30-yr against 15-yr (second half of the sample)
###Code
len1 = len(df)
t1 = int(np.ceil(len1/2))
t2 = len1
xX = df.iloc[t1:t2,11:12]
yY = df.iloc[t1:t2,12:13]
regr = linear_model.LinearRegression()
regr.fit(xX, yY)
B = regr.coef_
b_0 = regr.intercept_
rSquared = regr.score(xX, yY)
print(b_0, B)
print(rSquared)
yHat = b_0 + df.iloc[t1:t2,11:12] @ B.T
#plot data
plt.figure(figsize=(8,5)) # set the figure size
plt.plot(df.iloc[t1:t2,0], yHat)
plt.plot(df.iloc[t1:t2,0], df.iloc[t1:t2,12:13])
#adjust display setting
plt.autoscale(enable=True, axis='x', tight=True)
plt.autoscale(enable=True, axis='y', tight=True)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=6))
plt.xticks(rotation='horizontal',horizontalalignment='center')
plt.xlabel('date')
plt.ylabel('30-yr swap rate')
plt.title('original vs. constructed')
plt.legend(labels = ['constructed 30-yr','original 30-yr'], loc='best')
plt.show()
###Output
[ 0.44771641] [[ 0.85177058]]
0.995169312527
###Markdown
regress 30-yr against 15-yr (full sample)
###Code
len1 = len(df)
t1 = 0
t2 = len1
xX = df.iloc[t1:t2,11:12]
yY = df.iloc[t1:t2,12:13]
regr = linear_model.LinearRegression()
regr.fit(xX, yY)
B = regr.coef_
b_0 = regr.intercept_
rSquared = regr.score(xX, yY)
print(b_0, B)
print(rSquared)
yHat = b_0 + df.iloc[t1:t2,11:12] @ B.T
#plot data
plt.figure(figsize=(8,5)) # set the figure size
plt.plot(df.iloc[t1:t2,0], yHat)
plt.plot(df.iloc[t1:t2,0], df.iloc[t1:t2,12:13])
#adjust display setting
plt.autoscale(enable=True, axis='x', tight=True)
plt.autoscale(enable=True, axis='y', tight=True)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=6))
plt.xticks(rotation='horizontal',horizontalalignment='center')
plt.xlabel('date')
plt.ylabel('30-yr swap rate')
plt.title('original vs. constructed')
plt.legend(labels = ['constructed 30-yr','original 30-yr'], loc='best')
plt.show()
###Output
[ 0.1938514] [[ 0.98636103]]
0.953895616954
###Markdown
regress 30-yr against 2-yr, 5-yr, and 10-yr (first half of the sample)
###Code
len1 = len(df)
t1 = 0
t2 = int(np.ceil(len1/2))
xX = df.iloc[t1:t2,[6,8,10]]
yY = df.iloc[t1:t2,12:13]
regr = linear_model.LinearRegression()
regr.fit(xX, yY)
B = regr.coef_
b_0 = regr.intercept_
rSquared = regr.score(xX, yY)
print(b_0, B)
print(rSquared)
yHat = b_0 + df.iloc[t1:t2,[6,8,10]] @ B.T
#plot data
plt.figure(figsize=(8,5)) # set the figure size
plt.plot(df.iloc[t1:t2,0], yHat)
plt.plot(df.iloc[t1:t2,0], df.iloc[t1:t2,12:13])
#adjust display setting
plt.autoscale(enable=True, axis='x', tight=True)
plt.autoscale(enable=True, axis='y', tight=True)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=6))
plt.xticks(rotation='horizontal',horizontalalignment='center')
plt.xlabel('date')
plt.ylabel('30-yr swap rate')
plt.title('original vs. constructed')
plt.legend(labels = ['constructed 30-yr','original 30-yr'], loc='best')
plt.show()
###Output
[ 0.35040478] [[ 0.01235598 -0.76618625 1.61493626]]
0.993819498302
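###Markdown
The seven regression cells above repeat the same fit-and-plot steps. A small helper along the lines of the sketch below (illustrative only; it assumes the same `df` layout, with predictors and response chosen by integer column positions) would remove most of the duplication:
###Code
def run_swap_regression(df, x_cols, y_col, t1, t2, label):
    """Sketch: fit y ~ X on rows t1:t2 of df and return (intercept, coefs, R^2, fitted values)."""
    xX = df.iloc[t1:t2, x_cols]
    yY = df.iloc[t1:t2, [y_col]]
    regr = linear_model.LinearRegression()
    regr.fit(xX, yY)
    yHat = regr.intercept_ + xX @ regr.coef_.T     # same construction as in the cells above
    print(label, regr.intercept_, regr.coef_, regr.score(xX, yY))
    return regr.intercept_, regr.coef_, regr.score(xX, yY), yHat

# example call (hypothetical column positions taken from the cells above):
# _ = run_swap_regression(df, [6], 8, 0, len(df), '5y on 2y, full sample')
###Output
_____no_output_____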
|