path | concatenated_notebook
---|---
notebooks/MLencoding.ipynb | ###Markdown
How to use the MLencoding class This is a tutorial on how to use our MLencoding package to build encoding models and predict spikes.
###Code
import warnings
import numpy as np
import pandas as pd
import scipy.io
###Output
_____no_output_____
###Markdown
Load encoding package
###Code
from mlencoding import *
###Output
Using Theano backend.
###Markdown
1. Data Below we load a dataset available on CRCNS: a [Macaque M1](http://crcns.org/data-sets/movements/dream/downloading-dream) dataset (from [Stevenson et al. 2011](http://jn.physiology.org/content/106/2/764.short)). The data has been organized in Matlab into neat arrays for easy loading here. We will soon want a single numpy array representing the external covariates and a single numpy vector representing the neural response. The data array X will be of dimensions (n, p), where n is the number of time bins and p is the number of covariates, and the response y will be of dimensions (n,). We use pandas as an intermediate tool for data organizing, but it's not strictly necessary - if using your own data, just wrangle it into numpy arrays of the proper dimensions. Load data
###Code
m1_imported = scipy.io.loadmat('../data/m1_stevenson_2011.mat')
###Output
_____no_output_____
###Markdown
1.1 Covariates Pull the data into a pandas dataframe. This allows us to easily access covariates by name.
###Code
data = pd.DataFrame()
data['time'] = m1_imported['time'][0]
data['handPos_x'] = m1_imported['handPos'][0]
data['handPos_y'] = m1_imported['handPos'][1]
data['handVel_x'] = m1_imported['handVel'][0]
data['handVel_y'] = m1_imported['handVel'][1]
#### Compute more covariates/features
#These will be used as the 'engineered' features for improving the GLM's performance.
data['velDir'] = np.arctan2(data['handVel_y'], data['handVel_x'])
data['cos_velDir'] = np.cos(data['velDir'])
data['sin_velDir'] = np.sin(data['velDir'])
data['speed'] = np.sqrt(data['handVel_x'].values**2+data['handVel_y'].values**2)
r = np.arctan2(data['handPos_y'], data['handPos_x'])
data['cos_PosDir'] = np.cos(r)
data['sin_PosDir'] = np.sin(r)
data['radial_Pos'] = np.sqrt(data['handPos_x'].values**2+data['handPos_y'].values**2)
data.head()
###Output
_____no_output_____
###Markdown
2. Making an encoding model We instantiate the object like this:
###Code
glm_model = MLencoding(tunemodel = 'glm')
###Output
_____no_output_____
###Markdown
We can then train it on some data. Let's go for 3/4 of the data we have for some neuron.
###Code
neuron_n = 1
X = data[['handPos_x','handPos_y','handVel_x','handVel_y']].values
y = m1_imported['spikes'][neuron_n]
n_samples = X.shape[0]
threefourths = int(n_samples*3/4)
X_train = X[:threefourths,:]
y_train = y[:threefourths]
# Now we train the model
glm_model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Let's predict the neural response on the held-out test set.
###Code
X_test = X[threefourths:,:]
y_test = y[threefourths:]
y_hat = glm_model.predict(X_test)
###Output
_____no_output_____
###Markdown
How did we do? We can score this prediction with the class's internal function 'poisson_pseudoR2'.
###Code
# The 'null model' we measure against is the mean of the train dataset.
y_null = np.mean(y_train)
pr2_glm = glm_model.poisson_pseudoR2(y_test, y_hat, y_null)
print(pr2_glm)
###Output
0.0625913964434
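###Markdown
For reference, the pseudo-R² printed above is commonly defined from the Poisson log-likelihoods of the fitted model, the null (mean-rate) model, and the saturated model. The sketch below shows one common formulation; the class's internal `poisson_pseudoR2` may differ in details such as numerical safeguards.
###Code
def poisson_pseudoR2_sketch(y, y_hat, y_null):
    # Poisson log-likelihoods, dropping the log(y!) term, which cancels in the ratio.
    eps = 1e-10  # small constant to avoid log(0); an assumed safeguard
    L_model = np.sum(y * np.log(eps + y_hat) - y_hat)
    L_null  = np.sum(y * np.log(eps + y_null) - y_null)
    L_sat   = np.sum(y * np.log(eps + y) - y)
    return 1 - (L_sat - L_model) / (L_sat - L_null)

# Should be close to the value printed above:
print(poisson_pseudoR2_sketch(y_test, y_hat, y_null))
###Output
_____no_output_____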
###Markdown
Cross-validationLet's now obtain the predictions and scores of 10-fold cross-validation for a GLM.
###Code
Y_hat, PR2s = glm_model.fit_cv(X,y, n_cv = 10, verbose = 2)
###Output
...runnning cv-fold 1 of 10
pR2: 0.0488023178838
...runnning cv-fold 2 of 10
pR2: 0.0434830590622
...runnning cv-fold 3 of 10
pR2: 0.0513488923378
...runnning cv-fold 4 of 10
pR2: 0.0521074580784
...runnning cv-fold 5 of 10
pR2: 0.0449312912574
...runnning cv-fold 6 of 10
pR2: 0.062685886475
...runnning cv-fold 7 of 10
pR2: 0.0459586387009
...runnning cv-fold 8 of 10
pR2: 0.0578141187789
...runnning cv-fold 9 of 10
pR2: 0.0523027349251
...runnning cv-fold 10 of 10
pR2: 0.0496125678667
pR2_cv: 0.050905 (+/- 0.001765)
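###Markdown
The summary line appears to be the across-fold mean together with its standard error (std / √n_folds). Assuming `PR2s` holds the per-fold scores returned by `fit_cv`, the following reproduces it.
###Code
pr2_folds = np.array(PR2s)  # per-fold pseudo-R2 values
mean_pr2 = np.mean(pr2_folds)
sem_pr2 = np.std(pr2_folds) / np.sqrt(len(pr2_folds))
print('pR2_cv: %f (+/- %f)' % (mean_pr2, sem_pr2))
# With the ten values printed above this gives 0.050905 (+/- 0.001765).
###Output
_____no_output_____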
###Markdown
Other methods: neural networks, random forest, XGBoost Using other encoding models is as simple as this:
###Code
nn_model = MLencoding(tunemodel='feedforward_nn')
Y_hat, PR2s = nn_model.fit_cv(X,y, n_cv = 10, verbose = 2)
###Output
...runnning cv-fold 1 of 10
###Markdown
Predicting spikes using spike or covariate history MLencoding supports models that also use previous covariate values to predict the current spike rate. Spike history is also supported. When you instantiate a model with the `spike_history=True` or `cov_history=True` keywords, all future calls to `fit`, `predict`, and `fit_cv` will automatically construct a new covariate matrix with additional columns. These columns represent the covariate history, and this matrix is then used for fitting. Currently, the covariate-history columns are built from raised cosine basis functions. You can define how many temporal basis functions you want with `n_filters`; these span the interval [0, `max_time`]. Times are measured in milliseconds. In order to perform this calculation, the model needs to know how many milliseconds are in each time bin (set this with `window`). A toy sketch of such a basis appears after the example below.
###Code
xgb_history = MLencoding(tunemodel = 'xgboost',
cov_history = False, spike_history=True, # We can choose!
window = 50, #this dataset has 50ms time bins
n_filters = 2,
max_time = 250 )
xgb_history.fit_cv(X,y, verbose = 2, continuous_folds = True);
###Output
...runnning cv-fold 0 of 10
pR2: 0.172896381616
...runnning cv-fold 1 of 10
pR2: 0.151629755677
...runnning cv-fold 2 of 10
pR2: 0.183958679349
...runnning cv-fold 3 of 10
pR2: 0.149697611433
...runnning cv-fold 4 of 10
pR2: 0.127944114605
...runnning cv-fold 5 of 10
pR2: 0.146583568384
...runnning cv-fold 6 of 10
pR2: 0.227747587776
...runnning cv-fold 7 of 10
pR2: 0.265500709309
...runnning cv-fold 8 of 10
pR2: 0.275622248323
...runnning cv-fold 9 of 10
pR2: 0.266005528721
pR2_cv: 0.196759 (+/- 0.017011)
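###Markdown
For intuition, the sketch below builds a toy raised-cosine basis of the kind described above: `n_filters` smooth bumps tiling the interval [0, `max_time`] ms, sampled at the bin width `window`. The package's exact spacing and normalization may differ; this is only meant to illustrate the idea.
###Code
def raised_cosine_basis(n_filters, max_time, window):
    """Toy raised-cosine basis: columns are smooth bumps over history lags (in ms)."""
    lags = np.arange(0, max_time + window, window)   # lag of each history bin, in ms
    centers = np.linspace(0, max_time, n_filters)    # bump centers
    width = centers[1] - centers[0] if n_filters > 1 else float(max_time)
    basis = np.zeros((len(lags), n_filters))
    for j, c in enumerate(centers):
        arg = np.clip((lags - c) * np.pi / (2 * width), -np.pi, np.pi)
        basis[:, j] = 0.5 * (1 + np.cos(arg))        # raised cosine bump
    return basis                                     # shape: (n_history_bins, n_filters)

# Each history feature is then the dot product of the recent spike counts
# (or covariate values) with one basis column.
raised_cosine_basis(n_filters=2, max_time=250, window=50)
###Output
_____no_output_____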
###Markdown
Here is a version that uses spike history with random folds.
###Code
# First we need to set n_every > max_time/window (here 250/50 = 5, so n_every = 6 works).
xgb_history_rand = MLencoding(tunemodel = 'xgboost',
cov_history = False, spike_history=True,
window = 50,
n_filters = 2,
max_time = 250, n_every = 6 )
xgb_history_rand.fit_cv(X,y, verbose = 2, continuous_folds = False);
###Output
...runnning cv-fold 1 of 10
pR2: 0.172660664684
...runnning cv-fold 2 of 10
pR2: 0.201177093824
...runnning cv-fold 3 of 10
pR2: 0.181089793866
...runnning cv-fold 4 of 10
pR2: 0.148885335305
...runnning cv-fold 5 of 10
pR2: 0.183087263289
...runnning cv-fold 6 of 10
pR2: 0.17288721494
...runnning cv-fold 7 of 10
pR2: 0.130874947193
...runnning cv-fold 8 of 10
pR2: 0.175744079298
...runnning cv-fold 9 of 10
pR2: 0.149755921527
...runnning cv-fold 10 of 10
pR2: 0.0676141844825
pR2_cv: 0.158378 (+/- 0.011328)
###Markdown
Fitting an LSTM There's nothing special about fitting an LSTM in our implementation. Just be sure to set `spike_history=True` and `cov_history=True`, and to use continuous CV folds.
###Code
lstm = MLencoding(tunemodel = 'lstm',
cov_history = True, spike_history=True, # We can choose!
window = 50, #this dataset has 50ms time bins
n_filters = 4,
max_time = 250 )
lstm.fit_cv(X,y, verbose = 2, continuous_folds = True);
###Output
...runnning cv-fold 0 of 10
pR2: 0.178190232035
...runnning cv-fold 1 of 10
pR2: 0.169885240103
...runnning cv-fold 2 of 10
pR2: 0.176461553019
...runnning cv-fold 3 of 10
pR2: 0.161520848555
...runnning cv-fold 4 of 10
pR2: 0.132098223238
...runnning cv-fold 5 of 10
pR2: 0.149307415463
...runnning cv-fold 6 of 10
pR2: 0.246421820011
...runnning cv-fold 7 of 10
pR2: 0.269467384959
...runnning cv-fold 8 of 10
pR2: 0.276576452182
...runnning cv-fold 9 of 10
pR2: 0.281433609311
pR2_cv: 0.204136 (+/- 0.017292)
###Markdown
Getting and setting model parameters To get the current set of parameters, we can run either of:
###Code
nn_model.params
# or nn_model.get_params()
###Output
_____no_output_____
###Markdown
We can set the parameters with the `set_params` method. This method takes a dictionary, which updates the current set of parameters used.
###Code
nn_model.set_params({'dropout':0.3})
nn_model.params
###Output
_____no_output_____
###Markdown
Hyperparameter optimization using hyperopt We might not want the default parameters. Here's how to find some better ones.
###Code
from hyperopt import fmin, hp, Trials, tpe, STATUS_OK
# Make sure these are in nn_model.params, otherwise you'll get a key error
space4rf = {
'dropout': hp.uniform('dropout', 0., 0.6),
'n1': hp.uniform('n1', 2,128),
'n2': hp.uniform('n2', 1,15),
}
#object that holds iteration results
trials = Trials()
#define model
nn_model = MLencoding(tunemodel='feedforward_nn')
#function to minimize
def fnc(params):
# cast the parameters that need to be integers
params['n1'] = int(params['n1'])
params['n2'] = int(params['n2'])
nn_model.set_params(params)
# Remember that X and y have been defined above.
Y_hat, PR2s = nn_model.fit_cv(X,y, n_cv = 5, verbose = 0)
# return negative since hyperopt always minimizes the function
return -np.mean(PR2s)
###Output
_____no_output_____
###Markdown
Let's assume that our neuron 1 is a held-out neuron for parameter optimization. Let's optimize:
###Code
hyperoptBest = fmin(fnc, space4rf, algo=tpe.suggest, max_evals=50, trials=trials)
###Output
_____no_output_____
###Markdown
Defining your own models The `MLencoding` class is flexible and can be used with your own pre-built models as long as they have `fit` and `predict` methods. Let's build a different type of neural network, for example; a scikit-learn sketch follows after this example's output.
###Code
# Keras imports needed for this cell (assuming the Keras/Theano stack loaded above)
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Nadam

my_model = Sequential()
my_model.add(Dense(100, input_dim=np.shape(X)[1], init='glorot_normal',
activation='relu',))
my_model.add(Dense(1,activation='softplus'))
optim = Nadam()
my_model.compile(loss='poisson', optimizer=optim,)
my_enc = MLencoding(tunemodel = my_model)
my_enc.fit_cv(X,y,n_cv=5,verbose=2);
###Output
...runnning cv-fold 1 of 5
pR2: -0.00401729001754
...runnning cv-fold 2 of 5
pR2: -0.00440856722819
...runnning cv-fold 3 of 5
pR2: -0.00344133554292
...runnning cv-fold 4 of 5
pR2: -0.000698628352245
...runnning cv-fold 5 of 5
pR2: -0.00209311949187
pR2_cv: -0.002932 (+/- 0.000610)
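###Markdown
Since any object exposing `fit(X, y)` and `predict(X)` can be passed in, a scikit-learn regressor works the same way. This is a sketch that is not part of the original notebook; the hyperparameters here are arbitrary.
###Code
from sklearn.ensemble import RandomForestRegressor

# Any estimator with fit/predict can be wrapped by MLencoding in the same way.
rf = RandomForestRegressor(n_estimators=100, max_depth=10)
rf_enc = MLencoding(tunemodel=rf)
rf_enc.fit_cv(X, y, n_cv=5, verbose=2);
###Output
_____no_output_____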
|
starter_notebook_reverse_training_Setswana2.ipynb | ###Markdown
Masakhane - Reverse Machine Translation for African Languages (Using JoeyNMT) > NB > - The purpose of this notebook is to build models that translate African languages (target language) *into* English (source language). This will allow us in future to make translations from one African language to another. If you'd like to translate *from* English, please use [this](https://github.com/masakhane-io/masakhane-mt/blob/master/starter_notebook.ipynb) starter notebook instead. > - We call this reverse training because normally we build models that make translations from the source language (English) to the target language. But in this case we are doing the reverse: building models that make translations from the target language to the source (English). Note before beginning: - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus. - The tl;dr: go to the **"TODO"** comments, which will tell you what to update to get up and running. - If you actually want to have a clue what you're doing, read the text and peek at the links. - With 100 epochs, it should take around 7 hours to run in Google Colab. - Once you've gotten a result for your language, please attach and email the notebook that generated it to [email protected] - If you care enough and get a chance, writing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685). Retrieve your data & make a parallel corpus If you want to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and converting them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details. Once you have your corpus files in TMX format (an XML structure which includes the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your TMX file to a pandas dataframe. Submitted by Tebello Lebesa 2388016. Submitted by Korstiaan Wapenaar 1492459.
###Code
from google.colab import drive
drive.mount('/content/drive')
# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:
# These will also become the suffix's of all vocab and corpus files used throughout
import os
source_language = "en"
target_language = "tn"
lc = False # If True, lowercase the data.
seed = 42 # Random seed for shuffling.
tag = "baseline" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
os.environ["tag"] = tag
# This will save it to a folder in our gdrive instead!
!mkdir -p "/content/drive/My Drive/masakhane/$tgt-$src-$tag"
os.environ["gdrive_path"] = "/content/drive/My Drive/masakhane/%s-%s-%s" % (target_language, source_language, tag)
!echo $gdrive_path
# Install opus-tools
! pip install opustools-pkg
# Downloading our corpus
! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q
# extract the corpus file
! gunzip JW300_latest_xml_$src-$tgt.xml.gz
# Download the global test set.
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en
# And the specific test set for this language pair.
os.environ["trg"] = target_language
os.environ["src"] = source_language
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en
! mv test.en-$trg.en test.en
! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg
! mv test.en-$trg.$trg test.$trg
# Read the test data to filter from train and dev splits.
# Store english portion in set for quick filtering checks.
en_test_sents = set()
filter_test_sents = "test.en-any.en"
j = 0
with open(filter_test_sents) as f:
for line in f:
en_test_sents.add(line.strip())
j += 1
print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))
import pandas as pd
# TMX file to dataframe
source_file = 'jw300.' + source_language
target_file = 'jw300.' + target_language
source = []
target = []
skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.
with open(source_file) as f:
for i, line in enumerate(f):
# Skip sentences that are contained in the test set.
if line.strip() not in en_test_sents:
source.append(line.strip())
else:
skip_lines.append(i)
with open(target_file) as f:
for j, line in enumerate(f):
# Only add to corpus if corresponding source was not skipped.
if j not in skip_lines:
target.append(line.strip())
print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i))
df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])
# if you get TypeError: data argument can't be an iterator is because of your zip version run this below
#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])
df.head(3)
###Output
Loaded data and skipped 5241/909627 lines since contained in test set.
###Markdown
Pre-processing and export It is generally a good idea to remove duplicate and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned. In addition, we will split our data into dev/test/train and export it to the filesystem.
###Code
# drop duplicate translations
df_pp = df.drop_duplicates()
# drop conflicting translations
# (this is optional and something that you might want to comment out
# depending on the size of your corpus)
df_pp.drop_duplicates(subset='source_sentence', inplace=True)
df_pp.drop_duplicates(subset='target_sentence', inplace=True)
# Shuffle the data to remove bias in dev set selection.
df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)
# Install fuzzy wuzzy to remove "almost duplicate" sentences in the
# test and training sets.
! pip install fuzzywuzzy
! pip install python-Levenshtein
import time
from fuzzywuzzy import process
import numpy as np
from os import cpu_count
from functools import partial
from multiprocessing import Pool
# reset the index of the training set after previous filtering
df_pp.reset_index(drop=False, inplace=True)
# Remove samples from the training data set if they "almost overlap" with the
# samples in the test set.
# Filtering function. Adjust pad to narrow down the candidate matches to
# within a certain length of characters of the given sample.
def fuzzfilter(sample, candidates, pad):
candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad]
if len(candidates) > 0:
return process.extractOne(sample, candidates)[1]
else:
return np.nan
# start_time = time.time()
# ### iterating over pandas dataframe rows is not recomended, let use multi processing to apply the function
# with Pool(cpu_count()-1) as pool:
# scores = pool.map(partial(fuzzfilter, candidates=list(en_test_sents), pad=5), df_pp['source_sentence'])
# hours, rem = divmod(time.time() - start_time, 3600)
# minutes, seconds = divmod(rem, 60)
# print("done in {}h:{}min:{}seconds".format(hours, minutes, seconds))
# # Filter out "almost overlapping samples"
# df_pp = df_pp.assign(scores=scores)
# df_pp = df_pp[df_pp['scores'] < 95]
# This section does the split between train/dev for the parallel corpora then saves them as separate files
# We use 1000 dev test and the given test set.
import csv
# Do the split between dev/train and create parallel corpora
num_dev_patterns = 1000
# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.
if lc: # Julia: making lowercasing optional
df_pp["source_sentence"] = df_pp["source_sentence"].str.lower()
df_pp["target_sentence"] = df_pp["target_sentence"].str.lower()
# Julia: test sets are already generated
dev = df_pp.tail(num_dev_patterns) # Herman: Error in original
stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)
with open("train."+source_language, "w") as src_file, open("train."+target_language, "w") as trg_file:
for index, row in stripped.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
with open("dev."+source_language, "w") as src_file, open("dev."+target_language, "w") as trg_file:
for index, row in dev.iterrows():
src_file.write(row["source_sentence"]+"\n")
trg_file.write(row["target_sentence"]+"\n")
#stripped[["source_sentence"]].to_csv("train."+source_language, header=False, index=False) # Herman: Added `header=False` everywhere
#stripped[["target_sentence"]].to_csv("train."+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.
#dev[["source_sentence"]].to_csv("dev."+source_language, header=False, index=False)
#dev[["target_sentence"]].to_csv("dev."+target_language, header=False, index=False)
# Double-check the format below. There should be no extra quotation marks or weird characters.
! head train.*
! head dev.*
###Output
==> train.en <==
When a Christian dies , his material wealth is of no more value to him than Jesus ’ garment was to him when he died .
“ When I think of the five years I spent in the bush , killing people and being shot at , I feel pretty stupid , ” said the fighter .
The two females , named Owalla and Durga , were introduced to the Pilanesberg Reserve of Bophuthatswana in 1982 .
We deliberately allowed our children to see that just as they were struggling with the anxieties of youth , we were struggling with the anxieties of adults . ”
( Matthew 24 : 3-14 ; Luke 21 : 11 ) — 12 / 15 , page 11 .
She was baptized in 1982 .
A romantic attraction to someone outside the marriage could be an indication that a husband and a wife are not attentive to each other ’ s needs .
Jesus established the limit to the honor that ought to be rendered to others when he told his disciples : “ Do not you be called Rabbi , for one is your teacher , whereas all you are brothers .
“ If sin , sickness , and death were understood as nothingness , they would disappear . ” — Science and Health With Key to the Scriptures .
Create an atmosphere that will allow your child to talk about death and its meaning .
==> train.st <==
Ha Mokreste a e - shoa , leruo la hae ha e be la bohlokoa joalokaha seaparo sa Jesu e sa ka ea e - ba sa bohlokoa ho eena ha a e - shoa .
Lesole leo le re : “ Ha ke nahana ka lilemo tse hlano tseo ke li qetileng morung , ke bolaea batho ’ me le ’ na ke thunngoa , ke ikutloa ke le sethoto sa lithoto .
Litlou tse peli tse tšehali tse bitsoang Owalla le Durga li ile tsa isoa Pilanesberg Reserve ea Bophuthatswana ka 1982 .
Ka morero re ile ra lumella hore bana ba rōna ba bone hore joalokaha ba ne ba loana le matšoenyeho a bocha , le rōna re ne re loana le matšoenyeho a batho ba baholo . ”
( Matheu 24 : 3 - 14 ; Luka 21 : 11 ) — 12 / 15 , leqepheng la 11 .
O ile a kolobetsoa ka 1982 .
Ho rata motho e mong ka ntle ho molekane oa hao e ka ’ na ea e - ba pontšo ea hore monna le mosali ha ba hlokomelane .
Jesu o ile a bontša hore na tlhompho e fuoang ba bang e lokela ho fella hokae ha a re ho barutuoa ba hae : “ Le se ke la bitsoa Rabi , kaha mosuoe oa lōna o mong , athe lōna bohle le bara ba motho .
Haeba sebe , ho kula le lefu li ne li nkoa e se letho , li ne li tla fela . ” — Science and Health With Key to the Scriptures .
Etsang hore bana ba lōna ba phutholohe ha ba bua ka lefu .
==> dev.en <==
( Luke 23 : 43 ) And they will never need to die at all !
In the future , after the war of Armageddon , anointed Christians will become Christ ’ s bride .
As with most aspects of child training , example is an effective teacher .
I was pioneering in Melbourne at the time and living at the Society ’ s literature depot .
A Miracle in New York ?
Thus , they may expect to survive the most calamitous time of distress ever to strike the nations. — Daniel 12 : 1 ; Matthew 24 : 13 , 21 , 22 .
They were not to look back , but Lot ’ s wife did so , perhaps longing for the material things left behind .
Since ancient times , people observed these changes and attributed great meaning to them .
Then , discuss what your teenager would do .
9 Whatever the topics , our conversations will build others up if they adhere to the apostle Paul ’ s admonition to the congregation in Philippi .
==> dev.st <==
( Luka 23 : 43 ) ’ Me ba ke ke ba hlola ba e - shoa le ka mohla !
Nakong e tlang , ka mor’a ntoa ea Armagedone , Bakreste ba tlotsitsoeng e tla ba monyaluoa oa Kreste .
Tsela e atlehang ka ho fetisisa ea ho koetlisa bana ke ha batsoali ba ba behela mohlala o motle .
Ke ne ke bula maliboho Melbourne ka nako eo ’ me ke phela motebong oa libuka oa Mokhatlo .
Ho Etsahetse Mohlolo New York ?
Kahoo , li ka lebella ho pholoha nako e mahlonoko ka ho fetisisa ea tlokotsi e kileng ea oela lichaba . — Daniele 12 : 1 ; Mattheu 24 : 13 , 21 , 22 .
Ba ne ba sa tlameha ho hetla , empa mosali oa Lota o ile a hetla , mohlomong a laba - labela lintho tse bonahalang tse setseng morao .
Ho tloha mehleng ea boholo - holo , batho ba ne ba ithuta liphetoho tsena ebe ba re ho na le ntho e khōlō eo li e bolelang .
Mo botse hore na eena o ne a ka etsa’ng .
9 Ho sa tsotellehe litaba tseo re buang ka tsona , meqoqo ea rōna e tla haha ba bang haeba e lumellana le keletso ea moapostola Pauluse e eang phuthehong ea Filipi .
###Markdown
--- Installation of JoeyNMT JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io).
###Code
# Install JoeyNMT
! git clone https://github.com/joeynmt/joeynmt.git
! cd joeynmt; pip3 install .
# Install PyTorch with GPU (CUDA 10.1) support.
! pip install torch==1.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
###Output
Cloning into 'joeynmt'...
remote: Enumerating objects: 3224, done.[K
remote: Counting objects: 100% (273/273), done.[K
remote: Compressing objects: 100% (139/139), done.[K
remote: Total 3224 (delta 157), reused 206 (delta 134), pack-reused 2951[K
Receiving objects: 100% (3224/3224), 8.17 MiB | 15.02 MiB/s, done.
Resolving deltas: 100% (2186/2186), done.
Processing /content/joeynmt
[33m DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.[0m
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (0.16.0)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (7.1.2)
Requirement already satisfied: numpy>=1.19.5 in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (1.19.5)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (57.4.0)
Requirement already satisfied: torch>=1.9.0 in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (1.9.0+cu111)
Requirement already satisfied: tensorboard>=1.15 in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (2.6.0)
Requirement already satisfied: torchtext>=0.10.0 in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (0.10.0)
Collecting sacrebleu>=2.0.0
Downloading sacrebleu-2.0.0-py3-none-any.whl (90 kB)
[K |████████████████████████████████| 90 kB 3.8 MB/s
[?25hCollecting subword-nmt
Downloading subword_nmt-0.3.7-py2.py3-none-any.whl (26 kB)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (3.2.2)
Requirement already satisfied: seaborn in /usr/local/lib/python3.7/dist-packages (from joeynmt==1.3) (0.11.2)
Collecting pyyaml>=5.1
Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)
[K |████████████████████████████████| 636 kB 19.6 MB/s
[?25hCollecting pylint>=2.9.6
Downloading pylint-2.11.1-py3-none-any.whl (392 kB)
[K |████████████████████████████████| 392 kB 38.0 MB/s
[?25hCollecting six==1.12
Downloading six-1.12.0-py2.py3-none-any.whl (10 kB)
Collecting wrapt==1.11.1
Downloading wrapt-1.11.1.tar.gz (27 kB)
Collecting typing-extensions>=3.10.0
Downloading typing_extensions-3.10.0.2-py3-none-any.whl (26 kB)
Collecting astroid<2.9,>=2.8.0
Downloading astroid-2.8.2-py3-none-any.whl (246 kB)
[K |████████████████████████████████| 246 kB 42.7 MB/s
[?25hRequirement already satisfied: toml>=0.7.1 in /usr/local/lib/python3.7/dist-packages (from pylint>=2.9.6->joeynmt==1.3) (0.10.2)
Collecting mccabe<0.7,>=0.6
Downloading mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Collecting platformdirs>=2.2.0
Downloading platformdirs-2.4.0-py3-none-any.whl (14 kB)
Collecting isort<6,>=4.2.5
Downloading isort-5.9.3-py3-none-any.whl (106 kB)
[K |████████████████████████████████| 106 kB 51.6 MB/s
[?25hCollecting typed-ast<1.5,>=1.4.0
Downloading typed_ast-1.4.3-cp37-cp37m-manylinux1_x86_64.whl (743 kB)
[K |████████████████████████████████| 743 kB 39.4 MB/s
[?25hCollecting lazy-object-proxy>=1.4.0
Downloading lazy_object_proxy-1.6.0-cp37-cp37m-manylinux1_x86_64.whl (55 kB)
[K |████████████████████████████████| 55 kB 3.0 MB/s
[?25hCollecting portalocker
Downloading portalocker-2.3.2-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from sacrebleu>=2.0.0->joeynmt==1.3) (2019.12.20)
Collecting colorama
Downloading colorama-0.4.4-py2.py3-none-any.whl (16 kB)
Requirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.7/dist-packages (from sacrebleu>=2.0.0->joeynmt==1.3) (0.8.9)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (2.23.0)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (1.41.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (1.8.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (1.35.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (0.37.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (0.4.6)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (1.0.1)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (0.12.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (0.6.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (3.3.4)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard>=1.15->joeynmt==1.3) (3.17.3)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=1.15->joeynmt==1.3) (4.7.2)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=1.15->joeynmt==1.3) (4.2.4)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard>=1.15->joeynmt==1.3) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=1.15->joeynmt==1.3) (1.3.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard>=1.15->joeynmt==1.3) (4.8.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard>=1.15->joeynmt==1.3) (0.4.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard>=1.15->joeynmt==1.3) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard>=1.15->joeynmt==1.3) (2021.5.30)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard>=1.15->joeynmt==1.3) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard>=1.15->joeynmt==1.3) (2.10)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=1.15->joeynmt==1.3) (3.1.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from torchtext>=0.10.0->joeynmt==1.3) (4.62.3)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->markdown>=2.6.8->tensorboard>=1.15->joeynmt==1.3) (3.6.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->joeynmt==1.3) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->joeynmt==1.3) (2.8.2)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->joeynmt==1.3) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->joeynmt==1.3) (1.3.2)
Requirement already satisfied: scipy>=1.0 in /usr/local/lib/python3.7/dist-packages (from seaborn->joeynmt==1.3) (1.4.1)
Requirement already satisfied: pandas>=0.23 in /usr/local/lib/python3.7/dist-packages (from seaborn->joeynmt==1.3) (1.1.5)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.23->seaborn->joeynmt==1.3) (2018.9)
Building wheels for collected packages: joeynmt, wrapt
Building wheel for joeynmt (setup.py) ... [?25l[?25hdone
Created wheel for joeynmt: filename=joeynmt-1.3-py3-none-any.whl size=86029 sha256=9cf2ca4c26274d4054c39fdc30dfaf0ba310c6316021ec2f7424258690d7aec8
Stored in directory: /tmp/pip-ephem-wheel-cache-gciht89z/wheels/0a/f4/bf/6c9d3b8efbfece6cd209f865be37382b02e7c3584df2e28ca4
Building wheel for wrapt (setup.py) ... [?25l[?25hdone
Created wheel for wrapt: filename=wrapt-1.11.1-cp37-cp37m-linux_x86_64.whl size=68437 sha256=2abf7cfa0208c10d77e60b78de85b364d802f8b1038841523467211057b0ac37
Stored in directory: /root/.cache/pip/wheels/4e/58/9d/da8bad4545585ca52311498ff677647c95c7b690b3040171f8
Successfully built joeynmt wrapt
Installing collected packages: typing-extensions, six, wrapt, typed-ast, lazy-object-proxy, portalocker, platformdirs, mccabe, isort, colorama, astroid, subword-nmt, sacrebleu, pyyaml, pylint, joeynmt
Attempting uninstall: typing-extensions
Found existing installation: typing-extensions 3.7.4.3
Uninstalling typing-extensions-3.7.4.3:
Successfully uninstalled typing-extensions-3.7.4.3
Attempting uninstall: six
Found existing installation: six 1.15.0
Uninstalling six-1.15.0:
Successfully uninstalled six-1.15.0
Attempting uninstall: wrapt
Found existing installation: wrapt 1.12.1
Uninstalling wrapt-1.12.1:
Successfully uninstalled wrapt-1.12.1
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.6.0 requires six~=1.15.0, but you have six 1.12.0 which is incompatible.
tensorflow 2.6.0 requires typing-extensions~=3.7.4, but you have typing-extensions 3.10.0.2 which is incompatible.
tensorflow 2.6.0 requires wrapt~=1.12.1, but you have wrapt 1.11.1 which is incompatible.
google-colab 1.0.0 requires six~=1.15.0, but you have six 1.12.0 which is incompatible.
google-api-python-client 1.12.8 requires six<2dev,>=1.13.0, but you have six 1.12.0 which is incompatible.
google-api-core 1.26.3 requires six>=1.13.0, but you have six 1.12.0 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.[0m
Successfully installed astroid-2.8.2 colorama-0.4.4 isort-5.9.3 joeynmt-1.3 lazy-object-proxy-1.6.0 mccabe-0.6.1 platformdirs-2.4.0 portalocker-2.3.2 pylint-2.11.1 pyyaml-5.4.1 sacrebleu-2.0.0 six-1.12.0 subword-nmt-0.3.7 typed-ast-1.4.3 typing-extensions-3.10.0.2 wrapt-1.11.1
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.8.0+cu101
Downloading https://download.pytorch.org/whl/cu101/torch-1.8.0%2Bcu101-cp37-cp37m-linux_x86_64.whl (763.5 MB)
[K |████████████████████████████████| 763.5 MB 14 kB/s
[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.8.0+cu101) (3.10.0.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch==1.8.0+cu101) (1.19.5)
Installing collected packages: torch
Attempting uninstall: torch
Found existing installation: torch 1.9.0+cu111
Uninstalling torch-1.9.0+cu111:
Successfully uninstalled torch-1.9.0+cu111
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.10.0+cu111 requires torch==1.9.0, but you have torch 1.8.0+cu101 which is incompatible.
torchtext 0.10.0 requires torch==1.9.0, but you have torch 1.8.0+cu101 which is incompatible.
joeynmt 1.3 requires torch>=1.9.0, but you have torch 1.8.0+cu101 which is incompatible.[0m
Successfully installed torch-1.8.0+cu101
###Markdown
Preprocessing the Data into Subword BPE Tokens - One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [(Sennrich, 2015)](https://arxiv.org/abs/1508.07909). - It was also shown that by optimizing the number of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685). - Below we have the scripts for doing BPE tokenization of our data. We use 4000 tokens as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything; simply running the cells below will be suitable.
###Code
# One of the huge boosts in NMT performance was to use a different method of tokenizing.
# Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance
# Do subword NMT
from os import path
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
# Learn BPEs on the training data.
os.environ["data_path"] = path.join("joeynmt", "data",target_language + source_language ) # Herman!
! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt
# Apply BPE splits to the development and test data.
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src
! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt
# Create directory, move everyone we care about to the correct location
! mkdir -p $data_path
! cp train.* $data_path
! cp test.* $data_path
! cp dev.* $data_path
! cp bpe.codes.4000 $data_path
! ls $data_path
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.4000 "$gdrive_path"
! ls "$gdrive_path"
# Create that vocab using build_vocab
! sudo chmod 777 joeynmt/scripts/build_vocab.py
! joeynmt/scripts/build_vocab.py joeynmt/data/$tgt$src/train.bpe.$src joeynmt/data/$tgt$src/train.bpe.$tgt --output_path joeynmt/data/$tgt$src/vocab.txt
# Some output
! echo "BPE Sesotho Sentences"
! tail -n 5 test.bpe.$tgt
! echo "Combined BPE Vocab"
! tail -n 10 joeynmt/data/$tgt$src/vocab.txt # Herman
# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path
! cp train.* "$gdrive_path"
! cp test.* "$gdrive_path"
! cp dev.* "$gdrive_path"
! cp bpe.codes.4000 "$gdrive_path"
! ls "$gdrive_path"
###Output
bpe.codes.4000 dev.en test.bpe.st test.st train.en
dev.bpe.en dev.st test.en train.bpe.en train.st
dev.bpe.st test.bpe.en test.en-any.en train.bpe.st
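###Markdown
If you later need to segment raw text with the same merges inside Python (for example at translation time), subword-nmt also exposes a small API. A sketch, assuming the `bpe.codes.4000` file created above is in the working directory:
###Code
from subword_nmt.apply_bpe import BPE

with open("bpe.codes.4000", encoding="utf-8") as codes:
    bpe = BPE(codes)

# Segment one raw sentence the same way the shell commands segmented the corpus files.
print(bpe.process_line("Ke ne ke bula maliboho Melbourne ka nako eo ."))
###Output
_____no_output_____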
###Markdown
Creating the JoeyNMT Config JoeyNMT requires a yaml config. We provide a template below. We've also set a number of defaults with it that you may play with! - We used the Transformer architecture. - We set our dropout reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021)). Things worth playing with: - The batch size (also recommended to change for low-resourced languages) - The number of epochs (we've set it at 30 just so it runs in about an hour, for testing purposes) - The decoder options (beam_size, alpha) - Evaluation metrics (BLEU versus chrF)
###Code
# This creates the config file for our JoeyNMT system. It might seem overwhelming so we've provided a couple of useful parameters you'll need to update
# (You can of course play with all the parameters if you'd like!)
name = '%s%s' % (target_language, source_language)
# gdrive_path = os.environ["gdrive_path"]
# Create the config
config = """
name: "{target_language}{source_language}_reverse_transformer"
data:
src: "{target_language}"
trg: "{source_language}"
train: "data/{name}/train.bpe"
dev: "data/{name}/dev.bpe"
test: "data/{name}/test.bpe"
level: "bpe"
lowercase: False
max_sent_length: 100
src_vocab: "data/{name}/vocab.txt"
trg_vocab: "data/{name}/vocab.txt"
testing:
beam_size: 5
alpha: 1.0
training:
#load_model: "{gdrive_path}/models/{name}_transformer/1.ckpt" # if uncommented, load a pre-trained model from this checkpoint
random_seed: 42
optimizer: "adam"
normalization: "tokens"
adam_betas: [0.9, 0.999]
scheduling: "Noam scheduling" # TODO: try switching from plateau to Noam scheduling. plateau to Noam scheduling
patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.
learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)
learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)
decrease_factor: 0.7
loss: "crossentropy"
learning_rate: 0.0003
learning_rate_min: 0.00000001
weight_decay: 0.0
label_smoothing: 0.1
batch_size: 4096
batch_type: "token"
eval_batch_size: 3600
eval_batch_type: "token"
batch_multiplier: 1
early_stopping_metric: "ppl"
epochs: 3 # TODO: keep small when just playing around and checking that it works; around 30 is sufficient to see whether it is learning at all
validation_freq: 1000 # TODO: Set to at least once per epoch.
logging_freq: 100
eval_metric: "bleu"
model_dir: "models/{name}_reverse_transformer"
overwrite: True # TODO: Set to True if you want to overwrite possibly existing models.
shuffle: True
use_cuda: True
max_output_length: 100
print_valid_sents: [0, 1, 2, 3]
keep_last_ckpts: 3
model:
initializer: "xavier"
bias_initializer: "zeros"
init_gain: 1.0
embed_initializer: "xavier"
embed_init_gain: 1.0
tied_embeddings: True
tied_softmax: True
encoder:
type: "transformer"
num_layers: 6
num_heads: 8 # TODO: Increase to 8 for larger data. 4 - 8
embeddings:
embedding_dim: 512 # TODO: Increase to 512 for larger data. 256 -512
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 512 # TODO: Increase to 512 for larger data. 256 - 512
ff_size: 2048 # TODO: Increase to 2048 for larger data. 1024 - 2048
dropout: 0.3
decoder:
type: "transformer"
num_layers: 6
num_heads: 8 # TODO: Increase to 8 for larger data. 4 - 8
embeddings:
embedding_dim: 512 # TODO: Increase to 512 for larger data. 256 - 512
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 512 # TODO: Increase to 512 for larger data. 256 - 512
ff_size: 2048 # TODO: Increase to 2048 for larger data. 1024 - 2048
dropout: 0.3
""".format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language)
with open("joeynmt/configs/transformer_reverse_{name}.yaml".format(name=name),'w') as f:
f.write(config)
###Output
_____no_output_____
###Markdown
Train the Model This single line of joeynmt runs the training using the config we made above.
###Code
# Train the model
# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints!
!cd joeynmt; python3 -m joeynmt train configs/transformer_reverse_$tgt$src.yaml
# Copy the created models from the notebook storage to google drive for persistant storage
!cp -r joeynmt/models/${tgt}${src}_reverse_transformer/* "$gdrive_path/models/${tgt}${src}_reverse_transformer/"
# Output our validation accuracy
! cat "$gdrive_path/models/${tgt}${src}_reverse_transformer/validations.txt"
# Test our model
! cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${tgt}${src}_reverse_transformer/config.yaml"
###Output
_____no_output_____ |
nbs/55_cosine_search.ipynb | ###Markdown
Cosine
###Code
# default_exp cosine
# export
import numpy as np
import pandas as pd
from forgebox.category import Category
###Output
_____no_output_____
###Markdown
Cosine Similarity
###Code
# export
class CosineSearch:
"""
Build an index search on cosine distance
cos = CosineSearch(base_array)
idx_order = cos(vec)
"""
def __init__(self, base):
assert len(base.shape) == 2,\
f"Base array has to be 2 dimentional, input is {len(base.shape)}"
self.base = base
self.base_norm = self.calc_base_norm(self.base)
self.normed_base = self.base/self.base_norm[:, None]
self.dim = self.base.shape[1]
def __len__(self): return self.base.shape[0]
@staticmethod
def calc_base_norm(base: np.ndarray) -> np.ndarray:
return np.sqrt(np.power(base, 2).sum(1))
def search(self, vec: np.ndarray, return_similarity: bool = False):
if return_similarity:
similarity = (vec * self.normed_base /
              np.sqrt(np.power(vec, 2).sum())).sum(1)
order = similarity.argsort()[::-1]
return order, similarity[order]
return self(vec)
def __call__(self, vec: np.ndarray) -> np.ndarray:
"""
Return the order index of the closest vector to the furthest
vec: an 1 dimentional vector
"""
return (vec * self.normed_base).sum(1).argsort()[::-1]
class CosineSearchWithCategory(CosineSearch):
"""
Combine with the category manager
The class can return a dataframe with category information
search_dataframe
"""
def __init__(self, base: np.ndarray, category: np.ndarray):
super().__init__(base)
self.category = category
assert len(self.category) >= len(self), "category number too small"
def search_dataframe(
self, vec, return_similarity=True
) -> pd.DataFrame:
"""
return a dataframe from the closest
category to the furthest
"""
if return_similarity:
idx, similarity = self.search(vec, return_similarity)
return pd.DataFrame({
"category": self.category.i2c[idx],
"idx": idx,
"similarity": similarity})
idx = self.search(vec, return_similarity)
return pd.DataFrame({
"category": self.category.i2c[idx],
"idx": idx})
###Output
_____no_output_____
###Markdown
Test search
###Code
base = np.random.rand(50000,100)-.2
vec = base[200]
cosine = CosineSearch(base)
cosine(vec)
cosine.search(vec, return_similarity=True)
# cos_cat = CosineSearchWithCategory(base, Category(list(f"c{i}" for i in range(len(base)))))
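# Minimal sketch of CosineSearchWithCategory on a small, made-up base (the labels and
# array sizes here are illustrative only; it assumes the forgebox Category object maps
# indices back to labels via .i2c, exactly as search_dataframe above relies on)
small_base = np.random.rand(100, 16)
small_cats = Category(list(f"cat_{i}" for i in range(len(small_base))))
cos_cat = CosineSearchWithCategory(small_base, small_cats)
cos_cat.search_dataframe(small_base[3]).head()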
%%time
for i in range(100):
cosine(vec)
###Output
CPU times: user 1.21 s, sys: 147 ms, total: 1.36 s
Wall time: 1.37 s
|
DeepFake-Xception.ipynb | ###Markdown
Dir
###Code
train_dir = '/mnt/a/fakedata/deepfake/train'
validation_dir = '/mnt/a/fakedata/deepfake/val'
test50_dir = '/mnt/a/fakedata/deepfake/test'
###Output
_____no_output_____
###Markdown
Xception
###Code
img_input = Input(shape=(img_height, img_width, 3))
# layer 1 #
x = Conv2D(filters=32, kernel_size=(3, 3), strides=2, padding='valid', use_bias=False)(img_input)
x = BatchNormalization()(x)
x = Activation('relu')(x)
# layer 2 #
x = Conv2D(filters=64, kernel_size=(3, 3), padding='valid', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
# skip layer 1 #
res = Conv2D(filters=128, kernel_size=(1, 1), strides=2, padding='same', use_bias=False)(x)
res = BatchNormalization()(res)
# layer 3 #
x = SeparableConv2D(filters=128, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
# layer 4 #
x = Activation('relu')(x)
x = SeparableConv2D(filters=128, kernel_size=(3,3), strides=1, padding='same', use_bias=False)(x)
x = MaxPooling2D(pool_size=(3, 3), strides=2, padding='same')(x)
x = Add()([x, res])
# skip layer 2 #
res = Conv2D(filters=256, kernel_size=(1, 1), strides=2, padding='same', use_bias=False)(x)
res = BatchNormalization()(res)
# layer 5 #
x = Activation('relu')(x)
x = SeparableConv2D(filters=256, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
# layer 6 #
x = Activation('relu')(x)
x = SeparableConv2D(filters=256, kernel_size=(3,3), strides=1, padding='same', use_bias=False)(x)
x = MaxPooling2D(pool_size=(3, 3), strides=2, padding='same')(x)
x = Add()([x, res])
# skip layer 3 #
res = Conv2D(filters=728, kernel_size=(1, 1), strides=2, padding='same', use_bias=False)(x)
res = BatchNormalization()(res)
# layer 7 #
x = Activation('relu')(x)
x = SeparableConv2D(filters=728, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
# layer 8 #
x = Activation('relu')(x)
x = SeparableConv2D(filters=728, kernel_size=(3,3), strides=1, padding='same', use_bias=False)(x)
x = MaxPooling2D(pool_size=(3, 3), strides=2, padding='same')(x)
x = Add()([x, res])
# ======== middle flow ========= #
for i in range(8):
# layer 9, 10, 11, 12, 13, 14, 15, 16, 17 #
res = x
x = Activation('relu')(x)
x = SeparableConv2D(filters=728, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(filters=728, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(filters=728, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Add()([x, res])
# ======== exit flow ========== #
# skip layer 4 #
res = Conv2D(filters=1024, kernel_size=(1, 1), strides=2, padding='same', use_bias=False)(x)
res = BatchNormalization()(res)
# layer 18 #
x = Activation('relu')(x)
x = SeparableConv2D(filters=728, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
# layer 19 #
x = Activation('relu')(x)
x = SeparableConv2D(filters=1024, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(3, 3), strides=2, padding='same')(x)
x = Add()([x, res])
# layer 20 #
x = SeparableConv2D(filters=1536, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
# layer 21 #
x = SeparableConv2D(filters=2048, kernel_size=(3, 3), strides=1, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x_gap = GlobalAveragePooling2D()(x)
output = Dense(units=2, activation='softmax')(x_gap)
model = Model(img_input, output)
model.summary()
model.compile(optimizer=Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
print(len(model.trainable_weights))
def bgr(img):
return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
###Output
_____no_output_____
###Markdown
Data generator
###Code
train_datagen = ImageDataGenerator(rescale=1./255,
preprocessing_function=bgr)
test_datagen = ImageDataGenerator(rescale=1./255,
preprocessing_function=bgr)
train_generator = train_datagen.flow_from_directory(train_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
shuffle=True,
class_mode='categorical')
validation_generator = train_datagen.flow_from_directory(validation_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
test50_generator = test_datagen.flow_from_directory(test50_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
# callback_list = [EarlyStopping(monitor='val_accuracy', patience=10),
# ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3)]
# history = model.fit_generator(train_generator,
# steps_per_epoch=200,
# epochs=100,
# validation_data=validation_generator,
# validation_steps=len(validation_generator),
# callbacks=callback_list)
# model.save('/home/www/fake_detection/model/deepfake_xception.h5')
# model = load_model('/home/www/fake_detection/model/deepfake_xception.h5')
# output = model.predict_generator(test50_generator, steps=len(test50_generator), verbose=1)
# np.set_printoptions(formatter={'float': lambda x: "{0:0.3f}".format(x)})
# print(test50_generator.class_indices)
# print(output)
# output_score50 = []
# output_class50 = []
# answer_class50 = []
# answer_class50_1 =[]
# for i in trange(len(test50_generator)):
# output50 = model.predict_on_batch(test50_generator[i][0])
# output_score50.append(output50)
# answer_class50.append(test50_generator[i][1])
# output_score50 = np.concatenate(output_score50)
# answer_class50 = np.concatenate(answer_class50)
# output_class50 = np.argmax(output_score50, axis=1)
# answer_class50_1 = np.argmax(answer_class50, axis=1)
# print(output_class50)
# print(answer_class50_1)
# cm50 = confusion_matrix(answer_class50_1, output_class50)
# report50 = classification_report(answer_class50_1, output_class50)
# recall50 = cm50[0][0] / (cm50[0][0] + cm50[0][1])
# fallout50 = cm50[1][0] / (cm50[1][0] + cm50[1][1])
# fpr50, tpr50, thresholds50 = roc_curve(answer_class50_1, output_score50[:, 1], pos_label=1.)
# eer50 = brentq(lambda x : 1. - x - interp1d(fpr50, tpr50)(x), 0., 1.)
# thresh50 = interp1d(fpr50, thresholds50)(eer50)
# print(report50)
# print(cm50)
# print("AUROC: %f" %(roc_auc_score(answer_class50_1, output_score50[:, 1])))
# print(thresh50)
# print('test_acc: ', len(output_class50[np.equal(output_class50, answer_class50_1)]) / len(output_class50))
def cutout(img):
"""
    # Function: RandomCrop (ZeroPadded (4, 4)) + random occlusion image
# Arguments:
img: image
# Returns:
img
"""
img = bgr(img)
height = img.shape[0]
width = img.shape[1]
channels = img.shape[2]
MAX_CUTS = 3 # chance to get more cuts
MAX_LENGTH_MUTIPLIER = 10 # chance to get larger cuts
# 16 for cifar10, 8 for cifar100
# Zero-padded (4, 4)
# img = np.pad(img, ((4,4),(4,4),(0,0)), mode='constant', constant_values=(0))
# # random-crop 64x64
# dy, dx = height, width
# x = np.random.randint(0, width - dx + 1)
# y = np.random.randint(0, height - dy + 1)
# img = img[y:(y+dy), x:(x+dx)]
# mean norm
# mean = img.mean(keepdims=True)
# img -= mean
img *= 1./255
mask = np.ones((height, width, channels), dtype=np.float32)
nb_cuts = np.random.randint(0, MAX_CUTS + 1)
# cutout
for i in range(nb_cuts):
y = np.random.randint(height)
x = np.random.randint(width)
length = 4 * np.random.randint(1, MAX_LENGTH_MUTIPLIER+1)
y1 = np.clip(y-length//2, 0, height)
y2 = np.clip(y+length//2, 0, height)
x1 = np.clip(x-length//2, 0, width)
x2 = np.clip(x+length//2, 0, width)
mask[y1:y2, x1:x2, :] = 0.
img = img * mask
return img
class ReLU6(Layer):
def __init__(self):
super().__init__(name="ReLU6")
self.relu6 = ReLU(max_value=6, name="ReLU6")
def call(self, input):
return self.relu6(input)
class HardSigmoid(Layer):
def __init__(self):
super().__init__()
self.relu6 = ReLU6()
def call(self, input):
return self.relu6(input + 3.0) / 6.0
class HardSwish(Layer):
def __init__(self):
super().__init__()
self.hard_sigmoid = HardSigmoid()
def call(self, input):
return input * self.hard_sigmoid(input)
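# ReLU6 clamps activations to [0, 6]; HardSigmoid(x) = ReLU6(x + 3) / 6 is a piecewise-linear
# approximation of the sigmoid, and HardSwish(x) = x * HardSigmoid(x) approximates the swish
# activation (as used in MobileNetV3), e.g. it maps -3 -> 0, 0 -> 0 and 3 -> 3.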
class Attention(Layer):
def __init__(self, ch, **kwargs):
super(Attention, self).__init__(**kwargs)
self.channels = ch
self.filters_f_g = self.channels // 8
self.filters_h = self.channels
def build(self, input_shape):
kernel_shape_f_g = (1, 1) + (self.channels, self.filters_f_g)
print(kernel_shape_f_g)
kernel_shape_h = (1, 1) + (self.channels, self.filters_h)
# Create a trainable weight variable for this layer:
self.gamma = self.add_weight(name='gamma', shape=[1], initializer='zeros', trainable=True)
self.kernel_f = self.add_weight(shape=kernel_shape_f_g,
initializer='glorot_uniform',
name='kernel_f')
self.kernel_g = self.add_weight(shape=kernel_shape_f_g,
initializer='glorot_uniform',
name='kernel_g')
self.kernel_h = self.add_weight(shape=kernel_shape_h,
initializer='glorot_uniform',
name='kernel_h')
self.bias_f = self.add_weight(shape=(self.filters_f_g,),
initializer='zeros',
name='bias_F')
self.bias_g = self.add_weight(shape=(self.filters_f_g,),
initializer='zeros',
name='bias_g')
self.bias_h = self.add_weight(shape=(self.filters_h,),
initializer='zeros',
name='bias_h')
super(Attention, self).build(input_shape)
# Set input spec.
self.input_spec = InputSpec(ndim=4,
axes={3: input_shape[-1]})
self.built = True
def call(self, x):
def hw_flatten(x):
return K.reshape(x, shape=[K.shape(x)[0], K.shape(x)[1]*K.shape(x)[2], K.shape(x)[-1]])
f = K.conv2d(x,
kernel=self.kernel_f,
strides=(1, 1), padding='same') # [bs, h, w, c']
f = K.bias_add(f, self.bias_f)
g = K.conv2d(x,
kernel=self.kernel_g,
strides=(1, 1), padding='same') # [bs, h, w, c']
g = K.bias_add(g, self.bias_g)
h = K.conv2d(x,
kernel=self.kernel_h,
strides=(1, 1), padding='same') # [bs, h, w, c]
h = K.bias_add(h, self.bias_h)
s = tf.matmul(hw_flatten(g), hw_flatten(f), transpose_b=True) # # [bs, N, N]
beta = K.softmax(s, axis=-1) # attention map
o = K.batch_dot(beta, hw_flatten(h)) # [bs, N, C]
o = K.reshape(o, shape=K.shape(x)) # [bs, h, w, C]
x = self.gamma * o + x
return x
def compute_output_shape(self, input_shape):
return input_shape
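# The Attention layer above is SAGAN-style self-attention: the 1x1 convolutions f and g
# produce query/key maps at channels//8 filters, h produces the value map at full width,
# softmax(g . f^T) over the flattened spatial positions gives the attention map beta, and
# the attended features o are blended back into the input through the learned scalar gamma.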
ft_dir = '/mnt/a/fakedata/deepfake/finetune'
train_gen_aug = ImageDataGenerator(shear_range=0,
zoom_range=0,
rotation_range=0.2,
width_shift_range=2.,
height_shift_range=2.,
horizontal_flip=True,
zca_whitening=False,
fill_mode='nearest',
preprocessing_function=cutout)
test_datagen = ImageDataGenerator(rescale=1./255, preprocessing_function=bgr)
ft_gen = train_gen_aug.flow_from_directory(ft_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
shuffle=True,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(validation_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
test50_generator = test_datagen.flow_from_directory(test50_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
model_ft = load_model('/home/www/fake_detection/model/deepfake_xception.h5')
for i in range(2):
model_ft.layers.pop()
im_in = Input(shape=(img_width, img_height, 3))
base_model = Model(img_input, x)
base_model.set_weights(model_ft.get_weights())
# for i in range(len(base_model.layers) - 0):
# base_model.layers[i].trainable = False
x1 = base_model(im_in) # (12, 12, 32)
########### Mobilenet block bneck 3x3 (32 --> 128) #################
expand1 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(x1)
expand1 = BatchNormalization()(expand1)
expand1 = HardSwish()(expand1)
dw1 = DepthwiseConv2D(kernel_size=(3,3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand1)
dw1 = BatchNormalization()(dw1)
se_gap1 = GlobalAveragePooling2D()(dw1)
se_gap1 = Reshape([1, 1, -1])(se_gap1)
se1 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap1)
se1 = Activation('relu')(se1)
se1 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se1)
se1 = HardSigmoid()(se1)
se1 = Multiply()([expand1, se1])
project1 = HardSwish()(se1)
project1 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project1)
project1 = BatchNormalization()(project1)
########### Mobilenet block bneck 5x5 (128 --> 128) #################
expand2 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(project1)
expand2 = BatchNormalization()(expand2)
expand2 = HardSwish()(expand2)
dw2 = DepthwiseConv2D(kernel_size=(5,5), strides=(1,1), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand2)
dw2 = BatchNormalization()(dw2)
se_gap2 = GlobalAveragePooling2D()(dw2)
se_gap2 = Reshape([1, 1, -1])(se_gap2)
se2 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap2)
se2 = Activation('relu')(se2)
se2 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se2)
se2 = HardSigmoid()(se2)
se2 = Multiply()([expand2, se2])
project2 = HardSwish()(se2)
project2 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project2)
project2 = BatchNormalization()(project2)
project2 = Add()([project1, project2])
########### Mobilenet block bneck 5x5 (128 --> 128) #################
expand3 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(project2)
expand3 = BatchNormalization()(expand3)
expand3 = HardSwish()(expand3)
dw3 = DepthwiseConv2D(kernel_size=(5,5), strides=(1,1), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand3)
dw3 = BatchNormalization()(dw3)
se_gap3 = GlobalAveragePooling2D()(dw3)
se_gap3 = Reshape([1, 1, -1])(se_gap3)
se3 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap3)
se3 = Activation('relu')(se3)
se3 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se3)
se3 = HardSigmoid()(se3)
se3 = Multiply()([expand3, se3])
project3 = HardSwish()(se3)
project3 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project3)
project3 = BatchNormalization()(project3)
project3 = Add()([project2, project3])
expand4 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(project3)
expand4 = BatchNormalization()(expand4)
expand4 = HardSwish()(expand4)
dw4 = DepthwiseConv2D(kernel_size=(5,5), strides=(1,1), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand4)
dw4 = BatchNormalization()(dw4)
se_gap4 = GlobalAveragePooling2D()(dw4)
se_gap4 = Reshape([1, 1, -1])(se_gap4)
se4 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap4)
se4 = Activation('relu')(se4)
se4 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se4)
se4 = HardSigmoid()(se4)
se4 = Multiply()([expand4, se4])
project4 = HardSwish()(se4)
project4 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project4)
project4 = BatchNormalization()(project4)
project4 = Add()([project3, project4])
########## Classification ##########
x2 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project4)
x2 = BatchNormalization()(x2)
x2 = HardSwish()(x2)
x2 = GlobalAveragePooling2D()(x2)
######### Image Attention Model #########
### Block 1 ###
x3 = SeparableConv2D(32, kernel_size=(3, 3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), pointwise_regularizer=l2(1e-5), use_bias=False)(im_in)
x3 = BatchNormalization()(x3)
x3 = Activation('relu')(x3)
x3 = Attention(32)(x3)
### Block 2 ###
x4 = SeparableConv2D(64, kernel_size=(3, 3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), pointwise_regularizer=l2(1e-5), use_bias=False)(x3)
x4 = BatchNormalization()(x4)
x4 = Activation('relu')(x4)
x4 = Attention(64)(x4)
### Block 3 ###
x5 = SeparableConv2D(128, kernel_size=(3, 3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), pointwise_regularizer=l2(1e-5), use_bias=False)(x4)
x5 = BatchNormalization()(x5)
x5 = Activation('relu')(x5)
x5 = Attention(128)(x5)
### final stage ###
x6 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(x5)
x6 = BatchNormalization()(x6)
x6 = Activation('relu')(x6)
x6 = GlobalAveragePooling2D()(x6)
######## final addition #########
x2 = Add()([x2, x6])
x2 = Dense(2, kernel_regularizer=l2(1e-5))(x2)
x2 = Activation('softmax')(x2)
model_top = Model(inputs=im_in, outputs=x2)
model_top.summary()
# optimizer = SGD(lr=1e-3, momentum=0.9, nesterov=True)
optimizer = Adam()
model_top.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['acc'])
callback_list = [EarlyStopping(monitor='val_acc', patience=30),
ReduceLROnPlateau(monitor='loss', factor=np.sqrt(0.5), cooldown=0, patience=5, min_lr=0.5e-5)]
output = model_top.fit_generator(ft_gen, steps_per_epoch=200, epochs=300,
validation_data=validation_generator, validation_steps=len(validation_generator), callbacks=callback_list)
output_score50 = []
output_class50 = []
answer_class50 = []
answer_class50_1 =[]
for i in trange(len(test50_generator)):
output50 = model_top.predict_on_batch(test50_generator[i][0])
output_score50.append(output50)
answer_class50.append(test50_generator[i][1])
output_score50 = np.concatenate(output_score50)
answer_class50 = np.concatenate(answer_class50)
output_class50 = np.argmax(output_score50, axis=1)
answer_class50_1 = np.argmax(answer_class50, axis=1)
print(output_class50)
print(answer_class50_1)
cm50 = confusion_matrix(answer_class50_1, output_class50)
report50 = classification_report(answer_class50_1, output_class50)
recall50 = cm50[0][0] / (cm50[0][0] + cm50[0][1])
fallout50 = cm50[1][0] / (cm50[1][0] + cm50[1][1])
fpr50, tpr50, thresholds50 = roc_curve(answer_class50_1, output_score50[:, 1], pos_label=1.)
eer50 = brentq(lambda x : 1. - x - interp1d(fpr50, tpr50)(x), 0., 1.)
thresh50 = interp1d(fpr50, thresholds50)(eer50)
print(report50)
print(cm50)
print("AUROC: %f" %(roc_auc_score(answer_class50_1, output_score50[:, 1])))
print(thresh50)
print('test_acc: ', len(output_class50[np.equal(output_class50, answer_class50_1)]) / len(output_class50))
model_top.save("/home/www/fake_detection/model/deepfake_xception_ft1.h5")
###Output
_____no_output_____ |
Magnetic Pickup/ReadScopeData.ipynb | ###Markdown
Read data from Rigol DS1054Z scope https://readthedocs.org/projects/ds1054z/downloads/pdf/stable/ Import the libraries
###Code
from ds1054z import DS1054Z
import matplotlib.pyplot as plt
import numpy as np
import math
import scipy.io as sio
import scipy.signal as sig
from scipy.fft import rfft, rfftfreq
import pyvisa as visa
import time
import os
import shutil
###Output
_____no_output_____
###Markdown
Define plot mode. Interactive mode is helpful for visualizing the program execution
###Code
#%matplotlib widget
###Output
_____no_output_____
###Markdown
Verify scope connection
###Code
scope = DS1054Z('192.168.1.206')
print(scope.idn)
###Output
RIGOL TECHNOLOGIES,DS1054Z,DS1ZA200902668,00.04.04.SP3
###Markdown
Test Description This sheet is designed to take data on a Rigol DS1054z oscilloscope with channel 1 of the scope connected to an MSP6729 magnetic pick up viewing the chuck of a 7" x 10" mini lathe. Channel 1 common of the oscilloscope is connected to the magnetic pick up shield wire and the black wire. The signal of channel 1 is connected to the red wire of the mag pick up.  Define functions used in the test This is the function that sets the trigger level
###Code
def b_set_trigger(d_trigger_level = 1e-01):
"""Set the trigger configuration
Keyword arguments:
d_trigger_level -- Voltage level to trigger scope (default: 0.1 volts)
Return values:
[None]
"""
scope.write(':trigger:edge:source CHAN1')
scope.write(':trigger:edge:level ' + format(d_trigger_level))
scope.single()
###Output
_____no_output_____
###Markdown
Function that contains the commands that setup the scope
###Code
def b_setup_scope(scope, d_ch1_scale=5.e-1, timebase_scale=5e-2,
d_trigger_level = 1e-01, b_single = True):
"""Setup Rigol ds1054z to read a 3/8-24 magnetic pickup
Keyword arguments:
scope -- Connection to scope
d_ch1_scale -- Channel 1 scale (default: 0.5 volts)
        timebase_scale -- Time scale for data (default: 0.05 seconds)
        d_trigger_level -- Voltage level to trigger scope (default: 0.1 volts)
        b_single -- If True, arm the scope for a single triggered acquisition (default: True)
Return values:
d_ch1_scale_actual -- The closest value chosen by the scope
"""
scope.timebase_scale = timebase_scale
scope.run()
scope.display_channel(1,enable=True)
scope.set_probe_ratio(1,1)
scope.set_channel_scale(1,"{:e}".format(d_ch1_scale) +'V')
scope.write(':CHANnel1:COUPling AC')
scope.display_channel(2,enable=False)
scope.display_channel(3,enable=False)
scope.display_channel(4,enable=False)
# Do we need a trigger?
if b_single:
# Set the scope to capture after trigger
b_set_trigger(d_trigger_level)
else:
# No trigger, useful for seeing the scope data when you aren't sure
# what the signal looks like
scope.write(":TRIGger:SWEep AUTO")
return scope.get_channel_scale(1)
###Output
_____no_output_____
###Markdown
Verify the help comments are at least somewhat on point
###Code
help(b_setup_scope)
###Output
Help on function b_setup_scope in module __main__:
b_setup_scope(scope, d_ch1_scale=0.5, timebase_scale=0.05, d_trigger_level=0.1, b_single=True)
Setup Rigol ds1054z to read a 3/8-24 magnetic pickup
Keyword arguments:
scope -- Connection to scope
d_ch1_scale -- Channel 1 scale (default: 0.5 volts)
timebase_scale -- Time scale for data (default: 0.005 seconds)
d_trigger_level -- Voltage level to trigger scope (default: 0.1 volts)
b_trigger -- If true, then use trigger levels (default: True)
Return values:
d_ch1_scale_actual -- The closest value chosen by the scope
###Markdown
Define the function that acquires data from scopeThis one is a little tricky because it can take time to acquire the signal so there are pause statements to allow data to accumulate at the scope. If the acquisition terminates before the sampling is complete there will be NaN's in the list. In this case the NaN's are converted to zeros to allow processing to continue. It can be helpful to see a partial waveform to troubleshoot timing at the scope.
###Code
def d_get_data(i_ch=1, timebase_scale=5e-2):
"""Get data from the scope
Keyword arguments:
i_ch -- 1-based index of channel to sample (default: 1)
Return values:
np_d_ch1 -- numpy array of values from the scope
"""
# Calculate the delay time
d_time_delay = timebase_scale*32 + 1.
# Acquire the data
time.sleep(d_time_delay)
d_ch1 = scope.get_waveform_samples(i_ch, mode='NORM')
time.sleep(d_time_delay)
scope.run()
# Convert the list to a numpy array and replace NaN's with zeros
np_d_ch1 = np.array(d_ch1)
np_d_ch1 = np.nan_to_num(np_d_ch1)
return np.array(np_d_ch1)
###Output
_____no_output_____
###Markdown
Verify the help text
###Code
help(d_get_data)
###Output
Help on function d_get_data in module __main__:
d_get_data(i_ch=1, timebase_scale=0.05)
Get data from the scope
Keyword arguments:
i_ch -- 1-based index of channel to sample (default: 1)
Return values:
np_d_ch1 -- numpy array of values from the scope
###Markdown
Define the function that extracts features from the data
###Code
class cl_sig_features:
"""Class to manage signal features on scope data
Example usage:
    cl_test = cl_sig_features(np.array([1.,2., 3.]),1.0)
Should produce:
print('np_d_ch1: '+ np.array2string(cl_test.np_d_ch1))
print('timebase_scale: ' + '%0.3f' % cl_test.timebase_scale)
print('i_ns: ' + '%3.f' % cl_test.i_ns)
print('d_t_del: ' + '%0.3f' % cl_test.d_t_del)
print('d_time' + np.array2string(cl_test.d_time))
np_d_ch1: [1. 2. 3.]
timebase_scale: 1.000
i_ns: 3
d_t_del: 4.000
d_time[0. 4. 8.]
"""
def __init__(self, np_d_ch1, timebase_scale):
self.__np_d_ch1 = np_d_ch1
self.__timebase_scale = float(timebase_scale)
self.__np_d_rpm = np.zeros_like(self.np_d_ch1)
self.__d_thresh = np.NaN
self.__d_events_per_rev = np.NaN
@property
def np_d_ch1(self):
"""Numpy array containing the scope data"""
return self.__np_d_ch1
@property
def timebase_scale(self):
"""Scope time scale"""
return self.__timebase_scale
@property
def i_ns(self):
"""Number of samples in the scope data"""
self.__i_ns = len(self.__np_d_ch1)
return self.__i_ns
@property
def d_t_del(self):
"""Delta time between each sample"""
self.__d_t_del = (12.*float(self.timebase_scale))/float(self.i_ns)
return self.__d_t_del
@property
def d_time(self):
"""Numpy array with time values, in seconds"""
self.__d_time = np.linspace(0,(self.i_ns-1),self.i_ns)*self.d_t_del
return self.__d_time
@property
def d_fs(self):
"""Sampling frequeny in hertz"""
self.__d_fs = 1.0/(self.__d_time[1]-self.__d_time[0])
return self.__d_fs
@property
def np_d_ch1_filt(self):
""" Return the signal, filtered with Savitsky-Golay"""
self.__i_win_len = 31;
self.__i_poly_order = 1;
self.__np_d_ch1_filt = sig.savgol_filter(self.np_d_ch1,
self.__i_win_len,
self.__i_poly_order);
self.__str_filt_desc = ('Savitsky-Golay | Window Length: ' +
'%3.f' % self.__i_win_len +
' | Polynomial Order: ' + '%2.f' % self.__i_poly_order)
self.__str_filt_desc_short = 'SGolay'
return self.__np_d_ch1_filt
@property
def str_filt_desc(self):
"Complete Filt description of the Savitsky-Golay filter design"
return self.__str_filt_desc
@property
def str_filt_desc_short(self):
"""Short Filt description, useful for plot legend labels"""
return self.__str_filt_desc_short
@property
def np_d_ch1_filt1(self):
""" Return the signal, filtered with butter FIR filter"""
self.__i_poles = 1
if self.d_fs < 300:
            self.__d_wn = self.d_fs/8.
else:
self.__d_wn = 100
self.__sos = sig.butter(self.__i_poles, self.__d_wn, btype='low',
fs=self.d_fs, output = 'sos')
self.__np_d_ch1_filt1 = sig.sosfilt(self.__sos, self.np_d_ch1)
self.__str_filt1_desc = ('Butterworth | Poles: ' +
'%2.f' % self.__i_poles +
' | Lowpass corner (Hz): ' + '%0.2f' % self.__d_wn)
self.__str_filt1_desc_short = 'Butter'
return self.__np_d_ch1_filt1
@property
def str_filt1_desc(self):
"Complete Filt1 description of the Butterworth filter design"
return self.__str_filt1_desc
@property
def str_filt1_desc_short(self):
"""Short Filt1 description, useful for plot legend labels"""
return self.__str_filt1_desc_short
@property
def np_d_eventtimes(self):
"""Numpy array of trigger event times"""
return self.__np_d_eventtimes
@property
def d_thresh(self):
"""Trigger threshold value"""
return self.__d_thresh
@property
def np_d_rpm(self):
"""Estimated RPM values"""
return self.__np_d_rpm
@property
def d_events_per_rev(self):
"""Events per revolution"""
return self.__d_events_per_rev
@np_d_ch1.setter
def np_d_ch1(self, np_d_ch1):
self.__np_d_ch1 = np_d_ch1
@timebase_scale.setter
def timebase_scale(self, timebase_scale):
self.__timebase_scale = timebase_scale
# Method for calculating the spectrum for a real signal
def d_fft_real(self):
"""Calculate the half spectrum since this is a real-valued signal"""
        d_y = rfft(self.np_d_ch1)
d_ws = rfftfreq(self.i_ns, 1./self.d_fs)
return([d_ws, d_y])
# Plotting method, time domain signals.
def plt_sigs(self):
"""Plot out the data in this signal feature class in the time domain
Return values:
handle to the plot
"""
plt.figure()
plt.plot(self.d_time, self.np_d_ch1)
plt.plot(self.d_time, self.np_d_ch1_filt)
plt.plot(self.d_time, self.np_d_ch1_filt1)
plt.grid()
plt.xlabel("Time, seconds")
plt.ylabel("Channel output, volts")
        plt.legend(['as-acquired', self.str_filt_desc_short,
self.str_filt1_desc_short])
plt.show()
self.__plot_handle = plt.gcf()
return self.__plot_handle
# Plotting method for single-sided (real signal) spectrum
def plt_spec(self):
"""Plot data in frequency domain. This method assumes a real signal
Return values:
handle to the plot
"""
self.__spec = self.d_fft_real()
plt.figure()
plt.plot(self.__spec[0], np.abs(self.__spec[1]))
plt.grid()
plt.xlabel("Frequency, hertz")
plt.ylabel("Channel amplitude, volts")
plt.show()
self.__plot_handle = plt.gcf()
return [self.__plot_handle, self.__spec[0], self.__spec[1]]
# Plotting method for the eventtimes
def plt_eventtimes(self):
"""Plot event data in time.
Return values:
list: [handle to the plot, np array of eventtimes]
"""
# The eventtimes all should have threshold value for voltage
self.__np_d_eventvalue = np.ones_like(self.__np_d_eventtimes)*self.d_thresh
        # Put up the plot
plt.figure()
plt.plot(self.__d_time, self.np_d_ch1)
plt.plot(self.np_d_eventtimes, self.__np_d_eventvalue, "ok")
plt.xlabel('Time, seconds')
plt.ylabel('Amplitude, volts')
        plt.legend(['as-acquired', 'eventtimes'])
plt.title('Amplitude and eventtimes vs. time')
self.__plot_handle = plt.gcf()
return [self.__plot_handle, self.__np_d_eventtimes]
# Plotting method for the eventtimes
def plt_rpm(self):
"""Plot rpm data in time.
Return values:
list: [handle to the plot, np array of RPM values]
"""
        # Put up the plot
fig,ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(self.__d_time, self.np_d_ch1)
ax2.plot(self.np_d_eventtimes, self.__np_d_rpm, "ok")
ax1.set_xlabel('Time, seconds')
ax1.set_ylabel('Amplitude, volts')
ax2.set_ylabel('Event speed, RPM')
        plt.legend(['as-acquired', 'RPM'])
plt.title('Amplitude and eventtimes vs. time')
plt.show()
self.__plot_handle = plt.gcf()
return [self.__plot_handle, self.__np_d_rpm]
# Estimate triggers for speed, public method
def np_d_est_triggers(self, i_direction=0, d_thresh=0, d_hyst=0.1, i_kernel=5, b_verbose=False):
"""
This method estimates speed by identifying trigger points in time,
a given threshold and hysteresis. When the signal level crosses the threshold,
the trigger holds off. The trigger holds off until the signal crosses
hysteresis levels. Hysteresis is defined relative to the threshold voltage.
The trigger times can be used to estimate the rotating speed.
Keyword arguments:
i_direction -- 0 to search for threshold on rising signal, 1 to search
on a falling signal.
d_thresh -- Threshold value (default: 0.0 volts for zero crossings)
d_hyst -- Hysteresis value (default: 0.1 volts)
i_kernel -- Number of samples to consider in estimating slope,
must be an odd number (default: 5)
b_verbose -- Print the intermediate steps (default: False). Useful
for stepping through the method to troubleshoot or
understand it better.
Return values:
np_d_eventtimes -- numpy array with list of trigger event times
"""
# Store to local private member, it gets used in other places in the class
self.__d_thresh = d_thresh
# Initialize trigger state to hold off: the trigger will be active
# once the signal crosses the hysteresis
b_trigger_hold = True
# half kernel get used a lot
self.__i_half_kernel = int((i_kernel - 1)/2.)
# Use smoothing and derivative functions of S-G filter for estimating rise/fall
self.__np_d_ch1_dir = sig.savgol_filter(self.np_d_ch1,
i_kernel, 1, deriv=1);
# Initiate state machine: one state for rising signal, 'up', (i_direction = 0)
# and another for falling signal, 'down', (i_direction = 1)
        self.__d_hyst_ab = 0.
idx_event = 0
self.__np_d_eventtimes = np.zeros_like(self.np_d_ch1)
if i_direction == 0:
# Define the absolute hysteretic value, rising
self.__d_hyst_ab = self.__d_thresh - d_hyst
# Loop through the signal
for idx,x in enumerate(self.np_d_ch1):
# Intermediate results
if b_verbose:
print('idx: ' + '%2.f' % idx + ' | x: ' + '%0.5f' % x +
' | s-g: ' + '%0.4f' % self.__np_d_ch1_dir[idx])
# The trigger leaves 'hold-off' state if the slope is
# negative and we fall below the threshold
if (x <= self.__d_hyst_ab and self.__np_d_ch1_dir[idx] < 0 and
b_trigger_hold == True):
# Next time the signal rises above the threshold, trigger
# will be set to hold-off state
b_trigger_hold = False
# If we are on the rising portion of the signal and there is no hold off
# state on the trigger, trigger, and change state
if (x >= self.__d_thresh and self.__np_d_ch1_dir[idx] > 0 and
b_trigger_hold == False):
# Change state to hold off
b_trigger_hold = True
# Estimate time of crossing with interpolation
if idx>0:
# Interpolate to estimate the actual crossing from the 2 nearest points
xp = np.array([self.np_d_ch1[idx-1], self.np_d_ch1[idx]])
                        fp = np.array([self.d_time[idx-1], self.d_time[idx]])
self.__np_d_eventtimes[idx_event] = np.interp(d_thresh, xp, fp)
# More intermediate results
if b_verbose:
print('xp: ' + np.array2string(xp) + ' | fp: ' + np.array2string(fp) +
' | d_thresh: ' + '%0.4f' % d_thresh + ' | eventtimes: ' +
'%0.4f' % self.__np_d_eventtimes[idx_event])
# Increment the eventtimes index
idx_event += 1
else:
# Define the absolute hysteretic value, falling
self.__d_hyst_ab = self.__d_thresh + d_hyst
# Loop through the signal
for idx,x in enumerate(self.np_d_ch1):
# Intermediate results
if b_verbose:
print('idx: ' + '%2.f' % idx + ' | x: ' + '%0.5f' % x +
' | s-g: ' + '%0.4f' % self.__np_d_ch1_dir[idx])
# The trigger leaves 'hold-off' state if the slope is
# positive and we rise above the threshold
if (x >= self.__d_hyst_ab and self.__np_d_ch1_dir[idx] > 0 and
b_trigger_hold == True):
# Next time the signal rises above the threshold, trigger
# will be set to hold-off state
b_trigger_hold = False
# If we are on the falling portion of the signal and there is no hold off
# state on the trigger, trigger, and change state
if (x <= self.__d_thresh and self.__np_d_ch1_dir[idx] < 0 and
b_trigger_hold == False):
# Change state to hold off
b_trigger_hold = True
# Estimate time of crossing with interpolation
if idx>0:
# Interpolate to estimate the actual crossing from the 2 nearest points
xp = np.array([self.np_d_ch1[idx-1], self.np_d_ch1[idx]])
                        fp = np.array([self.d_time[idx-1], self.d_time[idx]])
self.__np_d_eventtimes[idx_event] = np.interp(d_thresh, xp, fp)
# More intermediate results
if b_verbose:
print('xp: ' + np.array2string(xp) + ' | fp: ' + np.array2string(fp) +
' | d_thresh: ' + '%0.4f' % d_thresh + ' | eventtimes: ' +
'%0.4f' % self.__np_d_eventtimes[idx_event])
# Increment the eventtimes index
idx_event += 1
# Remove zero-valued element
self.__np_d_eventtimes = np.delete(self.__np_d_eventtimes, np.where(self.__np_d_eventtimes == 0))
return self.__np_d_eventtimes
# Method to estimate the RPM values
def d_est_rpm(self, d_events_per_rev=1):
"""
        Estimate the RPM from the signal using eventtimes, which must have been calculated
with a previous call to the method np_d_est_triggers.
"""
# Store the new value in the object
self.__d_events_per_rev = d_events_per_rev
# Calculate the RPM using the difference in event times
self.__np_d_rpm = 60./(np.diff(self.np_d_eventtimes)*float(d_events_per_rev))
# To keep the lengths the same, append the last sample
self.__np_d_rpm = np.append(self.__np_d_rpm, self.__np_d_rpm[len(self.__np_d_rpm)-1])
return self.__np_d_rpm
# Save the data
def b_save_data(self, str_data_prefix = 'testclass', idx_data=1):
"""
Save the data in the object to a .csv file
Keyword arguments:
str_data_prefix -- String with file prefix (defaults to 'testclass')
idx_data -- File index (defaults to 1)
Return values:
True if write succeeds
"""
str_file = str_data_prefix + '_' '%03.0f' % idx_data + '.csv'
file_data = open(str_file,'w+')
file_data.write('X,CH1,Start,Increment,\n')
str_line = 'Sequence,Volt,Volt,0.000000e-03,' + str(self.d_t_del)
file_data.write(str_line+'\n')
for idx_line in range(0, self.i_ns):
str_line = str(idx_line) + ',' + '%0.5f' % self.np_d_ch1[idx_line] + ',' + ','
file_data.write(str_line+'\n')
file_data.close()
return True
###Output
_____no_output_____
###Markdown
Verify help and class structure
###Code
help(cl_sig_features)
###Output
Help on class cl_sig_features in module __main__:
class cl_sig_features(builtins.object)
| cl_sig_features(np_d_ch1, timebase_scale)
|
| Class to manage signal features on scope data
|
| Example usage:
| cl_test = cl_sig_features(np.array([1.,2., 3.]),1.1)
|
| Should produce:
|
| print('np_d_ch1: '+ np.array2string(cl_test.np_d_ch1))
| print('timebase_scale: ' + '%0.3f' % cl_test.timebase_scale)
| print('i_ns: ' + '%3.f' % cl_test.i_ns)
| print('d_t_del: ' + '%0.3f' % cl_test.d_t_del)
| print('d_time' + np.array2string(cl_test.d_time))
|
| np_d_ch1: [1. 2. 3.]
| timebase_scale: 1.000
| i_ns: 3
| d_t_del: 4.000
| d_time[0. 4. 8.]
|
| Methods defined here:
|
| __init__(self, np_d_ch1, timebase_scale)
| Initialize self. See help(type(self)) for accurate signature.
|
| b_save_data(self, str_data_prefix='testclass', idx_data=1)
| Save the data in the object to a .csv file
|
| Keyword arguments:
| str_data_prefix -- String with file prefix (defaults to 'testclass')
| idx_data -- File index (defaults to 1)
|
| Return values:
| True if write succeeds
|
| d_est_rpm(self, d_events_per_rev=1)
| Estimate the RPM from the signal using eventtimes which must have calculate
| with a previous call to the method np_d_est_triggers.
|
| d_fft_real(self)
| Calculate the half spectrum since this is a real-valued signal
|
| np_d_est_triggers(self, i_direction=0, d_thresh=0, d_hyst=0.1, i_kernel=5, b_verbose=False)
| This method estimates speed by identifying trigger points in time,
| a given threshold and hysteresis. When the signal level crosses the threshold,
| the trigger holds off. The trigger holds off until the signal crosses
| hysteresis levels. Hysteresis is defined relative to the threshold voltage.
|
| The trigger times can be used to estimate the rotating speed.
|
| Keyword arguments:
| i_direction -- 0 to search for threshold on rising signal, 1 to search
| on a falling signal.
| d_thresh -- Threshold value (default: 0.0 volts for zero crossings)
| d_hyst -- Hysteresis value (default: 0.1 volts)
| i_kernel -- Number of samples to consider in estimating slope,
| must be an odd number (default: 5)
| b_verbose -- Print the intermediate steps (default: False). Useful
| for stepping through the method to troubleshoot or
| understand it better.
|
| Return values:
| np_d_eventtimes -- numpy array with list of trigger event times
|
| plt_eventtimes(self)
| Plot event data in time.
|
| Return values:
| list: [handle to the plot, np array of eventtimes]
|
| plt_rpm(self)
| Plot rpm data in time.
|
| Return values:
| list: [handle to the plot, np array of RPM values]
|
| plt_sigs(self)
| Plot out the data in this signal feature class in the time domain
|
| Return values:
| handle to the plot
|
| plt_spec(self)
| Plot data in frequency domain. This method assumes a real signal
|
| Return values:
| handle to the plot
|
| ----------------------------------------------------------------------
| Readonly properties defined here:
|
| d_events_per_rev
| Events per revolution
|
| d_fs
| Sampling frequeny in hertz
|
| d_t_del
| Delta time between each sample
|
| d_thresh
| Trigger threshold value
|
| d_time
| Numpy array with time values, in seconds
|
| i_ns
| Number of samples in the scope data
|
| np_d_ch1_filt
| Return the signal, filtered with Savitsky-Golay
|
| np_d_ch1_filt1
| Return the signal, filtered with butter FIR filter
|
| np_d_eventtimes
| Numpy array of trigger event times
|
| np_d_rpm
| Estimated RPM values
|
| str_filt1_desc
| Complete Filt1 description of the Butterworth filter design
|
| str_filt1_desc_short
| Short Filt1 description, useful for plot legend labels
|
| str_filt_desc
| Complete Filt description of the Savitsky-Golay filter design
|
| str_filt_desc_short
| Short Filt description, useful for plot legend labels
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| np_d_ch1
| Numpy array containing the scope data
|
| timebase_scale
| Scope time scale
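###Markdown
A quick way to sanity-check the trigger logic is on a synthetic signal before touching the scope. The cell below is a minimal sketch with made-up numbers: a 0.5 V, 10 Hz sine sampled as if captured with a 0.05 s/div timebase, so `np_d_est_triggers` should find one rising-edge event per cycle and `d_est_rpm` (with one event per revolution) should report roughly 600 RPM.
###Code
# Synthetic 10 Hz sine: 12 divisions at 0.05 s/div, 1200 samples (illustrative values only)
d_timebase_test = 5e-2
i_ns_test = 1200
np_d_test = 0.5*np.sin(2.*np.pi*10.*np.linspace(0., 12.*d_timebase_test, i_ns_test, endpoint=False))
cl_sig_test = cl_sig_features(np_d_test, d_timebase_test)
_ = cl_sig_test.d_time  # time vector is computed lazily from timebase_scale and the sample count
np_d_ev_test = cl_sig_test.np_d_est_triggers(i_direction=0, d_thresh=0.2, d_hyst=0.2)
print('events found: ' + '%2.f' % len(np_d_ev_test))
print('mean RPM: ' + '%0.1f' % np.mean(cl_sig_test.d_est_rpm(d_events_per_rev=1)))
###Output
_____no_output_____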
###Markdown
Setup the test sequence Seed the sequence, these typically work well for 200-500 RPM and the mag pick-up gapped at 10 mils
###Code
str_data_prefix = 'test010'
idx_data = 0
d_timebase_scale = 1e-1
d_ch1_scale = 5.e-1
###Output
_____no_output_____
###Markdown
Acquisition loopThis loop has several steps: Acquire discovery signalThe code does not assume an RPM so it derives it from signal features. Scope setupSetup the vertical and horizontal scales on the scope. For the first pass, no trigger is used and time scale is set so that at least 2 revolutions of the lathe should be seen in the signal. This lets us see if the signal is valid and how we want to configure the trigger. Initial acquisitionOnce the setup is complete acquire the data and push the information into the signal feature class. The speed will be used to set the timescale so that we get about 5 events in the scope window Visualize the dataA few plots are presented of the scope data. Useful for troubleshooting Estimate the speedWith the event frequency known, the scope timebase_scale can be calculated
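In the loop this is simply `d_timebase_scale = (6./12.)*(np.mean(np.diff(np_d_eventtimes)))`, which puts the mean event spacing at two of the twelve screen divisions.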
###Code
while True:
# Setup the scope for the trial sample acquisition
d_ch1_scale = b_setup_scope(scope, d_ch1_scale=d_ch1_scale, timebase_scale=d_timebase_scale,
d_trigger_level = 1e-01, b_single = False)
# Acquire the test sample
np_d_ch1 = d_get_data(i_ch=1, timebase_scale=d_timebase_scale)
# Instatiate the class, send the waveform samples and scales
cl_sig_no_trigger = cl_sig_features(np_d_ch1, d_timebase_scale)
# Plot out the signal
hp = cl_sig_no_trigger.plt_sigs()
    # The shape of the response stays similar as speed increases, but the
    # triggering threshold has to increase to accommodate the higher
# amplitudes
d_thresh_est = 0.2 * (d_ch1_scale/0.5)
# Calculate the trigger event times
np_d_eventtimes = cl_sig_no_trigger.np_d_est_triggers(i_direction=0,
d_thresh=d_thresh_est, d_hyst=0.2, b_verbose=False)
np_d_eventtimes
# Visualize the eventtimes
hp = cl_sig_no_trigger.plt_eventtimes()
# Calculated the desired timebase_scale
print("d_timebase_scale (prior to adjustment): " '%0.3f' % d_timebase_scale)
d_timebase_scale = (6./12.)*(np.mean(np.diff(np_d_eventtimes)))
print("d_timebase_scale (after adjustment): " '%0.3f' % d_timebase_scale)
# Check for clipping and correct scaling. The scope has 8 vertical division so the
# total voltage range on the screen is 8 * d_ch1_scale
d_pkpk = np.max(np_d_ch1) - np.min(np_d_ch1)
print("d_pkpk: " + "%0.4f" % d_pkpk)
d_volts_scale = (8*d_ch1_scale)
print("d_volts_scale: " + "%0.4f" % d_volts_scale)
if ( d_pkpk > d_volts_scale ):
print("Voltage scale reduced")
d_ch1_scale = d_ch1_scale*2.
# Could be the vertical scale is too small, check for that
    if ( abs(d_volts_scale/d_pkpk) > 2. ):
print("Voltage scale increased")
d_ch1_scale = d_ch1_scale/2.
# The scope trigger setting scales with the overall amplitude since the
# shape of the response is similar
d_trigger_level_est = 0.2 * (d_ch1_scale/0.5)
# Reset the scope with the adjusted features, set to trigger on single sample
b_setup_scope(scope, d_ch1_scale=d_ch1_scale, timebase_scale=d_timebase_scale,
d_trigger_level = 1e-01, b_single = True)
# Acquire the sample
np_d_ch1 = d_get_data(i_ch=1, timebase_scale=d_timebase_scale)
# Reset back to free-run
b_setup_scope(scope, d_ch1_scale=d_ch1_scale, timebase_scale=d_timebase_scale,
d_trigger_level = 1e-01, b_single = False)
# Instatiate the class, send the waveform samples and scales
cl_sig_no_trigger = cl_sig_features(np_d_ch1, d_timebase_scale)
# Visualize the data
hp = cl_sig_no_trigger.plt_sigs()
# Save it off to a file
b_file_save = cl_sig_no_trigger.b_save_data(str_data_prefix = str_data_prefix, idx_data = idx_data)
# Wait for the next speed adjustment and continue
idx_data += 1
time.sleep(2)
input("Press Enter to continue...")
###Output
_____no_output_____ |
Course2/week2/week-2-multiple-regression-assignment-2-blank.ipynb | ###Markdown
Regression Week 2: Multiple Regression (gradient descent) In the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.In this notebook we will cover estimating multiple regression weights via gradient descent. You will:* Add a constant column of 1's to a graphlab SFrame to account for the intercept* Convert an SFrame into a Numpy array* Write a predict_output() function using Numpy* Write a numpy function to compute the derivative of the regression weights with respect to a single feature* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.* Use the gradient descent function to estimate regression weights for multiple features Fire up graphlab create Make sure you have the latest version of graphlab (>= 1.7)
###Code
import graphlab
###Output
_____no_output_____
###Markdown
Load in house sales dataDataset is from house sales in King County, the region where the city of Seattle, WA is located.
###Code
sales = graphlab.SFrame('kc_house_data.gl/')
###Output
[INFO] graphlab.cython.cy_server: GraphLab Create v2.1 started. Logging: /tmp/graphlab_server_1477228368.log
###Markdown
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features. Convert to Numpy Array Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for *all* the observations can be computed by right multiplying the "feature matrix" by the "weight vector". First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.
###Code
import numpy as np # note this allows us to refer to numpy as np instead
###Output
_____no_output_____
###Markdown
Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature e.g. ('price') and will return two things:* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')* A numpy array containing the values of the outputWith this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)**Please note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!**
###Code
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = graphlab.SFrame()
for feature in features:
features_sframe[feature] = data_sframe[feature]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
###Output
_____no_output_____
###Markdown
For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:
###Code
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
###Output
[ 1.00000000e+00 1.18000000e+03]
221900.0
###Markdown
Predicting output given regression weights Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0\*1.0 + 1.0\*1180.0 = 1181.0; this is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:
###Code
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
###Output
1181.0
###Markdown
np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features *matrix* and the weights *vector*. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:
###Code
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
###Output
_____no_output_____
###Markdown
If you want to test your code run the following cell:
###Code
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
###Output
1181.0
2571.0
###Markdown
Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.Since the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:(w[0]\*[CONSTANT] + w[1]\*[feature_1] + ... + w[i] \*[feature_i] + ... + w[k]\*[feature_k] - output)^2Where we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:2\*(w[0]\*[CONSTANT] + w[1]\*[feature_1] + ... + w[i] \*[feature_i] + ... + w[k]\*[feature_k] - output)\* [feature_i]The term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as:2\*error\*[feature_i]That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant then this is just twice the sum of the errors!Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors. With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).
###Code
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2 * np.dot(errors, feature)
return(derivative)
###Output
_____no_output_____
###Markdown
To test your feature derivative run the following:
###Code
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
###Output
-23345850022.0
-23345850022.0
###Markdown
Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease* and we're trying to *minimize* a cost function. The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector be smaller than a fixed 'tolerance'.With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criteria
###Code
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
_derivative = feature_derivative(errors, feature_matrix[:, i])
# add the squared value of the derivative to the gradient sum of squares (for assessing convergence)
gradient_sum_squares += _derivative * _derivative
# subtract the step size times the derivative from the current weight
weights[i] -= _derivative * step_size
# compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
###Output
###Markdown
A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features. For similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values. Running the Gradient Descent as Simple Regression First let's split the data into training and test data.
###Code
train_data,test_data = sales.random_split(.8,seed=0)
###Output
_____no_output_____
###Markdown
Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:
###Code
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
###Output
_____no_output_____
###Markdown
Next run your gradient descent with the above parameters.
###Code
myGrad1 = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
print myGrad1
###Output
[-46999.88716555 281.91211912]
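###Markdown
As a quick sanity check (not part of the original assignment), these gradient-descent weights can be compared against a closed-form least-squares fit on the same training matrix; small differences are expected because gradient descent stops as soon as the gradient magnitude drops below the tolerance.
###Code
import numpy as np

# Closed-form least-squares solution for the same feature matrix and output;
# it should land close to the gradient-descent weights printed above.
closed_form_weights = np.linalg.lstsq(simple_feature_matrix, output)[0]
print(closed_form_weights)
###Output
_____no_output_____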
###Markdown
How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)? **Quiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?** Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first):
###Code
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
###Output
_____no_output_____
###Markdown
Now compute your predictions using test_simple_feature_matrix and your weights from above.
###Code
test_predictions = predict_output(test_simple_feature_matrix, myGrad1)
###Output
_____no_output_____
###Markdown
**Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?**
###Code
print test_predictions[0]
###Output
356134.443171
###Markdown
Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
###Code
i=0
RSS = 0
while i < len(test_data):
error = test_predictions[i] - test_data["price"][i]
error = error*error
RSS += error
i += 1
print RSS
###Output
2.75400047593e+14
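###Markdown
The loop above works, but since `test_output` (created by `get_numpy_data` two cells earlier) is already a numpy array, the same RSS can be computed in a vectorized way. This is just an equivalent sketch, not part of the original assignment.
###Code
import numpy as np

# Vectorized RSS: square the residuals and sum them in one step
residuals = test_predictions - test_output
print(np.sum(residuals ** 2))
###Output
_____no_output_____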
###Markdown
Running a multiple regression Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:
###Code
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
###Output
_____no_output_____
###Markdown
Use the above parameters to estimate the model weights. Record these values for your quiz.
###Code
myGrad2 = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)
print myGrad2
###Output
[ -9.99999688e+04 2.45072603e+02 6.52795277e+01]
###Markdown
Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
###Code
(test_simple_feature_matrix2, test_output) = get_numpy_data(test_data, model_features, my_output)
test_predictions2 = predict_output(test_simple_feature_matrix2, myGrad2)
###Output
_____no_output_____
###Markdown
**Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?**
###Code
test_predictions2[0]
###Output
_____no_output_____
###Markdown
What is the actual price for the 1st house in the test data set?
###Code
test_data[0]["price"]
###Output
_____no_output_____
###Markdown
**Quiz Question: Which estimate was closer to the true price for the 1st house on the TEST data set, model 1 or model 2?** Model 2. Now use your predictions and the output to compute the RSS for model 2 on TEST data.
###Code
i=0
RSS = 0
while i < len(test_data):
error = test_predictions2[i] - test_data["price"][i]
error = error*error
RSS += error
i += 1
print RSS
###Output
2.70263446465e+14
|
Python/SpaCy/spaCy.ipynb | ###Markdown
spaCy experiments Imports & initialization Import the required modules.
###Code
import collections
import itertools
import matplotlib.pyplot as plt
import numpy as np
import spacy
###Output
_____no_output_____
###Markdown
Create a language model, English in this case.
###Code
en_nlp = spacy.load('en_core_web_sm')
###Output
_____no_output_____
###Markdown
Part of speech tagging (POS) Read a text file into a string variable.
###Code
with open('Data/frost.txt') as file:
text = ''.join(file.readlines())
###Output
_____no_output_____
###Markdown
Parse the text using the language model.
###Code
doc = en_nlp(text)
###Output
_____no_output_____
###Markdown
Show the part of speech tags, as well as the context of the words.
###Code
for word in doc:
print(f'{word.text!r}: {word.pos_}, '
f'{word.left_edge.text!r} <- {word.head.text!r} -> {word.right_edge.text!r}')
###Output
_____no_output_____
###Markdown
Since we can't use backslashes in f-strings, we define a constant to represent it.
###Code
newline = '\n'
###Output
_____no_output_____
###Markdown
To split a text in sentences, a statistical model is used that was obtained from the training corpus.
###Code
for i, sentence in enumerate(doc.sents):
print(f'{i:3d} {sentence.text.replace(newline, " ")}')
###Output
_____no_output_____
###Markdown
For poetry, sentences seem somewhat hard to detect. However, it is possible to define a language model for English and add a rule-based sentencizer to it.
###Code
en_nlp_alt = spacy.lang.en.English()
sentencizer = en_nlp_alt.create_pipe('sentencizer')
en_nlp_alt.add_pipe(sentencizer)
doc = en_nlp_alt(text)
for i, sentence in enumerate(doc.sents):
print(f'{i:3d} {sentence.text.replace(newline, " ").strip()}')
###Output
_____no_output_____
###Markdown
Lemmatization By way of example, consider Plato's *Republic*. This is a fairly long text.
###Code
!wc -l -w Data/republic.mb.txt
with open('Data/republic.mb.txt') as file:
text = ''.join(file.readlines())
###Output
_____no_output_____
###Markdown
The full result of the language model parsing this text would be rather large, but for our purposes, we require only tokenization, not POS or NER, hence we disable these features.
###Code
doc = en_nlp(text, disable=['parser', 'ner'])
###Output
_____no_output_____
###Markdown
We can now perform lemmatization on all words that are not stop words, and we also eliminate pronouns (`-PRON-` is the lemma value spaCy assigns to them) and punctuation. On the resulting list, a word count is performed.
###Code
stopwords = en_nlp.Defaults.stop_words | {'\n', '\n\n', '-PRON-'}
punctuation = ',.;?!:-'
counts = collections.Counter([token.lemma_.lower() for token in doc
if token.lemma_ not in stopwords and token.lemma_ not in punctuation])
###Output
_____no_output_____
###Markdown
The top-20 words are given below.
###Code
counts.most_common(20)
def plot_distr(counts, nr_words):
words = list()
numbers = list()
for word, number in counts.most_common(nr_words):
words.append(word)
numbers.append(number)
figure, axes = plt.subplots(1, 1, figsize=(15, 5))
axes.bar(words, numbers)
axes.set_xticklabels(words, rotation=45)
plot_distr(counts, 30)
###Output
_____no_output_____
###Markdown
Named entity recognition (NER) Named entity recognition is supported as well.
###Code
sentence = 'Music by Johann Sebastian Bach is better than that by Friederich Buxtehude. Both lived in Germany'
doc = en_nlp(sentence)
for i, word in enumerate(doc):
print(f'{i:3d} {word.text!r}: {word.pos_}, {word.ent_type_}')
###Output
_____no_output_____
###Markdown
It is also possible to retrieve named entities from the document explicitly.
###Code
for entity in doc.ents:
print(f'{entity} ({entity.label_}): {entity.start} -> {entity.end}')
###Output
_____no_output_____
###Markdown
Note that the first name of Buxtehude is in fact Dietrich, not Friederich. Nevertheless, the NER marks `Friederich Buxtehude` as a person. This can also be visualized as markup in the sentence.
###Code
spacy.displacy.render(doc, style='ent', jupyter=True)
spacy.displacy.render(doc, style='dep', jupyter=True,
options={'distance': 140, 'compact': True})
###Output
_____no_output_____
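###Markdown
When running outside a notebook (where `jupyter=True` inline rendering is not available), displaCy can instead return the rendered markup as an HTML string; the sketch below writes it to a file (the file name `entities.html` is just an example).
###Code
# Render to an HTML string rather than inline, then save it to disk
html = spacy.displacy.render(doc, style='ent', page=True)
with open('entities.html', 'w') as html_file:
    html_file.write(html)
###Output
_____no_output_____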
###Markdown
Similarity Document similarity can also be computed conveniently.
###Code
doc1 = en_nlp('The book is nice')
doc2 = en_nlp('The novel is beautiful')
doc1.similarity(doc2)
doc1 = en_nlp('The book is nice')
doc2 = en_nlp('The house is on fire')
doc1.similarity(doc2)
words = ['queen', 'lady', 'girl', 'king', 'lord', 'boy', 'cat', 'dog', 'lion']
similarity = np.empty((len(words), len(words)))
for i, word1 in enumerate(words):
for j, word2 in enumerate(words):
similarity[i, j] = en_nlp(word1).similarity(en_nlp(word2))
###Output
_____no_output_____
###Markdown
The similarity matrix can be visualized as a heat map using the following function:
###Code
def plot_similarity_matrix(sim, words, cmap=plt.cm.Blues):
figure, axes = plt.subplots(figsize=(6, 6))
axes.imshow(sim, interpolation='nearest', cmap=cmap)
axes.set_xticks(range(len(words)))
axes.set_xticklabels(words, rotation=45)
axes.set_yticks(range(len(words)))
axes.set_yticklabels(words)
fmt = '{0:.2f}'
thresh = 0.5*(sim.max() + sim.min())
for i, j in itertools.product(range(sim.shape[0]), range(sim.shape[1])):
axes.text(j, i, fmt.format(sim[i, j]),
horizontalalignment="center",
color="white" if sim[i, j] > thresh else "black",
fontsize=8)
figure.tight_layout()
axes.set_xlabel('word 1')
axes.set_ylabel('word 2')
plot_similarity_matrix(similarity, words)
###Output
_____no_output_____
###Markdown
However, small language models don't contain real word vectors, only context sensitive tensors. We can repeat the computation above with a medium sized language model.
###Code
en_nlp_md = spacy.load('en_core_web_md')
similarity = np.empty((len(words), len(words)))
for i, word1 in enumerate(words):
for j, word2 in enumerate(words):
similarity[i, j] = en_nlp_md(word1).similarity(en_nlp_md(word2))
plot_similarity_matrix(similarity, words)
doc1 = en_nlp_md('The book is nice')
doc2 = en_nlp_md('The novel is beautiful')
doc1.similarity(doc2)
doc1 = en_nlp_md('The book is nice')
doc2 = en_nlp_md('The house is on fire')
doc1.similarity(doc2)
doc1 = en_nlp_md('Stock prices for Intel are on the rise.')
doc2 = en_nlp_md('The value of NVIDIA shares is increasing.')
doc1.similarity(doc2)
doc1 = en_nlp_md('Stock prices for Intel are on the rise.')
doc2 = en_nlp_md('The economy of Denmark is flourishing.')
doc1.similarity(doc2)
doc1 = en_nlp_md('Stock prices for Intel are on the rise.')
doc2 = en_nlp_md('The value of NVIDIA shares is plumetting.')
doc1.similarity(doc2)
###Output
_____no_output_____ |
Matplotlib_multivariate/Encodings_Practice.ipynb | ###Markdown
In this notebook, you'll be working with the Pokémon dataset from the univariate plots lesson.
###Code
pokemon = pd.read_csv('./data/pokemon.csv')
pokemon.head()
###Output
_____no_output_____
###Markdown
**Task 1**: To start, let's look at the relationship between the Pokémon combat statistics of Speed, Defense, and Special-Defense. If a Pokémon has higher defensive statistics, does it necessarily sacrifice speed? Create a single plot to depict this relationship.
###Code
# YOUR CODE HERE
# run this cell to check your work against ours
encodings_solution_1()
###Output
When creating the plot, I made the figure size bigger and set axis limits to zoom into the majority of data points. I might want to apply some manual jitter to the data since I suspect there to be a lot of overlapping points. From the plot as given, I see a slight increase in speed as both defense and special defense increase. However, the brightest points seem to be clumped up in the center in the 60-80 defense and special defense ranges with the two brightest points on the lower left of the diagonal.
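###Markdown
One possible way to produce the plot described above is sketched below; it assumes matplotlib is available and that the combat statistics are stored in columns named 'speed', 'defense', and 'special-defense' (check `pokemon.columns` if the names differ in your copy of the dataset).
###Code
import matplotlib.pyplot as plt

# Scatter plot of defense vs. special defense, with speed encoded as color
plt.figure(figsize=[8, 6])
plt.scatter(pokemon['defense'], pokemon['special-defense'],
            c=pokemon['speed'], cmap='viridis_r', alpha=0.5)
plt.colorbar(label='speed')
plt.xlim(0, 160)   # zoom in on the bulk of the points
plt.ylim(0, 160)
plt.xlabel('defense')
plt.ylabel('special-defense')
plt.show()
###Output
_____no_output_____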
###Markdown
To complete the second task, we need to first reshape the dataset so that all Pokémon types are recorded in a single column. This will add duplicates of Pokémon with two types, which is fine for the task to be performed.
###Code
type_cols = ['type_1','type_2']
non_type_cols = pokemon.columns.difference(type_cols)
pkmn_types = pokemon.melt(id_vars = non_type_cols, value_vars = type_cols,
var_name = 'type_level', value_name = 'type').dropna()
pkmn_types.head()
###Output
_____no_output_____
###Markdown
**Task 2**: How do weights and heights compare between Fairy type Pokémon and Dragon type Pokémon? You may want to subset your dataframe before proceeding with the plotting code. **Hint**: If you remember from the univariate plots lesson, one of your axis variables may need to be transformed. If you plan on using FacetGrid, its `.set()` method will be vital for adjusting the axis scaling and tick marks. Check the [last example in the Seaborn documentation](https://seaborn.pydata.org/generated/seaborn.FacetGrid.html) for an example of how the `.set()` method is used, and the [matplotlib documentation of the Axes class](https://matplotlib.org/api/axes_api.html) for properties that you might want to set.
###Code
# YOUR CODE HERE
# run this cell to check your work against ours
encodings_solution_2()
###Output
After subsetting the data, I used FacetGrid to set up and generate the plot. I used the .set() method for FacetGrid objects to set the x-scaling and tick marks. The plot shows the drastic difference in sizes and weights for the Fairy and Dragon Pokemon types.
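###Markdown
A sketch of one possible solution is given below; it assumes seaborn and matplotlib are available, that size is stored in columns named 'weight' and 'height', and that the type labels are lowercase (adjust the column names, tick positions, and the FacetGrid `height`/`size` argument to your seaborn version if needed).
###Code
import matplotlib.pyplot as plt
import seaborn as sb

# Keep only the two types of interest from the reshaped dataframe
pokemon_sub = pkmn_types.loc[pkmn_types['type'].isin(['fairy', 'dragon'])]

# Scatter of weight vs. height, colored by type, with a log-scaled weight axis
g = sb.FacetGrid(data=pokemon_sub, hue='type', height=5)
g.map(plt.scatter, 'weight', 'height', alpha=0.6)
g.set(xscale='log')
g.set(xticks=[0.1, 1, 10, 100, 1000],
      xticklabels=['0.1', '1', '10', '100', '1000'])
g.add_legend()
plt.show()
###Output
_____no_output_____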
|
Frequentist_vs_Bayesian_Regression.ipynb | ###Markdown
1. Simple Linear Regression Load the data
###Code
datafolder = "/content/drive/My Drive/NUS/BT4012/"
file_name = "student-mat.csv"
df_data = pd.read_csv(datafolder + file_name, sep=';', index_col=None)
df_data.rename(columns={'G3': 'Grade'}, inplace=True)
df_data = df_data[~df_data['Grade'].isin([0, 1])]
df_used = df_data[['studytime', 'Medu', 'Grade']]
df_used.head(2)
df_X = df_used[['studytime', 'Medu']] #store features
df_y = df_used[['Grade']]
# Split into training/testing sets with 25% split
X_train, X_test, y_train, y_test = train_test_split(df_X, df_y,
test_size = 0.25,
random_state=123)
###Output
_____no_output_____
###Markdown
*Train linear regression model on X_train and y_train** Adopt the default hyperparameter setting
###Code
## write your code
lr = LinearRegression()
lr.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
*Check MAE and RMSE on testing data*
###Code
## Write your code
predictions = lr.predict(X_test)
mae = np.mean(abs(predictions - y_test))
rmse = np.sqrt(np.mean((predictions - y_test)**2))
print('MAE: %0.2f' % mae)
print('RMSE: %0.2f' % rmse)
###Output
MAE: 2.84
RMSE: 3.45
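###Markdown
As a cross-check (not in the original notebook), the same two error measures can be obtained from scikit-learn's metrics module:
###Code
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Should reproduce the MAE/RMSE values printed above
print('MAE: %0.2f' % mean_absolute_error(y_test, predictions))
print('RMSE: %0.2f' % np.sqrt(mean_squared_error(y_test, predictions)))
###Output
_____no_output_____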
###Markdown
*Check the predicted grade of one student* The 10th student in the testing data
###Code
## write your code
predictions[9]
###Output
_____no_output_____
###Markdown
*Print the learned model parameters of linear regression*
###Code
## Write your code
intercept = lr.intercept_[0]
coef = lr.coef_
formula = 'Grade = %0.2f +' % intercept
for i, col in enumerate(X_train.columns):
formula += ' %0.2f * %s +' % (coef[0][i], col)
print(formula[:-2])
print()
print("For an unit increase in study time, expected grade increases by 0.68, independent of all other variables.\
Likewise, for an unit increase in Medu, expected grade increases by 0.47, independent of all other variables.")
###Output
Grade = 8.75 + 0.68 * studytime + 0.47 * Medu
For a unit increase in study time, expected grade increases by 0.68, independent of all other variables. Likewise, for a unit increase in Medu, expected grade increases by 0.47, independent of all other variables.
###Markdown
2. Bayesian Linear Regression Here, two Bayesian models will be implemented with **two different sets of prior functions**. The first Bayesian model is given as:$u_i = \beta_0 + \beta_1*{studytime}_i + \beta_2*{medu}_i$$grade_i \sim Norm(u_i, \sigma^2_\epsilon)$$\beta_0 \sim Norm(0, 1)$$\beta_1 \sim Norm(0, 1)$$\beta_2 \sim Norm(0, 100)$$\sigma_\epsilon \sim {Uniform}(0, 10)$Here, $\beta_0$ is the intercept. Then, $\beta_1$ and $\beta_2$ are the coefficients for the features studytime and medu. For the i-th data sample, a mean $u_i$ can be computed linearly from the two features. Then, the target grade $grade_i$ is assumed to be normally distributed around this $u_i$. Make sure the version of pymc3 is 3.8.
###Code
! pip install pymc3==3.8
import pymc3 as pm
## Define your model here
def model_build(df_train, df_label=None):
"""
    build generalized linear model
"""
with pm.Model() as model:
## write your code here
num_fea = df_train.shape[1]
#error term
sigma = pm.Uniform('sigma', 0, 10)
#intercept
mu_infe = pm.Normal('intercept', mu=0, sigma=1)
#beta1
mu_infe = mu_infe + pm.Normal('beta_1_coeff_for_{}'.format(df_train.columns[0]), mu=0, sigma=1)*df_train.loc[:, df_train.columns[0]]
#beta2
mu_infe = mu_infe + pm.Normal('beta_2_coeff_for_{}'.format(df_train.columns[1]), mu=0, sigma=10)*df_train.loc[:, df_train.columns[1]]
if df_label is None:
# inference
likelihood = pm.Normal('y', mu=mu_infe, sigma=sigma, observed = False)
else:
# training
likelihood = pm.Normal('y', mu=mu_infe, sigma=sigma, observed = df_label['Grade'].values)
return model
# Use MCMC algorithm to draw samples to approximate the posterior for model parameters (error term, bias term and all coefficients)
with model_build(X_train, y_train):
trace = pm.sample(draws=2000, chains = 2, tune = 500)
# sample the posterior predictive distribution for the 10th student in testing data
# 4000 samples (2 chains and each chain has 2000 samples) will be sampled for this student.
with model_build(X_test.iloc[9:10,:]):
ppc = pm.sample_posterior_predictive(trace)
###Output
100%|██████████| 4000/4000 [00:07<00:00, 521.53it/s]
###Markdown
*Compute the mean and standard deviation of your prediction.*
###Code
## write your code
print("The mean of prediction is %0.3f and the standard deviation is %0.3f " %(np.mean(ppc['y']),np.std(ppc['y'])))
###Output
The mean of prediction is 12.353 and the standard deviation is 3.163
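###Markdown
Beyond the mean and standard deviation, the full posterior predictive distribution for this student can be visualized directly from the sampled values (a small sketch; assumes matplotlib is available in the environment):
###Code
import matplotlib.pyplot as plt

# Histogram of the 4000 posterior predictive draws for the 10th test student
plt.figure(figsize=(6, 3))
plt.hist(ppc['y'].ravel(), bins=40, density=True)
plt.xlabel('predicted grade')
plt.ylabel('density')
plt.show()
###Output
_____no_output_____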
###Markdown
*Check the posterior distribution for the model parameters*$p(w|D)$
###Code
## write your code here
print(pm.summary(trace).round(5))
pm.plot_posterior(trace, figsize = (12, 3))
###Output
/usr/local/lib/python3.6/dist-packages/arviz/data/io_pymc3.py:89: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context.
FutureWarning,
###Markdown
The other Bayesian model is given as:$u_i = \beta_0 + \beta_1*{studytime}_i + \beta_2*{medu}_i$$grade_i \sim Norm(u_i, \sigma^2_\epsilon)$$\beta_0 \sim Norm(0, 100)$$\beta_1 \sim Norm(0, 100)$$\beta_2 \sim Norm(0, 100)$$\sigma_\epsilon \sim {Uniform}(0, 10)$
###Code
## Define your model here
def model_build(df_train, df_label=None):
"""
    build generalized linear model
"""
with pm.Model() as model:
## write your code here
num_fea = df_train.shape[1]
#error term
sigma = pm.Uniform('sigma', 0, 10)
#intercept
mu_infe = pm.Normal('intercept', mu=0, sigma=10)
#beta1
mu_infe = mu_infe + pm.Normal('beta_1_coeff_for_{}'.format(df_train.columns[0]), mu=0, sigma=10)*df_train.loc[:, df_train.columns[0]]
#beta2
mu_infe = mu_infe + pm.Normal('beta_2_coeff_for_{}'.format(df_train.columns[1]), mu=0, sigma=10)*df_train.loc[:, df_train.columns[1]]
if df_label is None:
# inference
likelihood = pm.Normal('y', mu=mu_infe, sigma=sigma, observed = False)
else:
# training
likelihood = pm.Normal('y', mu=mu_infe, sigma=sigma, observed = df_label['Grade'].values)
return model
# Use MCMC algorithm to draw samples to approximate the posterior for model parameters (error term, bias term and all coefficients)
with model_build(X_train, y_train):
trace = pm.sample(draws=2000, chains = 2, tune = 500)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [beta_2_coeff_for_Medu, beta_1_coeff_for_studytime, intercept, sigma]
Sampling chain 0, 0 divergences: 100%|██████████| 2500/2500 [00:06<00:00, 367.14it/s]
Sampling chain 1, 0 divergences: 100%|██████████| 2500/2500 [00:06<00:00, 394.84it/s]
The acceptance probability does not match the target. It is 0.881162011783612, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
*Similar to the first Bayesian linear regression model, check the distribution for the model parameters and the prediction distribution of the previously chosen data sample*
###Code
## write your code
# prediction distribution
with model_build(X_test.iloc[9:10,:]): # 9th student
ppc = pm.sample_posterior_predictive(trace)
print("The mean of prediction is %0.3f and the standard deviation is %0.3f " %(np.mean(ppc['y']),np.std(ppc['y'])))
# distribution of model parameters
print(pm.summary(trace).round(5))
pm.plot_posterior(trace, figsize = (12, 3))
###Output
100%|██████████| 4000/4000 [00:05<00:00, 688.23it/s]
/usr/local/lib/python3.6/dist-packages/arviz/data/io_pymc3.py:89: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context.
FutureWarning,
|
notebooks/10-data-prep/01.20-ol-entry.ipynb | ###Markdown
Code for cleaning and preparing the entry survey data
###Code
from __future__ import absolute_import, division, print_function
import datetime
import time
import os
import pandas as pd
import sys
sys.path.insert(0, '../../src/data/')
from config import *
###Output
_____no_output_____
###Markdown
Config
###Code
# Matplotlib for additional customization
from matplotlib import pyplot as plt
%matplotlib inline
# Seaborn for plotting and styling
import seaborn as sns
###Output
2019-01-18 13:20:44,777 - DEBUG - backend module://ipykernel.pylab.backend_inline version unknown
###Markdown
read
###Code
participants_entry_survey_data_anon = pd.read_hdf(surveys_anon_store_path, 'entry/participants_entry_survey_data_anon')
participants_entry_survey_data_anon.head()
###Output
_____no_output_____
###Markdown
Rename columns
###Code
columns_dict = {
'member':'member',
'Q3.3':'race',
'Q3.3_7_TEXT':'race_other',
'Q3.2':'age',
'Q3.1':'gender',
'Q3.4':'citizneships',
'Q5.3':'is_cofounder',
'Q5.5_1_TEXT':'title',
'Q5.6':'time_in_startup',
'Q6.1':'HHH_type',
'Q6.2':'experience',
'Q7.2_1':'TIPI_1',
'Q7.2_2':'TIPI_2',
'Q7.2_3':'TIPI_3',
'Q7.2_4':'TIPI_4',
'Q7.2_5':'TIPI_5',
'Q7.2_6':'TIPI_6',
'Q7.2_7':'TIPI_7',
'Q7.2_8':'TIPI_8',
'Q7.2_9':'TIPI_9',
'Q7.2_10':'TIPI_10',
}
participants_entry_survey_data_clean = participants_entry_survey_data_anon.rename(columns=columns_dict)
participants_entry_survey_data_clean.head(5)
###Output
_____no_output_____
###Markdown
Translate values Big 5 TIPI Translate text to numbers
###Code
participants_entry_survey_data_clean.TIPI_1.unique()
def TIPI_Translation(answer):
"""
A function to translate the string answers from the TIPI survey to numbers
"""
if answer == 'Agree strongly':
return(7.0)
if answer == 'Agree moderately':
return(6.0)
if answer == 'Agree a little':
return(5.0)
if answer == 'Neither agree nor disagree':
return(4.0)
if answer == 'Disagree a little':
return(3.0)
if answer == 'Disagree moderately':
return(2.0)
if answer == 'Disagree strongly ': #extra space....
return(1.0)
if answer == 'Disagree strongly':
return(1.0)
return answer
#Apply translation functions
for i in range(1,11):
participants_entry_survey_data_clean['TIPI_{}'.format(i)] = participants_entry_survey_data_clean['TIPI_{}'.format(i)].apply(TIPI_Translation)
###Output
_____no_output_____
###Markdown
Translate answers to personalityTIPI scale scoring (“R” denotes reverse-scored items):* Extraversion: 1, 6R* Agreeableness: 2R, 7* Conscientiousness: 3, 8R* Emotional Stability: 4R, 9* Openness to Experiences: 5, 10RScoring the TIPI:1. Recode the reverse-scored items (i.e., recode a 7 with a 1, a 6 with a 2, a 5 with a 3, etc.). The reverse-scored items are 2, 4, 6, 8, & 10.2. Take the AVERAGE of the two items (the standard item and the recoded reverse-scored item) that make up each scale.Example using the Extraversion scale: A participant has scores of 5 on item 1 (Extraverted, enthusiastic) and 2 on item 6 (Reserved, quiet). First, recode the reverse-scored item (i.e., item 6), replacing the 2 with a 6. Second, take the average of the score for item 1 and the (recoded) score for item 6. So the TIPI Extraversion scale score would be: (5 + 6)/2 = 5.5
###Code
for i in range(2,11,2):
participants_entry_survey_data_clean['TIPI_{}R'.format(i)] = 8 - participants_entry_survey_data_clean['TIPI_{}'.format(i)]
df = participants_entry_survey_data_clean
participants_entry_survey_data_clean['TIPI_extraversion'] = (df['TIPI_1']+df['TIPI_6R'])/2
participants_entry_survey_data_clean['TIPI_agreeableness'] = (df['TIPI_2R']+df['TIPI_7'])/2
participants_entry_survey_data_clean['TIPI_conscientiousness'] = (df['TIPI_3']+df['TIPI_8R'])/2
participants_entry_survey_data_clean['TIPI_emotional_stability'] = (df['TIPI_4R']+df['TIPI_9'])/2
participants_entry_survey_data_clean['TIPI_openness'] = (df['TIPI_5']+df['TIPI_10R'])/2
for i in range(1,11):
del participants_entry_survey_data_clean['TIPI_{}'.format(i)]
for i in range(2,11,2):
del participants_entry_survey_data_clean['TIPI_{}R'.format(i)]
participants_entry_survey_data_clean.head()
###Output
_____no_output_____
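###Markdown
As a quick consistency check on the reverse-scoring and averaging described above, every resulting scale score should lie between 1 and 7 (a sketch, not part of the original notebook):
###Code
tipi_scales = ['TIPI_extraversion', 'TIPI_agreeableness', 'TIPI_conscientiousness',
               'TIPI_emotional_stability', 'TIPI_openness']
for scale in tipi_scales:
    scores = participants_entry_survey_data_clean[scale].dropna()
    assert scores.between(1, 7).all(), 'unexpected value in {}'.format(scale)
###Output
_____no_output_____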
###Markdown
is CEO?
###Code
participants_entry_survey_data_clean.title.value_counts()
#CEO
#Ceo
#participants_entry_survey_data_clean['is_ceo'] = \
# participants_entry_survey_data_clean.title.str.contains('ceo',case=False)\
# .fillna(False)
participants_entry_survey_data_clean['is_ceo'] = 0
cond = (participants_entry_survey_data_clean.title.fillna("").str.contains("ceo",case=False))
participants_entry_survey_data_clean.loc[cond,'is_ceo'] = 1
participants_entry_survey_data_clean.query('is_ceo == 1')[['title','is_ceo']]
participants_entry_survey_data_clean[['member','title','is_ceo']].query('is_ceo')
participants_entry_survey_data_clean.head()
###Output
_____no_output_____
###Markdown
Is co-founder
###Code
# Set for 'Yes and maybe'
participants_entry_survey_data_clean['is_cofounder_temp'] = 0
cond = (participants_entry_survey_data_clean.is_cofounder != 'No')
participants_entry_survey_data_clean.loc[cond,'is_cofounder_temp'] = 1
print(len(participants_entry_survey_data_clean.query('is_cofounder_temp == 1')))
# Manually fix
cond = (participants_entry_survey_data_clean.member == 'XLIPIHEOIT')
participants_entry_survey_data_clean.loc[cond,'is_cofounder_temp'] = 0
print(len(participants_entry_survey_data_clean.query('is_cofounder_temp == 1')))
participants_entry_survey_data_clean['is_cofounder'] = participants_entry_survey_data_clean['is_cofounder_temp']
del participants_entry_survey_data_clean['is_cofounder_temp']
###Output
_____no_output_____
###Markdown
HHH
###Code
participants_entry_survey_data_clean.HHH_type.value_counts()
def HHH_Translation(answer):
"""
"""
if answer == 'Hacker (you can solve any technical problem and make anything work)':
return('Hacker')
if answer == 'Hustler (you are the one who closes deals and brings back the money)':
return('Hustler')
if answer == 'Hipster (you are a creative design genius who makes the user experience awesome)':
        return('Hipster')
return answer
participants_entry_survey_data_clean['HHH_type'] = participants_entry_survey_data_clean['HHH_type'].apply(HHH_Translation)
participants_entry_survey_data_clean.HHH_type.value_counts()
participants_entry_survey_data_clean.head()
###Output
_____no_output_____
###Markdown
Gender
###Code
def gender_Translation(answer):
"""
"""
if answer == 'Male':
return('M')
if answer == 'Female':
return('F')
else:
return('U')
return answer
participants_entry_survey_data_clean['gender'] = participants_entry_survey_data_clean['gender'].apply(gender_Translation)
#participants_entry_survey_data_clean.query('gender == "U"')
#participants_entry_survey_data_clean.loc['UAXR5EMOI2','gender']='M'
participants_entry_survey_data_clean.loc[participants_entry_survey_data_clean.member =='UAXR5EMOI2','gender']='M'
participants_entry_survey_data_clean.gender.value_counts()
###Output
_____no_output_____
###Markdown
tests
###Code
participants_entry_survey_data_clean.dtypes
participants_entry_survey_data_clean.HHH_type.value_counts()
participants_entry_survey_data_clean.TIPI_extraversion.hist()
participants_entry_survey_data_clean.TIPI_agreeableness.hist()
participants_entry_survey_data_clean.TIPI_openness.hist()
###Output
_____no_output_____
###Markdown
Store
###Code
participants_entry_survey_data_clean.set_index('member', inplace=True)
with pd.HDFStore(surveys_clean_store_path) as store:
store.put('entry/participants_entry_survey_data_clean', participants_entry_survey_data_clean, format='table')
###Output
_____no_output_____
###Markdown
Sanity check
###Code
members = pd.read_hdf(analysis_store_path, 'metadata/members')
members.head()
a = participants_entry_survey_data_clean.join(members)
a.query('is_ceo == 1')[['company']].sort_values('company')
###Output
_____no_output_____ |
python/d2l-en/pytorch/chapter_recurrent-modern/gru.ipynb | ###Markdown
Gated Recurrent Units (GRU):label:`sec_gru`In :numref:`sec_bptt`,we discussed how gradients are calculatedin RNNs.In particular we found that long products of matrices can leadto vanishing or exploding gradients.Let us briefly think about what suchgradient anomalies mean in practice:* We might encounter a situation where an early observation is highly significant for predicting all future observations. Consider the somewhat contrived case where the first observation contains a checksum and the goal is to discern whether the checksum is correct at the end of the sequence. In this case, the influence of the first token is vital. We would like to have some mechanisms for storing vital early information in a *memory cell*. Without such a mechanism, we will have to assign a very large gradient to this observation, since it affects all the subsequent observations.* We might encounter situations where some tokens carry no pertinent observation. For instance, when parsing a web page there might be auxiliary HTML code that is irrelevant for the purpose of assessing the sentiment conveyed on the page. We would like to have some mechanism for *skipping* such tokens in the latent state representation.* We might encounter situations where there is a logical break between parts of a sequence. For instance, there might be a transition between chapters in a book, or a transition between a bear and a bull market for securities. In this case it would be nice to have a means of *resetting* our internal state representation.A number of methods have been proposed to address this. One of the earliest is long short-term memory :cite:`Hochreiter.Schmidhuber.1997` which wewill discuss in :numref:`sec_lstm`. The gated recurrent unit (GRU):cite:`Cho.Van-Merrienboer.Bahdanau.ea.2014` is a slightly more streamlinedvariant that often offers comparable performance and is significantly faster tocompute :cite:`Chung.Gulcehre.Cho.ea.2014`.Due to its simplicity, let us start with the GRU. Gated Hidden StateThe key distinction between vanilla RNNs and GRUsis that the latter support gating of the hidden state.This means that we have dedicated mechanisms forwhen a hidden state should be *updated* andalso when it should be *reset*.These mechanisms are learned and they address the concerns listed above.For instance, if the first token is of great importancewe will learn not to update the hidden state after the first observation.Likewise, we will learn to skip irrelevant temporary observations.Last, we will learn to reset the latent state whenever needed.We discuss this in detail below. 
Reset Gate and Update GateThe first thing we need to introduce arethe *reset gate* and the *update gate*.We engineer them to be vectors with entries in $(0, 1)$such that we can perform convex combinations.For instance,a reset gate would allow us to control how much of the previous state we might still want to remember.Likewise, an update gate would allow us to control how much of the new state is just a copy of the old state.We begin by engineering these gates.:numref:`fig_gru_1` illustrates the inputs for boththe reset and update gates in a GRU, given the inputof the current time stepand the hidden state of the previous time step.The outputs of two gatesare given by two fully-connected layerswith a sigmoid activation function.:label:`fig_gru_1`Mathematically,for a given time step $t$,suppose that the input isa minibatch$\mathbf{X}_t \in \mathbb{R}^{n \times d}$ (number of examples: $n$, number of inputs: $d$) and the hidden state of the previous time step is $\mathbf{H}_{t-1} \in \mathbb{R}^{n \times h}$ (number of hidden units: $h$). Then, the reset gate $\mathbf{R}_t \in \mathbb{R}^{n \times h}$ and update gate $\mathbf{Z}_t \in \mathbb{R}^{n \times h}$ are computed as follows:$$\begin{aligned}\mathbf{R}_t = \sigma(\mathbf{X}_t \mathbf{W}_{xr} + \mathbf{H}_{t-1} \mathbf{W}_{hr} + \mathbf{b}_r),\\\mathbf{Z}_t = \sigma(\mathbf{X}_t \mathbf{W}_{xz} + \mathbf{H}_{t-1} \mathbf{W}_{hz} + \mathbf{b}_z),\end{aligned}$$where $\mathbf{W}_{xr}, \mathbf{W}_{xz} \in \mathbb{R}^{d \times h}$ and$\mathbf{W}_{hr}, \mathbf{W}_{hz} \in \mathbb{R}^{h \times h}$ are weightparameters and $\mathbf{b}_r, \mathbf{b}_z \in \mathbb{R}^{1 \times h}$ arebiases.Note that broadcasting (see :numref:`subsec_broadcasting`) is triggered during the summation.We use sigmoid functions (as introduced in :numref:`sec_mlp`) to transform input values to the interval $(0, 1)$. Candidate Hidden StateNext, let usintegrate the reset gate $\mathbf{R}_t$ withthe regular latent state updating mechanismin :eqref:`rnn_h_with_state`.It leads to the following*candidate hidden state*$\tilde{\mathbf{H}}_t \in \mathbb{R}^{n \times h}$ at time step $t$:$$\tilde{\mathbf{H}}_t = \tanh(\mathbf{X}_t \mathbf{W}_{xh} + \left(\mathbf{R}_t \odot \mathbf{H}_{t-1}\right) \mathbf{W}_{hh} + \mathbf{b}_h),$$:eqlabel:`gru_tilde_H`where $\mathbf{W}_{xh} \in \mathbb{R}^{d \times h}$ and $\mathbf{W}_{hh} \in \mathbb{R}^{h \times h}$are weight parameters,$\mathbf{b}_h \in \mathbb{R}^{1 \times h}$is the bias,and the symbol $\odot$ is the Hadamard (elementwise) product operator.Here we use a nonlinearity in the form of tanh to ensure that the values in the candidate hidden state remain in the interval $(-1, 1)$.The result is a *candidate* since we still need to incorporate the action of the update gate.Comparing with :eqref:`rnn_h_with_state`,now the influence of the previous statescan be reduced with theelementwise multiplication of$\mathbf{R}_t$ and $\mathbf{H}_{t-1}$in :eqref:`gru_tilde_H`.Whenever the entries in the reset gate $\mathbf{R}_t$ are close to 1, we recover a vanilla RNN such as in :eqref:`rnn_h_with_state`.For all entries of the reset gate $\mathbf{R}_t$ that are close to 0, the candidate hidden state is the result of an MLP with $\mathbf{X}_t$ as the input. Any pre-existing hidden state is thus *reset* to defaults.:numref:`fig_gru_2` illustrates the computational flow after applying the reset gate.:label:`fig_gru_2` Hidden StateFinally, we need to incorporate the effect of the update gate $\mathbf{Z}_t$. 
This determines the extent to which the new hidden state $\mathbf{H}_t \in \mathbb{R}^{n \times h}$ is just the old state $\mathbf{H}_{t-1}$ and by how much the new candidate state $\tilde{\mathbf{H}}_t$ is used.The update gate $\mathbf{Z}_t$ can be used for this purpose, simply by taking elementwise convex combinations between both $\mathbf{H}_{t-1}$ and $\tilde{\mathbf{H}}_t$.This leads to the final update equation for the GRU:$$\mathbf{H}_t = \mathbf{Z}_t \odot \mathbf{H}_{t-1} + (1 - \mathbf{Z}_t) \odot \tilde{\mathbf{H}}_t.$$Whenever the update gate $\mathbf{Z}_t$ is close to 1, we simply retain the old state. In this case the information from $\mathbf{X}_t$ is essentially ignored, effectively skipping time step $t$ in the dependency chain. In contrast, whenever $\mathbf{Z}_t$ is close to 0, the new latent state $\mathbf{H}_t$ approaches the candidate latent state $\tilde{\mathbf{H}}_t$. These designs can help us cope with the vanishing gradient problem in RNNs and better capture dependencies for sequences with large time step distances.For instance,if the update gate has been close to 1for all the time steps of an entire subsequence,the old hidden state at the time step of its beginningwill be easily retained and passedto its end,regardless of the length of the subsequence.:numref:`fig_gru_3` illustrates the computational flow after the update gate is in action.:label:`fig_gru_3`In summary, GRUs have the following two distinguishing features:* Reset gates help capture short-term dependencies in sequences.* Update gates help capture long-term dependencies in sequences. Implementation from ScratchTo gain a better understanding of the GRU model, let us implement it from scratch. We begin by reading the time machine dataset that we used in :numref:`sec_rnn_scratch`. The code for reading the dataset is given below.
###Code
import torch
from torch import nn
from d2l import torch as d2l
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
###Output
_____no_output_____
###Markdown
(**Initializing Model Parameters**) The next step is to initialize the model parameters. We draw the weights from a Gaussian distribution with a standard deviation of 0.01 and set the bias to 0. The hyperparameter `num_hiddens` defines the number of hidden units. We instantiate all weights and biases relating to the update gate, the reset gate, the candidate hidden state, and the output layer.
###Code
def get_params(vocab_size, num_hiddens, device):
num_inputs = num_outputs = vocab_size
def normal(shape):
return torch.randn(size=shape, device=device)*0.01
def three():
return (normal((num_inputs, num_hiddens)),
normal((num_hiddens, num_hiddens)),
torch.zeros(num_hiddens, device=device))
W_xz, W_hz, b_z = three() # Update gate parameters
W_xr, W_hr, b_r = three() # Reset gate parameters
W_xh, W_hh, b_h = three() # Candidate hidden state parameters
# Output layer parameters
W_hq = normal((num_hiddens, num_outputs))
b_q = torch.zeros(num_outputs, device=device)
# Attach gradients
params = [W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q]
for param in params:
param.requires_grad_(True)
return params
###Output
_____no_output_____
###Markdown
Defining the ModelNow we will define [**the hidden state initialization function**] `init_gru_state`. Just like the `init_rnn_state` function defined in :numref:`sec_rnn_scratch`, this function returns a tensor with a shape (batch size, number of hidden units) whose values are all zeros.
###Code
def init_gru_state(batch_size, num_hiddens, device):
return (torch.zeros((batch_size, num_hiddens), device=device), )
###Output
_____no_output_____
###Markdown
Now we are ready to [**define the GRU model**].Its structure is the same as that of the basic RNN cell, except that the update equations are more complex.
###Code
def gru(inputs, state, params):
W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params
H, = state
outputs = []
for X in inputs:
Z = torch.sigmoid((X @ W_xz) + (H @ W_hz) + b_z)
R = torch.sigmoid((X @ W_xr) + (H @ W_hr) + b_r)
H_tilda = torch.tanh((X @ W_xh) + ((R * H) @ W_hh) + b_h)
H = Z * H + (1 - Z) * H_tilda
Y = H @ W_hq + b_q
outputs.append(Y)
return torch.cat(outputs, dim=0), (H,)
###Output
_____no_output_____
###Markdown
Training and Predicting [**Training**] and prediction work in exactly the same manner as in :numref:`sec_rnn_scratch`. After training, we print out the perplexity on the training set and the predicted sequence following the provided prefixes "time traveller" and "traveller", respectively.
###Code
vocab_size, num_hiddens, device = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1
model = d2l.RNNModelScratch(len(vocab), num_hiddens, device, get_params,
init_gru_state, gru)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
###Output
perplexity 1.1, 23227.8 tokens/sec on cuda:0
time traveller with a slight accession ofcheerfulness really thi
###Markdown
[**Concise Implementation**] In high-level APIs, we can directly instantiate a GRU model. This encapsulates all the configuration detail that we made explicit above. The code is significantly faster as it uses compiled operators rather than Python for many details that we spelled out before.
###Code
num_inputs = vocab_size
gru_layer = nn.GRU(num_inputs, num_hiddens)
model = d2l.RNNModel(gru_layer, len(vocab))
model = model.to(device)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
###Output
perplexity 1.0, 292351.9 tokens/sec on cuda:0
time travelleryou can show black is white by argument said filby
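###Markdown
With either trained model we can generate additional text from a new prefix using d2l's prediction helper (a sketch; assumes `predict_ch8` is exposed by this version of the `d2l` package, as in the scratch RNN chapter):
###Code
# Generate 50 new characters following the given prefix with the concise GRU model
d2l.predict_ch8('time traveller ', 50, model, vocab, device)
###Output
_____no_output_____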
|
exercises/E06Solutions_Analog-signals.ipynb | ###Markdown
E06 Analog signals This week's homework asks you to perform a fast Fourier transform (FFT) on a given signal. The aim is to find out which frequency components are contained in the noisy but stationary signal. The signal is stored in a numpy array saved in the file `analog-signal1.npy`. The array contains two rows: the first row is the time axis and the second row is the signal. You first need to download the file `analog-signal1.npy` from the moodle or the github repository. Next you are asked to perform the discrete fast Fourier transform (numpy function `numpy.fft.fft`) on the signal and determine which frequencies are contained in the signal. Note that the signal is stationary; in other words, the frequency content does not change over time and you can use the entire signal to compute the FFT. Adapt the 'Fast Fourier transform of a constructed signal' code from the in-class tutorial in order to implement this exercise. Here are the specific questions. 1. What is the sampling rate of the signal? In other words, at which frequency was the signal acquired? (Note that you first need to load the array from the `analog-signal1.npy` file by using the numpy `np.load()` function).
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = np.load('analog-signal1b.npy')
dt = np.diff(data[0])
print(dt)
print('The sampling rate of the signal is : ',1/dt[0],' Hz')
###Output
[0.0004 0.0004 0.0004 ... 0.0004 0.0004 0.0004]
The sampling rate of the signal is : 2500.0 Hz
###Markdown
2. Perform the fast Fourier transform (FFT, with numpy function `numpy.fft.fft`) on the signal. Determine through plotting the FFT of the signal (power on y-axis over frequency on x-axis) which frequencies are contained in the signal (determine the frequencies through visual inspection of the plot). Note that you need the sampling rate obtained above to get the correct scaling of the frequency axis.
###Code
# performing FFT on signal ######################
fs = 1./dt[0]
nyquist = fs/2.
fSpaceSignal = np.fft.fft(data[1])
fBase = np.linspace(0, nyquist, int(np.floor(len(data[1])/2)) + 1)  # cast to int to avoid the float-length deprecation
halfTheSignal = fSpaceSignal[:len(fBase)]
complexConjugate = np.conj(halfTheSignal)
powe = halfTheSignal*complexConjugate
# plotting results ##############################
fig = plt.figure(figsize=(10,10))
ax0 = fig.add_subplot(2,1,1)
ax0.plot(data[0],data[1])
ax2 = fig.add_subplot(2,1,2)
ax2.plot(fBase,powe/max(powe))
ax2.set_xlim([0,15])
plt.show()
###Output
/home/mgraupe/.virtualenvs/locorungs/lib/python3.6/site-packages/ipykernel_launcher.py:5: DeprecationWarning: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.
"""
|
GEE/jrc_gsw/NormalizedDifference.ipynb | ###Markdown
Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print("Installing geemap ...")
subprocess.check_call(["python", "-m", "pip", "install", "geemap"])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40, -100], zoom=4)
Map
###Output
_____no_output_____
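###Markdown
Optionally, a different basemap can be added on top of the default one; 'HYBRID' below is one of the basemap names bundled with geemap (see the basemaps module linked above for the full list).
###Code
Map.add_basemap('HYBRID')
###Output
_____no_output_____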
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# NormalizedDifference example.
#
# Compute Normalized Difference Vegetation Index over MOD09GA product.
# NDVI = (NIR - RED) / (NIR + RED), where
# RED is sur_refl_b01, 620-670nm
# NIR is sur_refl_b02, 841-876nm
# Load a MODIS image.
img = ee.Image('MODIS/006/MOD09GA/2012_03_09')
# Use the normalizedDifference(A, B) to compute (A - B) / (A + B)
ndvi = img.normalizedDifference(['sur_refl_b02', 'sur_refl_b01'])
# Make a palette: a list of hex strings.
palette = ['FFFFFF', 'CE7E45', 'DF923D', 'F1B555', 'FCD163', '99B718',
'74A901', '66A000', '529400', '3E8601', '207401', '056201',
'004C00', '023B01', '012E01', '011D01', '011301']
# Center the map
Map.setCenter(-94.84497, 39.01918, 8)
# Display the input image and the NDVI derived from it.
Map.addLayer(img.select(['sur_refl_b01', 'sur_refl_b04', 'sur_refl_b03']),
{'gain': [0.1, 0.1, 0.1]}, 'MODIS bands 1/4/3')
Map.addLayer(ndvi, {'min': 0, 'max': 1, 'palette': palette}, 'NDVI')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
Exercise 2 - CNN.ipynb | ###Markdown
Brief Intro Without a doubt, Convolutional Neural Networks (CNNs) have been the most successful models for computer vision tasks. The convolution operation has a mechanism similar to the way that human eyes work in visual perception. When humans explore the visual world, the eyes behave in a pattern of alternating fixations and saccades. The saccadic eye movements bring the visual target to the fovea abruptly (about 20 ms), and the target information is then processed during eye fixations, when the eyes stay relatively stable (e.g. 200 ms). We are usually unaware of the eye movements, as they are programmed and executed automatically by cognitive brain processes. Our brain then aggregates all this local information into a global decision, based on previous knowledge/experience. The visual field is not explored as a whole. Only a selective set of local positions is viewed, and that turns out to be enough to serve the perception needs of our daily lives (it means images are extremely redundant for the recognition/classification purpose; duplicated and irrelevant information should be effectively discarded to gain efficiency, e.g. through weighting and local operators (local operators can also be considered as weighting, by penalizing the weights of positions outside the receptive field to 0); images are too rich and also too costly). From this perspective, CNN is very much a bio-inspired methodology: local-to-global, like divide-and-conquer (e.g. to sort a list, you can sort the sublists (local) then merge to obtain the global solution). It acts like an information selector and aggregator, grabbing what is needed and throwing away the rest. OK, too much talking; let's stop brainstorming and code it. Let the code speak.
###Code
## load libs
%matplotlib inline
import time
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_mldata
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
from sklearn.preprocessing import OneHotEncoder
###Output
_____no_output_____
###Markdown
Load MNIST
###Code
mnist = fetch_mldata('mnist original', data_home = 'datasets/')
X, y = mnist['data'], mnist['target']
X.shape, y.shape ## shape check
###Output
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\deprecation.py:77: DeprecationWarning: Function fetch_mldata is deprecated; fetch_mldata was deprecated in version 0.20 and will be removed in version 0.22
warnings.warn(msg, category=DeprecationWarning)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\deprecation.py:77: DeprecationWarning: Function mldata_filename is deprecated; mldata_filename was deprecated in version 0.20 and will be removed in version 0.22
warnings.warn(msg, category=DeprecationWarning)
###Markdown
Preprocess MNIST
###Code
X = X.T
X = X / 255.0
Y = OneHotEncoder().fit_transform(y.reshape(-1,1).astype('int32')).toarray().T
X.shape, Y.shape
###Output
_____no_output_____
###Markdown
Make Train/Test Splits
###Code
m = 60000
X_train, X_test = X[:,:m].reshape(1,28,28,-1), X[:,m:].reshape(1,28,28,-1)
Y_train, Y_test = Y[:,:m], Y[:,m:]
X_train.shape, Y_train.shape, X_test.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
Shuffle Train set
###Code
np.random.seed(54321)
shuffle = np.random.permutation(m)
X_train, Y_train = X_train[:,:,:,shuffle], Y_train[:,shuffle]
X_train.shape, Y_train.shape
###Output
_____no_output_____
###Markdown
Visual check
###Code
idx = 134
plt.imshow(X_train[:,:,:,idx].squeeze(), cmap = 'binary_r')
plt.title(np.argmax(Y_train[:,idx]))
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Define network
###Code
## input layer
input_depth = 1
input_height = 28
input_width = 28
## convolution layer
conv_depth = 2
conv_height = 3
conv_width = 3
## trainable parameters connecting input & convolution layers
W1 = np.random.randn(conv_depth, input_depth, conv_height, conv_width)
b1 = np.zeros((conv_depth, 1))
## densely connected (fc) layer
fc_dims = 32
flatten_dims = conv_depth * (input_height - conv_height + 1) * (input_width - conv_width + 1)
## trainable parameters connecting convolution & dense layers
W2 = np.random.randn(fc_dims, flatten_dims)
b2 = np.zeros((fc_dims, 1))
## output layer
output_dims = 10
## trainable parameters connecting dense & output layers
W3 = np.random.randn(output_dims, fc_dims)
b3 = np.zeros((output_dims, 1))
###Output
_____no_output_____
###Markdown
Training CNN
###Code
## prepare inputs
Input = X_train.copy()
Target = Y_train.copy()
Input.shape, Target.shape
## initialize convolution output
conv_output_height = input_height - conv_height + 1
conv_output_width = input_width - conv_width + 1
conv_output = np.zeros((conv_depth, conv_output_height, conv_output_width, Input.shape[-1]))
for epoch in range(20):
#------------------------------------------------------------------FORWARD BLOCK
## feed forward: convolution operation
for f in range(conv_depth):
for r in range(conv_output_height):
for c in range(conv_output_width):
current_patch = Input[:, r : r + conv_height, c : c + conv_width]
current_filter = np.expand_dims(W1[f,:,:,:], axis = 3) ## to match shape for broadcasting
conv_output[f, r, c] = (current_patch * current_filter + b1[f]).reshape(-1, Input.shape[-1]).sum(axis = 0) ## reshape 2X faster
# conv_output[f, r, c] += (current_patch * current_filter + b1[f]).sum(axis = 0).sum(axis = 0).sum(axis = 0)
## feed forward: flatten the convolution output
conv_output_flatten = conv_output.reshape(-1, Input.shape[-1])
A1 = 1 / (1 + np.exp(-conv_output_flatten)) ## sigmoid
## feed forward: affine operation
Z2 = W2 @ A1 + b2
A2 = 1/(1 + np.exp(-Z2))
    ## feed forward: affine + softmax operation
Z3 = W3 @ A2 + b3
Z3 = Z3 - np.max(Z3, axis = 0)
A3 = np.exp(Z3)/np.exp(Z3).sum(axis = 0)
#------------------------------------------------------------------BACKWARD BLOCK
## backpropagation: softmax layer
dZ3 = A3 - Y_train
dW3 = dZ3 @ A2.T / Input.shape[-1]
db3 = dZ3.mean(axis = 1, keepdims = True)
## backpropagation: dense layer
dA2 = W3.T @ dZ3
dZ2 = dA2 * A2 * (1 - A2)
dW2 = dZ2 @ A1.T / Input.shape[-1]
db2 = dZ2.mean(axis = 1, keepdims = True)
## backpropagation: convolution layer
dA1 = W2.T @ dZ2
d_conv_flatten = dA1 * A1 * (1 - A1)
d_conv_matrix = d_conv_flatten.reshape(conv_output.shape)
## backpropagation: convolution layer --> weight
dW1 = np.zeros(W1.shape)
for in_c in range(Input.shape[0]):
for out_c in range(conv_output.shape[0]):
for r in range(conv_height):
for c in range(conv_width):
conv_input_patch = Input[in_c, r : r + conv_output_height, c : c + conv_output_width, :] ## conv input
conv_output_vals = d_conv_matrix[out_c] ## conv results
dW1[out_c, in_c, r, c] = np.sum(conv_input_patch * conv_output_vals)/Input.shape[-1]
## backpropagation: convolution layer --> bias
db1 = d_conv_matrix.sum(axis = 1).sum(axis = 1).mean(axis = 1, keepdims = True)
# equivalent
# db1 = np.zeros((b1.shape))
# for out_c in range(d_conv_matrix.shape[0]):
# db1[out_c] += d_conv_matrix[out_c].sum()/Input.shape[-1]
## backpropagation: convolution layer --> Input
dInput = np.zeros_like(Input)
for in_c in range(Input.shape[0]):
for out_c in range(conv_output.shape[0]):
current_filter = np.expand_dims(W1[out_c, in_c], axis = 2)
for r in range(conv_output_height):
for c in range(conv_output_width):
d_conv_val = d_conv_matrix[out_c, in_c, r, c]
dInput[in_c, r : r+conv_height, c : c + conv_width, :] += d_conv_val * current_filter
#------------------------------------------------------------------ UPDATE PARAMETERS
## update model
lr = 1
W3 -= dW3 * lr
W2 -= dW2 * lr
W1 -= dW1 * lr
b3 -= db3 * lr
b2 -= db2 * lr
b1 -= db1 * lr
## compute loss
Loss = -np.mean(Y_train * np.log(A3), axis = 1)
print('epoch:', epoch, ', loss:', Loss.sum())
###Output
epoch: 0 , loss: [1.28140686 0.20822068 0.2338827 0.46949416 0.60362163 0.17409459
0.35062963 1.02211311 1.32937071 0.04804767]
epoch: 1 , loss: [0.38369461 0.06354422 0.77907208 0.20330568 0.38941393 0.18225178
0.46335044 0.22145518 0.68721877 1.18083523]
epoch: 2 , loss: [0.34118477 0.98553669 0.58973446 0.09616901 0.11638067 0.16605581
0.25018618 0.26129972 0.40057913 0.76627435]
epoch: 3 , loss: [0.13881807 0.66915701 0.38335981 0.55071828 0.3880977 0.23016209
0.24817241 0.08562567 0.22102089 0.52127314]
epoch: 4 , loss: [0.55372952 0.38240065 0.14696803 0.54680683 0.17156462 0.17519124
0.13035284 0.70305301 0.20451274 0.36796369]
epoch: 5 , loss: [0.34151146 0.16762261 0.26628403 0.32332559 0.20978761 0.18224469
0.3076453 0.46574045 0.17083644 0.18504516]
epoch: 6 , loss: [0.21586048 0.32785749 0.19406651 0.20420862 0.21118384 0.21637319
0.20095451 0.29251818 0.2560406 0.24154615]
epoch: 7 , loss: [0.23242883 0.22555531 0.25360815 0.25162868 0.23244112 0.21426809
0.24437044 0.2203629 0.21580981 0.22066201]
epoch: 8 , loss: [0.22568329 0.2612961 0.22039339 0.22442451 0.22329748 0.2172761
0.22149135 0.24540321 0.23199796 0.23268553]
epoch: 9 , loss: [0.23015672 0.23679541 0.23447607 0.23837881 0.22862081 0.21729422
0.23231189 0.23134363 0.22498903 0.22764145]
epoch: 10 , loss: [0.22758303 0.25200254 0.22677901 0.23018803 0.22573063 0.21695097
0.22648256 0.23839846 0.22770123 0.22964755]
epoch: 11 , loss: [0.22917324 0.24169142 0.23076751 0.23478301 0.22740242 0.21740039
0.22956132 0.23468208 0.22678078 0.22901753]
epoch: 12 , loss: [0.22819881 0.24834426 0.22857501 0.23208175 0.22642583 0.21705683
0.22785971 0.23653148 0.2270004 0.2291191 ]
epoch: 13 , loss: [0.2288109 0.24388698 0.22978328 0.23365703 0.22701621 0.21730915
0.22881446 0.2356316 0.22704205 0.2292138 ]
epoch: 14 , loss: [0.22842485 0.24680646 0.22909761 0.23271827 0.22665451 0.21713782
0.22826391 0.23604126 0.22693296 0.22907742]
epoch: 15 , loss: [0.22867101 0.24486233 0.22949335 0.23328011 0.22688052 0.21725326
0.2285881 0.23587407 0.22704354 0.22920425]
epoch: 16 , loss: [0.22851321 0.24614424 0.22926003 0.23293913 0.2267379 0.21717704
0.22839324 0.23592658 0.22695367 0.22910357]
epoch: 17 , loss: [0.22861485 0.24529276 0.22940014 0.23314757 0.22682893 0.21722734
0.22851247 0.23592458 0.2270208 0.22917828]
epoch: 18 , loss: [0.22854897 0.24585603 0.22931462 0.23301865 0.22677045 0.21719432
0.22843837 0.23590762 0.22697309 0.22912515]
epoch: 19 , loss: [0.22859166 0.24548223 0.22936765 0.23309886 0.22680828 0.21721603
0.22848499 0.23592912 0.22700614 0.22916205]
###Markdown
Test
###Code
## initialize convolution output
Input = X_test.copy()
conv_output_height = input_height - conv_height + 1
conv_output_width = input_width - conv_width + 1
conv_output = np.zeros((conv_depth, conv_output_height, conv_output_width, Input.shape[-1]))
## feed forward: convolution operation
for f in range(conv_depth):
for r in range(conv_output_height):
for c in range(conv_output_width):
current_patch = Input[:, r : r + conv_height, c : c + conv_width]
current_filter = np.expand_dims(W1[f,:,:,:], axis = 3) ## to match shape for broadcasting
conv_output[f, r, c] = (current_patch * current_filter + b1[f]).reshape(-1, Input.shape[-1]).sum(axis = 0) ## reshape 2X faster
# conv_output[f, r, c] += (current_patch * current_filter + b1[f]).sum(axis = 0).sum(axis = 0).sum(axis = 0)
## feed forward: flatten the convolution output
conv_output_flatten = conv_output.reshape(-1, Input.shape[-1])
A1 = 1 / (1 + np.exp(-conv_output_flatten)) ## sigmoid
## feed forward: affine operation
Z2 = W2 @ A1 + b2
A2 = 1/(1 + np.exp(-Z2))
## feed forward: affine + softmax operation
Z3 = W3 @ A2 + b3
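## subtracting the per-sample maximum before exponentiating keeps the softmax numerically stable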
Z3 = Z3 - np.max(Z3, axis = 0)
A3 = np.exp(Z3)/np.exp(Z3).sum(axis = 0)
preds = np.argmax(A3, axis = 0)
truth = np.argmax(Y_test, axis = 0)
###Output
_____no_output_____
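###Markdown
Before the full report below, overall test accuracy can be checked directly from preds and truth with NumPy. This is only a quick sketch and assumes the test cell above has been run.
###Code
## quick sanity check: fraction of test samples whose predicted class matches the true class
test_accuracy = np.mean(preds == truth)
print('test accuracy:', test_accuracy)
###Output
_____no_output_____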
###Markdown
Results Report
###Code
print(accuracy_score(truth, preds))
print(confusion_matrix(truth, preds))
print(classification_report(truth, preds))
## something wrong inside, to be corrected
###Output
_____no_output_____ |
Session_13_logistic_regression.ipynb | ###Markdown
Logistic regressionI think it is simplest to first demonstrate the type of problem we are interested in. So here let's compare displacement (disp) for US-made and non-US-made cars.Let's first assign mtcars to a new dataframe and then identify the US (1) vs. non_US (0) cars under the column label "origin."
###Code
cars <- mtcars
non_US <- c(1:3, 8:14, 18:21, 26:28, 30:32)
origin <- rep(1, nrow(cars))
origin[non_US] <- 0
cars$origin <- origin
head(cars)
###Output
_____no_output_____
###Markdown
Now that we have an origin column in our dataframe, let's plot it compared to engine displacement (disp):
###Code
par(pin=c(3,3))
plot(cars$disp, cars$origin)
###Output
_____no_output_____
###Markdown
You can see from the figure that there is an obvious bias to the left for non_US (origin = 0) cars versus US (origin = 1). So clearly, there is a relationship between origin and displacement---US cars have larger engine displacements.The question that we may then ask is, "Given the engine displacement of a car, is that car likely to be made in the US or not?" Logistic regression is one approach to dealing with this classification problem. Definition: Binary logistic regressionFor a binary categorical (this would be the dependent variable) versus a continuous (independent) variable we can describe this simple system with the following equations: $$\begin{align}t =& \beta_0 + \beta_1 x \\Pr(y|x) = p(x) =& \frac{1}{1 + e^{-t}}\end{align}$$The function t is called the linear predictor function, which includes an intercept and a coefficient for the independent variable x. The probability of a positive classification y given x is Pr(y|x). The logit function is the natural logarithm of the odds of the outcome:$$\begin{align}logit (p) =& \ln \left(\frac{p(x)}{1-p(x)}\right) = \beta_0 + \beta_1 x \end{align}$$To determine if the value of a given variable should be classified as being of y or not y, the following holds:$$y =\left\{ \begin{array}{ll} 1 \mbox{ if: } \beta_0 + \beta_1 x + \epsilon > 0\\ 0 \mbox{ else:} \end{array} \right. $$Unlike linear regression, we cannot fit this equation to our data with a simple least-squares fit. Logistic regression requires the use of a different approach called maximum likelihood estimation (MLE). The computational overhead of finding the best fitting solution means that this approach only became viable with the ubiquity of computers.Let's plot this function and play with the parameters.
###Code
#Definition of the logistic function
x <- seq(-20,20, length=100)
par(mfrow=c(2,2))
beta0 <- 0
beta1 <- 1
pr <- 1/(1+exp(- (beta1*x + beta0) ))
plot(x,pr, xlim=c(-20,20), type="l", xlab='Cont. var.', main='beta(0,1)')
beta0 <- -5
beta1 <- 1
t <- beta0 + beta1*x
pr <- 1/(1+exp(- (t) ))
plot(x,pr, xlim=c(-20,20), type="l", xlab='Cont. var.', main='beta(-5,1)')
beta0 <- 0
beta1 <- 0.2
t <- beta0 + beta1*x
pr <- 1/(1+exp(- (t) ))
plot(x,pr, xlim=c(-20,20), type="l", xlab='Cont. var.', main='beta(0,0.2)')
beta0 <- -5
beta1 <- 0.2
t <- beta0 + beta1*x
pr <- 1/(1+exp(- (t) ))
plot(x,pr, xlim=c(-20,20), type="l", xlab='Cont. var.', main='beta(-5,0.2)')
###Output
_____no_output_____
###Markdown
Note that since the linear predictor function (t) is inside an exponential term, the offset resulting from the \beta_0 term is scaled non-linearly with the other terms.Going back to our data for car origin versus engine displacement, we can use a generalized linear model to calculate the \beta parameters:
###Code
model <- glm(origin ~ disp, family=binomial(link='logit'), data=cars)
summary(model)
###Output
_____no_output_____
###Markdown
From the model coefficients, we can plot the probability density function against the real values:
###Code
x <- seq(50,500, length=100)
beta0 <- -8.73
beta1 <- 0.0315
t <- beta0 + beta1*x
pr <- 1/(1+exp(- (t) ))
par(pin=c(3,3))
plot(x,pr, type="l", xlab="disp")
points(cars$disp,cars$origin,col="red")
###Output
_____no_output_____
###Markdown
Odds ratioIn order to interpret the coefficients given by the summary of the generalized linear model, we need to extract them from the logit equation. Here we get an expression that tells us "for every one-unit increase in x, the odds multiply by e^{\beta_1}":$$\begin{align}OR =& e^{\beta_1}\end{align}$$
###Code
or = exp(model$coefficient[2]) #We can extract the \beta_1 from the model directly
print(or)
###Output
disp
1.032036
###Markdown
We can then interpret this as, "for every 1 cu-in increase in engine displacement, the odds that the car is of US origin increase by about 3%." Multiple logistic regressionThere is no restriction on the number of terms present in the linear predictor function that we used in establishing the model. In the general case we may write the function as:$$\begin{align}t =& \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots \\=& \beta_0 + \sum_{i=1}^m \beta_i x_i\end{align}$$Note:- It is generally recommended that all predictors used in a model have at least ten corresponding events, although this is a theoretical argument. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-016-0267-3 Classification based on a predictorThe logistic regression model may then be used as a binary classifier for a general set of predictor variables. So given a set of data to "train" on, we can calculate the coefficients and use the predictor function to determine if a case should be classified as yes/no.We can do this for the existing data by splitting it into a training and query set and calculating the model using only the training set. We can then use the predictor function to decide which way a case from the query set should be classified.
###Code
#Take a random sample of the data
set.seed(42) #This fixes the random number generator so that you get the same "random" sample every time
trainingIndex <- sample(1:nrow(cars), 0.8*nrow(cars))
trainingData <- cars[trainingIndex,]
queryData <- cars[-trainingIndex,]
print(queryData)
logmod <- glm(origin ~ disp, family=binomial(link='logit'), data=trainingData) #Create the model
print(logmod) #\beta_0 + \beta_1 x
dispP <- predict(logmod, queryData)
print(dispP)
###Output
mpg cyl disp hp drat wt qsec vs am gear carb origin
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 0
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 1
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 0
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 0
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 0
Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1 0
Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4 1
Call: glm(formula = origin ~ disp, family = binomial(link = "logit"),
data = trainingData)
Coefficients:
(Intercept) disp
-10.43294 0.03948
Degrees of Freedom: 24 Total (i.e. Null); 23 Residual
Null Deviance: 33.65
Residual Deviance: 5.976 AIC: 9.976
Datsun 710 Valiant Merc 450SE Merc 450SL Merc 450SLC
-6.169078 -1.549898 0.455695 0.455695 0.455695
Fiat X1-9 Ford Pantera L
-7.314003 3.424604
###Markdown
From the model of the training data, we can use the \beta coefficients in the linear predictor function (t) to determine whether a classification of US or non-US is more likely. Recall that if the t value is greater than zero, then the probability of positive classification is greater than 0.5. Reciprocally, if the t value is less than zero, we can negatively classify the input.Finally, we can compare the results of origin with predicted origin in a table.
###Code
dispP[dispP>0] <- 1
dispP[dispP<0] <- 0
queryData['predicted'] <- dispP
print(queryData)
xtabs( ~ predicted + origin, queryData) #Contingency table or confusion matrix
###Output
mpg cyl disp hp drat wt qsec vs am gear carb origin
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 0
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 1
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 0
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 0
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 0
Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1 0
Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4 1
predicted
Datsun 710 0
Valiant 0
Merc 450SE 1
Merc 450SL 1
Merc 450SLC 1
Fiat X1-9 0
Ford Pantera L 1
###Markdown
Contingency tablesWe can look at this table in terms of a "contingency table" or in the general classification case "confusion matrix." Here the "truth" of car origin is represented by the columns, where zero is non-US and unity is US. The "predicted" classification is presented by row.This resembles the false positive (type I) false negative (type II) table for determining the validity of a statistical hypothesis.For a perfect model the diagonal would constitute the sum of all cases and the off-diagonal would sum to zero. In our case the classifier is fairly poor: Of the two US cars in the sample, one was classified as US (true positive) and one as non-US (false negative). Of the five non-US cars, two were classified as non-US (true negative) and three as US (false positive). Precision & recallPrecision is the ratio of true positives to true positives plus false positives. Recall is the ratio of true positives to true positives plus false negatives.$$\begin{align}P = \frac{tp}{tp + fp} \\R = \frac{tp}{tp + fn}\end{align}$$In the case of predicting origin based on engine displacement alone:$$\begin{align}P = \frac{1}{1 + 3}= 1/4 \\R = \frac{1}{1 + 1} = 1/2\end{align}$$ F_1 scoreThe F_1 score is a measure of a test's accuracy that considers both precision and recall.$$\begin{align}F_1 =& 2 \frac{P R}{P+R}\end{align}$$Calculated for the prior case:$$\begin{align}F_1 =& 2 \frac{1/4 \cdot 1/2}{1/4 + 1/2} \\=& 1/3\end{align}$$ Expanding the modelLet's train a new model that includes the car's weight to determine if our contingency table scores improve.
###Code
logmod <- glm(origin ~ disp + wt, family=binomial(link='logit'), data=trainingData) #Create the model
print(logmod) #\beta_0 + \beta_1 x
dispP <- predict(logmod, queryData)
dispP[dispP>0] <- 1
dispP[dispP<0] <- 0
queryData['predicted'] <- dispP
print(queryData)
xtabs( ~ predicted + origin, queryData)
###Output
Call: glm(formula = origin ~ disp + wt, family = binomial(link = "logit"),
data = trainingData)
Coefficients:
(Intercept) disp wt
-3.75075 0.06992 -4.49680
Degrees of Freedom: 24 Total (i.e. Null); 22 Residual
Null Deviance: 33.65
Residual Deviance: 5.259 AIC: 11.26
mpg cyl disp hp drat wt qsec vs am gear carb origin
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 0
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 1
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 0
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 0
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 0
Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1 0
Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4 1
predicted
Datsun 710 0
Valiant 0
Merc 450SE 0
Merc 450SL 0
Merc 450SLC 0
Fiat X1-9 0
Ford Pantera L 1
|
tikz_jupyter_starter_demo.ipynb | ###Markdown
Tikz Diagrams in Jupyter Setup Instructions Install Dependencies ```!apt-get -qq install -y texlive-xetex!apt-get -qq install -y imagemagick``` Install Python Interface Libraries```!pip install git+git://github.com/mkrphys/ipython-tikzmagic.git``` Suppress Code OutputPut the magic ```%%capture``` at the start of the code cell to discard the output printouts. Import Modulestandard way```import tikz```or via Python Magic```%load_ext tikzmagic``` Demo Install and import
###Code
%%capture
# dependency installs
!apt-get -qq install -y texlive-xetex
!apt-get -qq install -y imagemagick
# package install
!pip install git+git://github.com/mkrphys/ipython-tikzmagic.git
# module import
%load_ext tikzmagic
###Output
_____no_output_____
###Markdown
Draw Something
###Code
%%tikz
\draw[thick] (0cm,0cm) circle(1cm);
\draw[thick] (-0.35,0.35) ellipse (0.1cm and 0.1cm); %draw left eye
\draw[thick] (0.35,0.35) ellipse (0.1cm and 0.1cm); %draw right eye
\draw[thick] plot [smooth,tension=1.5] coordinates{(-0.5,-0.5) (0,-0.8) (0.5,-0.5)};%draw smile
###Output
_____no_output_____ |
Data Science/Python/.ipynb_checkpoints/03_Practice+Exercise+1-checkpoint.ipynb | ###Markdown
Practice Exercise 1 You are provided with 2 lists that contain the data of an ecommerce website. The first list contains the data for the number of items sold for a particular product and the second list contains the price of the product sold. As a part of this exercise, solve the questions that are provided below.
###Code
number = [8, 9, 9, 1, 6, 9, 5, 7, 3, 9, 7, 3, 4, 8, 3, 5, 8, 4, 8, 7, 5, 7, 3, 6, 1, 2, 7, 4, 7, 7, 8, 4, 3, 4, 2, 2, 2, 7, 3, 5, 6, 1, 1, 3, 2, 1, 1, 7, 7, 1, 4, 4, 5, 6, 1, 2, 7, 4, 5, 8, 1, 4, 8, 6, 2, 4, 3, 7, 3, 6, 2, 3, 3, 3, 2, 4, 6, 8, 9, 3, 9, 3, 1, 8, 6, 6, 3, 3, 9, 4, 6, 4, 9, 6, 7, 1, 2, 8, 7, 8, 1, 4]
price = [195, 225, 150, 150, 90, 60, 75, 255, 270, 225, 135, 195, 30, 15, 210, 105, 15, 30, 180, 60, 165, 60, 45, 225, 180, 90, 30, 210, 150, 15, 270, 60, 210, 180, 60, 225, 150, 150, 120, 195, 75, 240, 60, 45, 30, 180, 240, 285, 135, 165, 180, 240, 60, 105, 165, 240, 120, 45, 120, 165, 285, 225, 90, 105, 225, 45, 45, 45, 75, 180, 90, 240, 30, 30, 60, 135, 180, 15, 255, 180, 270, 135, 105, 135, 210, 180, 135, 195, 225, 75, 225, 15, 240, 60, 15, 180, 255, 90, 15, 150, 230, 150]
###Output
_____no_output_____
###Markdown
How many different products are sold by the company in total?- 99- 100- 101- 102
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
How many items were sold in total?- 460- 490- 500- 520
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
What is the average price of the products sold by the ecommerce company?- 139- 151- 142- 128
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
What is the price of the costliest item sold?- 225- 310- 280- 285
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
What is the total revenue of the company? [Revenue = Price\*Quantity]- 67100- 53900- 45300- 71200
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
Demand for the 20th product in the list is more than the 50th product. [True/False]- True- False- Can't be calculated
###Code
# Type your code here
###Output
_____no_output_____
###Markdown
How many products fall under the category of expensive goods? An expensive good is that good whose price is more than the average price of the products sold by the company.- 48- 50- 52- 54
###Code
# Type your code here
###Output
_____no_output_____ |
Code/Data Preprocessing/Data Resizing.ipynb | ###Markdown
Importing Libraries
###Code
# importing libraries
import os
import sys
import os.path
import cv2
from threading import Thread
###Output
_____no_output_____
###Markdown
Global Variables for Setting I/O Directories
###Code
#############################################Global Variables##################################################
General_Directory = 'D:\\Food Datasets\\' #location of full resolution Datasets
General_Out_Directory_144p = 'D:\\Food Datasets\\144p_16-9\\' #16:9 images are downscaled to 144p (144x256)
General_Out_Directory_224p = 'D:\\Food Datasets\\224p_1-1_4-3\\' #only 1:1 and 4:3 images are downscaled to 224p (224x224)
General_Out_Directory_224x224 = 'D:\\Food Datasets\\224p_ALL\\' #all images are downscaled to 224p (224x224)
General_Out_Directory_224x224x224 = 'D:\\Food Datasets\\224p_Mixed\\' #Mixed images are downscaled to 224p (224x224)
General_Out_Directory_224x224x224x224 = 'D:\\Food Datasets\\224p_4-3_16-9\\' #only 16:9 and 4:3 images are downscaled to 224p (224x224)
General_Out_Directory_240p = 'D:\\Food Datasets\\240p_4-3_16-9\\' #16:9 and 4:3 images are downscaled to 240p (240x360)
General_Out_Directory_360p = 'D:\\Food Datasets\\360p_ALL\\' #all images are downscaled to 360p (360x480)
General_Out_Directory_480p = 'D:\\Food Datasets\\480p_4-3\\' #4:3 images are downscaled to 360p (360x480)
General_Out_Directory_640p = 'D:\\Food Datasets\\640p_4-3_16-9\\' #16:9 and 4:3 images are downscaled to 360p (360x640)
General_Out_Directory_640x480 = 'D:\\Food Datasets\\640x480_Mixed\\' #Mixed images are downscaled to 480p(480x640)
##############################################################################################################
Very_Tiny_General_Directory = General_Directory + 'Full Training Dataset\\'
Tiny_General_Directory = General_Directory + 'Large Training Dataset\\'
Small_General_Directory = General_Directory + 'Small Training Dataset\\'
Balanced_General_Directory = General_Directory + 'Balanced Training Dataset\\'
Large_General_Directory = General_Directory + 'Large Training Dataset\\'
Full_General_Directory = General_Directory + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_144p = General_Out_Directory_144p + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_144p = General_Out_Directory_144p + 'Tiny Training Dataset\\'
Small_Out_Directory_144p = General_Out_Directory_144p + 'Small Training Dataset\\'
Balanced_Out_Directory_144p = General_Out_Directory_144p + 'Balanced Training Dataset\\'
Large_Out_Directory_144p = General_Out_Directory_144p + 'Large Training Dataset\\'
Full_Out_Directory_144p = General_Out_Directory_144p + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_224p = General_Out_Directory_224p + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_224p = General_Out_Directory_224p + 'Tiny Training Dataset\\'
Small_Out_Directory_224p = General_Out_Directory_224p + 'Small Training Dataset\\'
Balanced_Out_Directory_224p = General_Out_Directory_224p + 'Balanced Training Dataset\\'
Large_Out_Directory_224p = General_Out_Directory_224p + 'Large Training Dataset\\'
Full_Out_Directory_224p = General_Out_Directory_224p + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_224x224 = General_Out_Directory_224x224 + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_224x224 = General_Out_Directory_224x224 + 'Tiny Training Dataset\\'
Small_Out_Directory_224x224 = General_Out_Directory_224x224 + 'Small Training Dataset\\'
Balanced_Out_Directory_224x224 = General_Out_Directory_224x224 + 'Balanced Training Dataset\\'
Large_Out_Directory_224x224 = General_Out_Directory_224x224 + 'Large Training Dataset\\'
Full_Out_Directory_224x224 = General_Out_Directory_224x224 + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_224x224x224 = General_Out_Directory_224x224x224 + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_224x224x224 = General_Out_Directory_224x224x224 + 'Tiny Training Dataset\\'
Small_Out_Directory_224x224x224 = General_Out_Directory_224x224x224 + 'Small Training Dataset\\'
Balanced_Out_Directory_224x224x224 = General_Out_Directory_224x224x224 + 'Balanced Training Dataset\\'
Large_Out_Directory_224x224x224 = General_Out_Directory_224x224x224 + 'Large Training Dataset\\'
Full_Out_Directory_224x224x224 = General_Out_Directory_224x224x224 + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_224x224x224x224 = General_Out_Directory_224x224x224x224 + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_224x224x224x224 = General_Out_Directory_224x224x224x224 + 'Tiny Training Dataset\\'
Small_Out_Directory_224x224x224x224 = General_Out_Directory_224x224x224x224 + 'Small Training Dataset\\'
Balanced_Out_Directory_224x224x224x224 = General_Out_Directory_224x224x224x224 + 'Balanced Training Dataset\\'
Large_Out_Directory_224x224x224x224 = General_Out_Directory_224x224x224x224 + 'Large Training Dataset\\'
Full_Out_Directory_224x224x224x224 = General_Out_Directory_224x224x224x224 + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_240p = General_Out_Directory_240p + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_240p = General_Out_Directory_240p + 'Tiny Training Dataset\\'
Small_Out_Directory_240p = General_Out_Directory_240p + 'Small Training Dataset\\'
Balanced_Out_Directory_240p = General_Out_Directory_240p + 'Balanced Training Dataset\\'
Large_Out_Directory_240p = General_Out_Directory_240p + 'Large Training Dataset\\'
Full_Out_Directory_240p = General_Out_Directory_240p + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_360p = General_Out_Directory_360p + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_360p = General_Out_Directory_360p + 'Tiny Training Dataset\\'
Small_Out_Directory_360p = General_Out_Directory_360p + 'Small Training Dataset\\'
Balanced_Out_Directory_360p = General_Out_Directory_360p + 'Balanced Training Dataset\\'
Large_Out_Directory_360p = General_Out_Directory_360p + 'Large Training Dataset\\'
Full_Out_Directory_360p = General_Out_Directory_360p + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_480p = General_Out_Directory_480p + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_480p = General_Out_Directory_480p + 'Tiny Training Dataset\\'
Small_Out_Directory_480p = General_Out_Directory_480p + 'Small Training Dataset\\'
Balanced_Out_Directory_480p = General_Out_Directory_480p + 'Balanced Training Dataset\\'
Large_Out_Directory_480p = General_Out_Directory_480p + 'Large Training Dataset\\'
Full_Out_Directory_480p = General_Out_Directory_480p + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_640p = General_Out_Directory_640p + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_640p = General_Out_Directory_640p + 'Tiny Training Dataset\\'
Small_Out_Directory_640p = General_Out_Directory_640p + 'Small Training Dataset\\'
Balanced_Out_Directory_640p = General_Out_Directory_640p + 'Balanced Training Dataset\\'
Large_Out_Directory_640p = General_Out_Directory_640p + 'Large Training Dataset\\'
Full_Out_Directory_640p = General_Out_Directory_640p + 'Full Training Dataset\\'
Very_Tiny_Out_Directory_640x480 = General_Out_Directory_640x480 + 'Very Tiny Training Dataset\\'
Tiny_Out_Directory_640x480 = General_Out_Directory_640x480 + 'Tiny Training Dataset\\'
Small_Out_Directory_640x480 = General_Out_Directory_640x480 + 'Small Training Dataset\\'
Balanced_Out_Directory_640x480 = General_Out_Directory_640x480 + 'Balanced Training Dataset\\'
Large_Out_Directory_640x480 = General_Out_Directory_640x480 + 'Large Training Dataset\\'
Full_Out_Directory_640x480 = General_Out_Directory_640x480 + 'Full Training Dataset\\'
###########################################################################################################
VTFD_144p = Very_Tiny_Out_Directory_144p + 'Food\\'
TFD_144p = Tiny_Out_Directory_144p + 'Food\\'
SFD_144p = Small_Out_Directory_144p + 'Food\\'
BFD_144p = Balanced_Out_Directory_144p + 'Food\\'
LFD_144p = Large_Out_Directory_144p + 'Food\\'
FFD_144p = Full_Out_Directory_144p + 'Food\\'
VTNFD_144p = Very_Tiny_Out_Directory_144p + 'No Food\\'
TNFD_144p = Tiny_Out_Directory_144p + 'No Food\\'
SNFD_144p = Small_Out_Directory_144p + 'No Food\\'
BNFD_144p = Balanced_Out_Directory_144p + 'No Food\\'
LNFD_144p = Large_Out_Directory_144p + 'No Food\\'
FNFD_144p = Full_Out_Directory_144p + 'No Food\\'
###########################################################################################################
VTFD_224p = Very_Tiny_Out_Directory_224p + 'Food\\'
TFD_224p = Tiny_Out_Directory_224p + 'Food\\'
SFD_224p = Small_Out_Directory_224p + 'Food\\'
BFD_224p = Balanced_Out_Directory_224p + 'Food\\'
LFD_224p = Large_Out_Directory_224p + 'Food\\'
FFD_224p = Full_Out_Directory_224p + 'Food\\'
VTNFD_224p = Very_Tiny_Out_Directory_224p + 'No Food\\'
TNFD_224p = Tiny_Out_Directory_224p + 'No Food\\'
SNFD_224p = Small_Out_Directory_224p + 'No Food\\'
BNFD_224p = Balanced_Out_Directory_224p + 'No Food\\'
LNFD_224p = Large_Out_Directory_224p + 'No Food\\'
FNFD_224p = Full_Out_Directory_224p + 'No Food\\'
###########################################################################################################
VTFD_240p = Very_Tiny_Out_Directory_240p + 'Food\\'
TFD_240p = Tiny_Out_Directory_240p + 'Food\\'
SFD_240p = Small_Out_Directory_240p + 'Food\\'
BFD_240p = Balanced_Out_Directory_240p + 'Food\\'
LFD_240p = Large_Out_Directory_240p + 'Food\\'
FFD_240p = Full_Out_Directory_240p + 'Food\\'
VTNFD_240p = Very_Tiny_Out_Directory_240p + 'No Food\\'
TNFD_240p = Tiny_Out_Directory_240p + 'No Food\\'
SNFD_240p = Small_Out_Directory_240p + 'No Food\\'
BNFD_240p = Balanced_Out_Directory_240p + 'No Food\\'
LNFD_240p = Large_Out_Directory_240p + 'No Food\\'
FNFD_240p = Full_Out_Directory_240p + 'No Food\\'
###########################################################################################################
VTFD_360p = Very_Tiny_Out_Directory_360p + 'Food\\'
TFD_360p = Tiny_Out_Directory_360p + 'Food\\'
SFD_360p = Small_Out_Directory_360p + 'Food\\'
BFD_360p = Balanced_Out_Directory_360p + 'Food\\'
LFD_360p = Large_Out_Directory_360p + 'Food\\'
FFD_360p = Full_Out_Directory_360p + 'Food\\'
VTNFD_360p = Very_Tiny_Out_Directory_360p + 'No Food\\'
TNFD_360p = Tiny_Out_Directory_360p + 'No Food\\'
SNFD_360p = Small_Out_Directory_360p + 'No Food\\'
BNFD_360p = Balanced_Out_Directory_360p + 'No Food\\'
LNFD_360p = Large_Out_Directory_360p + 'No Food\\'
FNFD_360p = Full_Out_Directory_360p + 'No Food\\'
###########################################################################################################
VTFD_480p = Very_Tiny_Out_Directory_480p + 'Food\\'
TFD_480p = Tiny_Out_Directory_480p + 'Food\\'
SFD_480p = Small_Out_Directory_480p + 'Food\\'
BFD_480p = Balanced_Out_Directory_480p + 'Food\\'
LFD_480p = Large_Out_Directory_480p + 'Food\\'
FFD_480p = Full_Out_Directory_480p + 'Food\\'
VTNFD_480p = Very_Tiny_Out_Directory_480p + 'No Food\\'
TNFD_480p = Tiny_Out_Directory_480p + 'No Food\\'
SNFD_480p = Small_Out_Directory_480p + 'No Food\\'
BNFD_480p = Balanced_Out_Directory_480p + 'No Food\\'
LNFD_480p = Large_Out_Directory_480p + 'No Food\\'
FNFD_480p = Full_Out_Directory_480p + 'No Food\\'
###########################################################################################################
VTFD_640p = Very_Tiny_Out_Directory_640p + 'Food\\'
TFD_640p = Tiny_Out_Directory_640p + 'Food\\'
SFD_640p = Small_Out_Directory_640p + 'Food\\'
BFD_640p = Balanced_Out_Directory_640p + 'Food\\'
LFD_640p = Large_Out_Directory_640p + 'Food\\'
FFD_640p = Full_Out_Directory_640p + 'Food\\'
VTNFD_640p = Very_Tiny_Out_Directory_640p + 'No Food\\'
TNFD_640p = Tiny_Out_Directory_640p + 'No Food\\'
SNFD_640p = Small_Out_Directory_640p + 'No Food\\'
BNFD_640p = Balanced_Out_Directory_640p + 'No Food\\'
LNFD_640p = Large_Out_Directory_640p + 'No Food\\'
FNFD_640p = Full_Out_Directory_640p + 'No Food\\'
###########################################################################################################
VTFD_224x224 = Very_Tiny_Out_Directory_224x224 + 'Food\\'
TFD_224x224 = Tiny_Out_Directory_224x224 + 'Food\\'
SFD_224x224 = Small_Out_Directory_224x224 + 'Food\\'
BFD_224x224 = Balanced_Out_Directory_224x224 + 'Food\\'
LFD_224x224 = Large_Out_Directory_224x224 + 'Food\\'
FFD_224x224 = Full_Out_Directory_224x224 + 'Food\\'
VTNFD_224x224 = Very_Tiny_Out_Directory_224x224 + 'No Food\\'
TNFD_224x224 = Tiny_Out_Directory_224x224 + 'No Food\\'
SNFD_224x224 = Small_Out_Directory_224x224 + 'No Food\\'
BNFD_224x224 = Balanced_Out_Directory_224x224 + 'No Food\\'
LNFD_224x224 = Large_Out_Directory_224x224 + 'No Food\\'
FNFD_224x224 = Full_Out_Directory_224x224 + 'No Food\\'
###########################################################################################################
VTFD_224x224x224 = Very_Tiny_Out_Directory_224x224x224 + 'Food\\'
TFD_224x224x224 = Tiny_Out_Directory_224x224x224 + 'Food\\'
SFD_224x224x224 = Small_Out_Directory_224x224x224 + 'Food\\'
BFD_224x224x224 = Balanced_Out_Directory_224x224x224 + 'Food\\'
LFD_224x224x224 = Large_Out_Directory_224x224x224 + 'Food\\'
FFD_224x224x224 = Full_Out_Directory_224x224x224 + 'Food\\'
VTNFD_224x224x224 = Very_Tiny_Out_Directory_224x224x224 + 'No Food\\'
TNFD_224x224x224 = Tiny_Out_Directory_224x224x224 + 'No Food\\'
SNFD_224x224x224 = Small_Out_Directory_224x224x224 + 'No Food\\'
BNFD_224x224x224 = Balanced_Out_Directory_224x224x224 + 'No Food\\'
LNFD_224x224x224 = Large_Out_Directory_224x224x224 + 'No Food\\'
FNFD_224x224x224 = Full_Out_Directory_224x224x224 + 'No Food\\'
###########################################################################################################
VTFD_224x224x224x224 = Very_Tiny_Out_Directory_224x224x224x224 + 'Food\\'
TFD_224x224x224x224 = Tiny_Out_Directory_224x224x224x224 + 'Food\\'
SFD_224x224x224x224 = Small_Out_Directory_224x224x224x224 + 'Food\\'
BFD_224x224x224x224 = Balanced_Out_Directory_224x224x224x224 + 'Food\\'
LFD_224x224x224x224 = Large_Out_Directory_224x224x224x224 + 'Food\\'
FFD_224x224x224x224 = Full_Out_Directory_224x224x224x224 + 'Food\\'
VTNFD_224x224x224x224 = Very_Tiny_Out_Directory_224x224x224x224 + 'No Food\\'
TNFD_224x224x224x224 = Tiny_Out_Directory_224x224x224x224 + 'No Food\\'
SNFD_224x224x224x224 = Small_Out_Directory_224x224x224x224 + 'No Food\\'
BNFD_224x224x224x224 = Balanced_Out_Directory_224x224x224x224 + 'No Food\\'
LNFD_224x224x224x224 = Large_Out_Directory_224x224x224x224 + 'No Food\\'
FNFD_224x224x224x224 = Full_Out_Directory_224x224x224x224 + 'No Food\\'
###########################################################################################################
VTFD_640x480 = Very_Tiny_Out_Directory_640x480 + 'Food\\'
TFD_640x480 = Tiny_Out_Directory_640x480 + 'Food\\'
SFD_640x480 = Small_Out_Directory_640x480 + 'Food\\'
BFD_640x480 = Balanced_Out_Directory_640x480 + 'Food\\'
LFD_640x480 = Large_Out_Directory_640x480 + 'Food\\'
FFD_640x480 = Full_Out_Directory_640x480 + 'Food\\'
VTNFD_640x480 = Very_Tiny_Out_Directory_640x480 + 'No Food\\'
TNFD_640x480 = Tiny_Out_Directory_640x480 + 'No Food\\'
SNFD_640x480 = Small_Out_Directory_640x480 + 'No Food\\'
BNFD_640x480 = Balanced_Out_Directory_640x480 + 'No Food\\'
LNFD_640x480 = Large_Out_Directory_640x480 + 'No Food\\'
FNFD_640x480 = Full_Out_Directory_640x480 + 'No Food\\'
###########################################################################################################
VTFD = Very_Tiny_General_Directory + 'Food\\'
VTNFD = Very_Tiny_General_Directory + 'Non Food\\'
TFD = Tiny_General_Directory + 'Food\\'
TNFD = Tiny_General_Directory + 'Non Food\\'
SFD = Small_General_Directory + 'Food\\'
SNFD = Small_General_Directory + 'Non Food\\'
BFD = Balanced_General_Directory + 'Food\\'
BNFD = Balanced_General_Directory + 'Non Food\\'
LFD = Large_General_Directory + 'Food\\'
LNFD = Large_General_Directory + 'Non Food\\'
FFD = Full_General_Directory + 'Food\\'
FNFD =Full_General_Directory + 'Non Food\\'
########################################################################################################
#VTFD_G256p = VTFD + "G-256p\\"
VTFD_G360p = VTFD + "G-360p\\"
VTFD_G480p = VTFD + "G-480p\\"
VTFD_G640p = VTFD + "G-640p\\"
VTFD_G720p = VTFD + "G-720p\\"
VTFD_G1024p = VTFD + "G-1024p\\"
VTFD_G1080p = VTFD + "G-1080p\\"
#VTFD_GMedium = VTFD + "G-Medium\\"
#VTFD_GLarge = VTFD + "G-Large\\"
#VTNFD_G256p = VTNFD + "G-256p\\"
VTNFD_G360p = VTNFD + "G-360p\\"
VTNFD_G480p = VTNFD + "G-480p\\"
VTNFD_G640p = VTNFD + "G-640p\\"
VTNFD_G720p = VTNFD + "G-720p\\"
VTNFD_G1024p = VTNFD + "G-1024p\\"
VTNFD_G1080p = VTNFD + "G-1080p\\"
#VTNFD_GMedium = VTNFD + "G-Medium\\"
#VTNFD_GLarge = VTNFD + "G-Large\\"
#VTFD_BSmall = VTFD + "B-Small\\"
#VTFD_B640p = VTFD + "B-640p\\"
#VTFD_B1000p = VTFD + "B-1000p\\"
#VTFD_BMedium = VTFD + "B-Medium\\"
#VTFD_BLarge = VTFD + "B-Large\\"
#VTFD_B1280p = VTFD + "B-1280p\\"
#VTFD_B1920p = VTFD + "B-1920p\\"
#VTNFD_BSmall = VTNFD + "B-Small\\"
#VTNFD_B640p = VTNFD + "B-640p\\"
#VTNFD_B1000p = VTNFD + "B-1000p\\"
#VTNFD_BMedium = VTNFD + "B-Medium\\"
#VTNFD_BLarge = VTNFD + "B-Large\\"
#VTNFD_B1280p = VTNFD + "B-1280p\\"
#VTNFD_B1920p = VTNFD + "B-1920p\\"
############################################CONSTANTS###################################################
#TFD_G256p = TFD + "G-256p\\"
#TFD_G360p = TFD + "G-360p\\"
#TFD_G480p = TFD + "G-480p\\"
#TFD_G640p = TFD + "G-640p\\"
#TFD_G720p = TFD + "G-720p\\"
#TFD_G1024p = TFD + "G-1024p\\"
#TFD_G1080p = TFD + "G-1080p\\"
#TFD_GMedium = TFD + "G-Medium\\"
#TFD_GLarge = TFD + "G-Large\\"
#TNFD_G256p = TNFD + "G-256p\\"
#TNFD_G360p = TNFD + "G-360p\\"
#TNFD_G480p = TNFD + "G-480p\\"
#TNFD_G640p = TNFD + "G-640p\\"
#TNFD_G720p = TNFD + "G-720p\\"
#TNFD_G1024p = TNFD + "G-1024p\\"
#TNFD_G1080p = TNFD + "G-1080p\\"
#TNFD_GMedium = TNFD + "G-Medium\\"
#TNFD_GLarge = TNFD + "G-Large\\"
TFD_BSmall = TFD + "B-Small\\"
TFD_B640p = TFD + "B-640p\\"
TFD_B1000p = TFD + "B-1000p\\"
TFD_BMedium = TFD + "B-Medium\\"
TFD_BLarge = TFD + "B-Large\\"
TFD_B1280p = TFD + "B-1280p\\"
TFD_B1920p = TFD + "B-1920p\\"
TNFD_BSmall = TNFD + "B-Small\\"
TNFD_B640p = TNFD + "B-640p\\"
TNFD_B1000p = TNFD + "B-1000p\\"
TNFD_BMedium = TNFD + "B-Medium\\"
TNFD_BLarge = TNFD + "B-Large\\"
TNFD_B1280p = TNFD + "B-1280p\\"
TNFD_B1920p = TNFD + "B-1920p\\"
########################################################################################################
#SFD_G256p = SFD + "G-256p\\"
#SFD_G360p = SFD + "G-360p\\"
#SFD_G480p = SFD + "G-480p\\"
#SFD_G640p = SFD + "G-640p\\"
#SFD_G720p = SFD + "G-720p\\"
#SFD_G1024p = SFD + "G-1024p\\"
#SFD_G1080p = SFD + "G-1080p\\"
#SFD_GMedium = SFD + "G-Medium\\"
#SFD_GLarge = SFD + "G-Large\\"
#SNFD_G256p = SNFD + "G-256p\\"
#SNFD_G360p = SNFD + "G-360p\\"
#SNFD_G480p = SNFD + "G-480p\\"
#SNFD_G640p = SNFD + "G-640p\\"
#SNFD_G720p = SNFD + "G-720p\\"
#SNFD_G1024p = SNFD + "G-1024p\\"
#SNFD_G1080p = SNFD + "G-1080p\\"
#SNFD_GMedium = SNFD + "G-Medium\\"
#SNFD_GLarge = SNFD + "G-Large\\"
#SFD_BSmall = SFD + "B-Small\\"
SFD_B640p = SFD + "B-640p\\"
SFD_B1000p = SFD + "B-1000p\\"
SFD_BMedium = SFD + "B-Medium\\"
SFD_BLarge = SFD + "B-Large\\"
SFD_B1280p = SFD + "B-1280p\\"
SFD_B1920p = SFD + "B-1920p\\"
#SNFD_BSmall = SNFD + "B-Small\\"
SNFD_B640p = SNFD + "B-640p\\"
SNFD_B1000p = SNFD + "B-1000p\\"
SNFD_BMedium = SNFD + "B-Medium\\"
SNFD_BLarge = SNFD + "B-Large\\"
SNFD_B1280p = SNFD + "B-1280p\\"
SNFD_B1920p = SNFD + "B-1920p\\"
########################################################################################################
###Output
_____no_output_____
###Markdown
Datasets Clusters Based on Aspect Ratio
###Code
############################################CLUSTER DIRECTORIES###########################################
VTFCALL = []
VTNFCALL = []
VTFCALL.append(VTFD_G360p)
VTFCALL.append(VTFD_G480p)
VTFCALL.append(VTFD_G640p)
VTFCALL.append(VTFD_G720p)
VTFCALL.append(VTFD_G1024p)
VTFCALL.append(VTFD_G1080p)
#VTFCALL.append(VTFD_GLarge)
#VTFCALL.append(VTFD_GMedium)
#VTFCALL.append(VTFD_BLarge)
#VTFCALL.append(VTFD_BMedium)
#VTFCALL.append(VTFD_B1280p)
#VTFCALL.append(VTFD_B1920p)
#VTFCALL.append(VTFD_B1000p)
#VTFCALL.append(VTFD_B640p)
VTNFCALL.append(VTNFD_G360p)
VTNFCALL.append(VTNFD_G480p)
VTNFCALL.append(VTNFD_G640p)
VTNFCALL.append(VTNFD_G720p)
VTNFCALL.append(VTNFD_G1024p)
VTNFCALL.append(VTNFD_G1080p)
#VTNFCALL.append(VTNFD_GLarge)
#VTNFCALL.append(VTNFD_GMedium)
#VTNFCALL.append(VTNFD_BLarge)
#VTNFCALL.append(VTNFD_BMedium)
#VTNFCALL.append(VTNFD_B1280p)
#VTNFCALL.append(VTNFD_B1920p)
#VTNFCALL.append(VTNFD_B1000p)
#VTNFCALL.append(VTNFD_B640p)
##########################################################################################################
TFCALL = []
TNFCALL = []
TFCALL.append(TFD_BSmall)
TFCALL.append(TFD_BLarge)
TFCALL.append(TFD_BMedium)
TFCALL.append(TFD_B1280p)
TFCALL.append(TFD_B1920p)
TFCALL.append(TFD_B1000p)
TFCALL.append(TFD_B640p)
TNFCALL.append(TNFD_BSmall)
TNFCALL.append(TNFD_BLarge)
TNFCALL.append(TNFD_BMedium)
TNFCALL.append(TNFD_B1280p)
TNFCALL.append(TNFD_B1920p)
TNFCALL.append(TNFD_B1000p)
TNFCALL.append(TNFD_B640p)
#############################################################################################################
SFCALL = []
SNFCALL = []
SFCALL.append(SFD_BLarge)
SFCALL.append(SFD_BMedium)
SFCALL.append(SFD_B1280p)
SFCALL.append(SFD_B1920p)
SFCALL.append(SFD_B1000p)
SFCALL.append(SFD_B640p)
SNFCALL.append(SNFD_BLarge)
SNFCALL.append(SNFD_BMedium)
SNFCALL.append(SNFD_B1280p)
SNFCALL.append(SNFD_B1920p)
SNFCALL.append(SNFD_B1000p)
SNFCALL.append(SNFD_B640p)
#############################################################################################################
###Output
_____no_output_____
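###Markdown
The resizing cells below also use per-aspect-ratio cluster lists (VTFC11, VTFC43, VTFC169, VTFCMIX and their No-Food and Tiny counterparts such as VTNFC43 or TFC169) that are not defined in the cell above. They are presumably assembled from the source folders in the same way as the *ALL lists. The cell below is only a minimal sketch of that idea: the folder-to-aspect-ratio groupings shown here are assumptions for illustration, not the original definitions.
###Code
## HYPOTHETICAL sketch: the folder-to-aspect-ratio assignment below is an assumption,
## not the notebook's original grouping; the real VTFC*/VTNFC*/TFC* lists are defined elsewhere.
VTFC11 = [VTFD_G360p] # e.g. folders holding 1:1 images
VTFC43 = [VTFD_G480p, VTFD_G1024p] # e.g. folders holding 4:3 images
VTFC169 = [VTFD_G720p, VTFD_G1080p] # e.g. folders holding 16:9 images
VTFCMIX = VTFC11 + VTFC43 + VTFC169 # mixed-aspect cluster
## the No-Food lists (VTNFC11, VTNFC43, ...) would be built the same way from the VTNFD_G* folders
###Output
_____no_output_____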
###Markdown
Functions
###Code
def resize144p(cluster, destination):
counter = 0
skipped = 0
dim = (256, 144)
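    ## cv2.resize takes dim as (width, height), so this produces 256x144 (16:9) outputs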
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster)):
directory = cluster[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize224p(cluster1, cluster2, destination):
counter = 0
skipped = 0
dim = (224, 224)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster1)):
directory = cluster1[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster1)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
for i in range (len(cluster2)):
directory = cluster2[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster2)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize224x224(cluster, destination):
counter = 0
skipped = 0
dim = (224, 224)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster)):
directory = cluster[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize224x224x224(cluster, destination):
counter = 0
skipped = 0
dim = (224, 224)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster)):
directory = cluster[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize224x224x224x224(cluster1, cluster2, destination):
counter = 0
skipped = 0
dim = (224, 224)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster1)):
directory = cluster1[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster1)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
for i in range (len(cluster2)):
directory = cluster2[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster2)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize240p(cluster1, cluster2, destination):
counter = 0
skipped = 0
dim = (360, 240)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster1)):
directory = cluster1[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster1)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
for i in range (len(cluster2)):
directory = cluster2[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster2)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize360p(cluster, destination):
counter = 0
skipped = 0
dim = (480, 360)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster)):
directory = cluster[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize480p(cluster, destination):
counter = 0
skipped = 0
dim = (480, 360)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster)):
directory = cluster[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize640p(cluster1, cluster2, destination):
counter = 0
skipped = 0
dim = (640, 360)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster1)):
directory = cluster1[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster1)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
for i in range (len(cluster2)):
directory = cluster2[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster2)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
def resize640x480(cluster, destination):
counter = 0
skipped = 0
dim = (640, 480)
if not os.path.exists(destination):
os.makedirs(destination)
for i in range (len(cluster)):
directory = cluster[i]
print("Processing Directory %d of %d..." %(i+1, len(cluster)))
print("Processing Directory: %s" % (directory))
for filename in os.listdir(directory):
image_path = os.path.join(directory, filename)
img = cv2.imread('%s' % (image_path))
if img is not None:
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imwrite(destination + '%s.png' %(counter),resized)
counter += 1
else:
skipped += 1
if(counter % 100 == 0):
print('%d images have been resized so far...' %(counter))
print('%d images were resized. %d images were skipped' %(counter, skipped))
###Output
_____no_output_____
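###Markdown
The nine resize*() functions above differ only in their target dimensions and in how many cluster lists they accept. A single parameterised helper could replace them; the cell below is a minimal sketch under the assumption that the behaviour should stay the same (cv2.INTER_AREA interpolation, PNG output, sequential file numbering). The name resize_images is not part of the original notebook.
###Code
def resize_images(clusters, destination, dim):
    ## generic version of the resize*() functions above
    ## dim is (width, height), which is the order cv2.resize expects
    counter = 0
    skipped = 0
    if not os.path.exists(destination):
        os.makedirs(destination)
    for cluster in clusters:
        for i, directory in enumerate(cluster):
            print("Processing Directory %d of %d..." % (i + 1, len(cluster)))
            print("Processing Directory: %s" % (directory))
            for filename in os.listdir(directory):
                image_path = os.path.join(directory, filename)
                img = cv2.imread(image_path)
                if img is not None:
                    resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
                    cv2.imwrite(destination + '%s.png' % (counter), resized)
                    counter += 1
                else:
                    skipped += 1
                if (counter % 100 == 0):
                    print('%d images have been resized so far...' % (counter))
    print('%d images were resized. %d images were skipped' % (counter, skipped))
## example usage, equivalent to resize360p(VTFCALL, VTFD_360p):
## resize_images([VTFCALL], VTFD_360p, (480, 360))
###Output
_____no_output_____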
###Markdown
Very Tiny Dataset Resizing 144p: (256x144)
###Code
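## run Food and No-Food resizing in parallel threads; the work here is mostly disk I/O and OpenCV calls, which release the GIL, so the two threads can genuinely overlap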
VT1 = Thread(target = resize144p, args = (VTFC169, VTFD_144p,))
VT2 = Thread(target = resize144p, args = (VTNFC169, VTNFD_144p,))
VT1.start()
VT2.start()
VT1.join()
VT2.join()
###Output
_____no_output_____
###Markdown
224p, 1:1 and 4:3: (224x224)
###Code
VT3 = Thread(target = resize224p, args = (VTFC11, VTFC43, VTFD_224p,))
VT4 = Thread(target = resize224p, args = (VTNFC11, VTNFC43, VTNFD_224p,))
VT3.start()
VT4.start()
VT3.join()
VT4.join()
###Output
_____no_output_____
###Markdown
224p, all aspect ratios: (224x224)
###Code
VT5 = Thread(target = resize224x224, args = (VTFCALL, BFD_224x224,))
VT6 = Thread(target = resize224x224, args = (VTNFCALL, BNFD_224x224,))
VT5.start()
VT6.start()
VT5.join()
VT6.join()
###Output
Processing Directory 1 of 6...
Processing Directory: D:\Food Datasets\Full Training Dataset\Food\G-360p\
Processing Directory 1 of 6...
Processing Directory: D:\Food Datasets\Full Training Dataset\Non Food\G-360p\
100 images have been resized so far...
100 images have been resized so far...
200 images have been resized so far...
200 images have been resized so far...
300 images have been resized so far...
300 images have been resized so far...
400 images have been resized so far...
400 images have been resized so far...
500 images have been resized so far...
500 images have been resized so far...
600 images have been resized so far...
600 images have been resized so far...
700 images have been resized so far...
700 images have been resized so far...
800 images have been resized so far...
800 images have been resized so far...
900 images have been resized so far...
900 images have been resized so far...
1000 images have been resized so far...
1000 images have been resized so far...
1100 images have been resized so far...
1100 images have been resized so far...
1200 images have been resized so far...
1200 images have been resized so far...
1300 images have been resized so far...
1300 images have been resized so far...
1400 images have been resized so far...
1400 images have been resized so far...
1500 images have been resized so far...
1500 images have been resized so far...
1600 images have been resized so far...
1600 images have been resized so far...
1700 images have been resized so far...
1700 images have been resized so far...
1800 images have been resized so far...
1800 images have been resized so far...
1900 images have been resized so far...
1900 images have been resized so far...
2000 images have been resized so far...
2000 images have been resized so far...
2100 images have been resized so far...
2100 images have been resized so far...
2200 images have been resized so far...
2200 images have been resized so far...
2300 images have been resized so far...
2300 images have been resized so far...
2400 images have been resized so far...
2400 images have been resized so far...
2500 images have been resized so far...
2500 images have been resized so far...
2600 images have been resized so far...
2600 images have been resized so far...
2700 images have been resized so far...
2700 images have been resized so far...
2800 images have been resized so far...
2800 images have been resized so far...
2900 images have been resized so far...
2900 images have been resized so far...
3000 images have been resized so far...
3000 images have been resized so far...
3100 images have been resized so far...
3200 images have been resized so far...
3100 images have been resized so far...
3300 images have been resized so far...
3200 images have been resized so far...
3400 images have been resized so far...
3300 images have been resized so far...
3500 images have been resized so far...
3400 images have been resized so far...
3600 images have been resized so far...
3500 images have been resized so far...
3700 images have been resized so far...
3600 images have been resized so far...
3800 images have been resized so far...
3700 images have been resized so far...
3900 images have been resized so far...
3800 images have been resized so far...
4000 images have been resized so far...
3900 images have been resized so far...
4100 images have been resized so far...
4000 images have been resized so far...
4200 images have been resized so far...
4100 images have been resized so far...
4300 images have been resized so far...
4200 images have been resized so far...
4400 images have been resized so far...
4300 images have been resized so far...
4500 images have been resized so far...
4400 images have been resized so far...
4600 images have been resized so far...
4500 images have been resized so far...
4700 images have been resized so far...
4600 images have been resized so far...
4800 images have been resized so far...
4700 images have been resized so far...
4900 images have been resized so far...
4800 images have been resized so far...
5000 images have been resized so far...
4900 images have been resized so far...
5100 images have been resized so far...
5000 images have been resized so far...
5200 images have been resized so far...
5100 images have been resized so far...
5300 images have been resized so far...
5200 images have been resized so far...
5400 images have been resized so far...
5300 images have been resized so far...
5400 images have been resized so far...
5500 images have been resized so far...
5500 images have been resized so far...
5600 images have been resized so far...
5600 images have been resized so far...
5700 images have been resized so far...
5800 images have been resized so far...
5700 images have been resized so far...
5800 images have been resized so far...
5900 images have been resized so far...
6000 images have been resized so far...
5900 images have been resized so far...
6000 images have been resized so far...
6100 images have been resized so far...
6200 images have been resized so far...
6100 images have been resized so far...
6300 images have been resized so far...
6200 images have been resized so far...
6400 images have been resized so far...
6300 images have been resized so far...
6500 images have been resized so far...
6400 images have been resized so far...
6600 images have been resized so far...
6500 images have been resized so far...
6700 images have been resized so far...
6600 images have been resized so far...
6800 images have been resized so far...
6700 images have been resized so far...
6800 images have been resized so far...
6900 images have been resized so far...
6900 images have been resized so far...
7000 images have been resized so far...
7100 images have been resized so far...
7000 images have been resized so far...
7200 images have been resized so far...
7100 images have been resized so far...
7300 images have been resized so far...
7200 images have been resized so far...
7400 images have been resized so far...
7300 images have been resized so far...
7500 images have been resized so far...
7400 images have been resized so far...
7600 images have been resized so far...
7500 images have been resized so far...
7700 images have been resized so far...
7600 images have been resized so far...
7800 images have been resized so far...
7700 images have been resized so far...
7900 images have been resized so far...
7800 images have been resized so far...
8000 images have been resized so far...
7900 images have been resized so far...
8100 images have been resized so far...
8000 images have been resized so far...
8200 images have been resized so far...
8100 images have been resized so far...
8300 images have been resized so far...
Processing Directory 2 of 6...
Processing Directory: D:\Food Datasets\Full Training Dataset\Food\G-480p\
8200 images have been resized so far...
8400 images have been resized so far...
8500 images have been resized so far...
8300 images have been resized so far...
8600 images have been resized so far...
8400 images have been resized so far...
8700 images have been resized so far...
8500 images have been resized so far...
8800 images have been resized so far...
8900 images have been resized so far...
8600 images have been resized so far...
9000 images have been resized so far...
8700 images have been resized so far...
9100 images have been resized so far...
9200 images have been resized so far...
8800 images have been resized so far...
9300 images have been resized so far...
8900 images have been resized so far...
9400 images have been resized so far...
9000 images have been resized so far...
9500 images have been resized so far...
9600 images have been resized so far...
Processing Directory 2 of 6...
Processing Directory: D:\Food Datasets\Full Training Dataset\Non Food\G-480p\
9100 images have been resized so far...
9700 images have been resized so far...
9200 images have been resized so far...
9800 images have been resized so far...
9300 images have been resized so far...
9900 images have been resized so far...
9400 images have been resized so far...
10000 images have been resized so far...
9500 images have been resized so far...
###Markdown
224p, mixed: (224x224)
###Code
VT7 = Thread(target = resize224x224x224, args = (VTFCMIX, VTFD_224x224x224,))
VT8 = Thread(target = resize224x224x224, args = (VTNFCMIX, VTNFD_224x224x224,))
VT7.start()
VT8.start()
VT7.join()
VT8.join()
###Output
_____no_output_____
###Markdown
224p, 4:3 and 16:9: (224x224)
###Code
VT9 = Thread(target = resize224x224x224x224, args = (VTFC43, VTFC169, VTFD_224x224x224x224,))
VT10 = Thread(target = resize224x224x224x224, args = (VTNFC43, VTNFC169, VTNFD_224x224x224x224,))
VT9.start()
VT10.start()
VT9.join()
VT10.join()
###Output
_____no_output_____
###Markdown
240p: (360x240)
###Code
VT11 = Thread(target = resize240p, args = (VTFC43, VTFC169, VTFD_240p,))
VT12 = Thread(target = resize240p, args = (VTNFC43, VTNFC169, VTNFD_240p,))
VT11.start()
VT12.start()
VT11.join()
VT12.join()
###Output
_____no_output_____
###Markdown
360p, all aspect ratios: (480x360)
###Code
VT13 = Thread(target = resize360p, args = (VTFCALL, VTFD_360p,))
VT14 = Thread(target = resize360p, args = (VTNFCALL, VTNFD_360p,))
VT13.start()
VT14.start()
VT13.join()
VT14.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
VT15 = Thread(target = resize480p, args = (VTFC43, VTFD_480p,))
VT16 = Thread(target = resize480p, args = (VTNFC43, VTNFD_480p,))
VT15.start()
VT16.start()
VT15.join()
VT16.join()
###Output
_____no_output_____
###Markdown
360p: (640x360)
###Code
VT17 = Thread(target = resize640p, args = (VTFC43, VTFC169, VTFD_640p,))
VT18 = Thread(target = resize640p, args = (VTNFC43, VTNFC169, VTNFD_640p,))
VT17.start()
VT18.start()
VT17.join()
VT18.join()
###Output
_____no_output_____
###Markdown
480p: (640x480)
###Code
VT19 = Thread(target = resize640x480, args = (VTFCMIX, VTFD_640x480,))
VT20 = Thread(target = resize640x480, args = (VTNFCMIX, VTNFD_640x480,))
VT19.start()
VT20.start()
VT19.join()
VT20.join()
###Output
_____no_output_____
###Markdown
Tiny Dataset Resizing 144p: (240x144)
###Code
T1 = Thread(target = resize144p, args = (TFC169, TFD_144p,))
T2 = Thread(target = resize144p, args = (TNFC169, TNFD_144p,))
T1.start()
T2.start()
T1.join()
T2.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
T3 = Thread(target = resize224p, args = (TFC11, TFC43, TFD_224p,))
T4 = Thread(target = resize224p, args = (TNFC11, TNFC43, TNFD_224p,))
T3.start()
T4.start()
T3.join()
T4.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
T5 = Thread(target = resize224x224, args = (TFCALL, TFD_224x224,))
T6 = Thread(target = resize224x224, args = (TNFCALL, TNFD_224x224,))
T5.start()
T6.start()
T5.join()
T6.join()
###Output
Processing Directory 1 of 7...Processing Directory 1 of 7...
Processing Directory: D:\Food Datasets\Large Training Dataset\Non Food\B-Small\
Processing Directory: D:\Food Datasets\Large Training Dataset\Food\B-Small\
100 images have been resized so far...
100 images have been resized so far...
200 images have been resized so far...
200 images have been resized so far...
300 images have been resized so far...
300 images have been resized so far...
400 images have been resized so far...
400 images have been resized so far...
500 images have been resized so far...
500 images have been resized so far...
600 images have been resized so far...
600 images have been resized so far...
700 images have been resized so far...
700 images have been resized so far...
800 images have been resized so far...
800 images have been resized so far...
900 images have been resized so far...
900 images have been resized so far...
1000 images have been resized so far...
1000 images have been resized so far...
1100 images have been resized so far...
1100 images have been resized so far...
1200 images have been resized so far...
1200 images have been resized so far...
1300 images have been resized so far...
1300 images have been resized so far...
1400 images have been resized so far...
1400 images have been resized so far...
1500 images have been resized so far...
1500 images have been resized so far...
1600 images have been resized so far...
1600 images have been resized so far...
1700 images have been resized so far...
1700 images have been resized so far...
1800 images have been resized so far...
1800 images have been resized so far...
1900 images have been resized so far...
1900 images have been resized so far...
2000 images have been resized so far...
2000 images have been resized so far...
2100 images have been resized so far...
2100 images have been resized so far...
2200 images have been resized so far...
2200 images have been resized so far...
2300 images have been resized so far...
2300 images have been resized so far...
2400 images have been resized so far...
2400 images have been resized so far...
2500 images have been resized so far...
2500 images have been resized so far...
2600 images have been resized so far...
2600 images have been resized so far...
2700 images have been resized so far...
2700 images have been resized so far...
2800 images have been resized so far...
2800 images have been resized so far...
2900 images have been resized so far...
2900 images have been resized so far...
3000 images have been resized so far...
3100 images have been resized so far...
3200 images have been resized so far...
3000 images have been resized so far...
3300 images have been resized so far...
3100 images have been resized so far...
3400 images have been resized so far...
3200 images have been resized so far...
3500 images have been resized so far...
3300 images have been resized so far...
3600 images have been resized so far...
3400 images have been resized so far...
3700 images have been resized so far...
3500 images have been resized so far...
3800 images have been resized so far...
3600 images have been resized so far...
3900 images have been resized so far...
3700 images have been resized so far...
4000 images have been resized so far...
3800 images have been resized so far...
4100 images have been resized so far...
4200 images have been resized so far...
3900 images have been resized so far...
4300 images have been resized so far...
4000 images have been resized so far...
4400 images have been resized so far...
4100 images have been resized so far...
4500 images have been resized so far...
4200 images have been resized so far...
4600 images have been resized so far...
4300 images have been resized so far...
4700 images have been resized so far...
4400 images have been resized so far...
4800 images have been resized so far...
4500 images have been resized so far...
4900 images have been resized so far...
5000 images have been resized so far...
4600 images have been resized so far...
5100 images have been resized so far...
4700 images have been resized so far...
5200 images have been resized so far...
4800 images have been resized so far...
5300 images have been resized so far...
4900 images have been resized so far...
5400 images have been resized so far...
5000 images have been resized so far...
5500 images have been resized so far...
5100 images have been resized so far...
5600 images have been resized so far...
5200 images have been resized so far...
5700 images have been resized so far...
5300 images have been resized so far...
5800 images have been resized so far...
5900 images have been resized so far...
5400 images have been resized so far...
6000 images have been resized so far...
5500 images have been resized so far...
5600 images have been resized so far...
6100 images have been resized so far...
5700 images have been resized so far...
5800 images have been resized so far...
6200 images have been resized so far...
5900 images have been resized so far...
6000 images have been resized so far...
6100 images have been resized so far...
6300 images have been resized so far...
Processing Directory 2 of 7...
Processing Directory: D:\Food Datasets\Large Training Dataset\Food\B-Large\
6400 images have been resized so far...
6200 images have been resized so far...
6500 images have been resized so far...
6600 images have been resized so far...
6700 images have been resized so far...
6800 images have been resized so far...
6900 images have been resized so far...
7000 images have been resized so far...
6300 images have been resized so far...
7100 images have been resized so far...
7200 images have been resized so far...
7300 images have been resized so far...
7400 images have been resized so far...
7500 images have been resized so far...
7600 images have been resized so far...
7700 images have been resized so far...
6400 images have been resized so far...
7800 images have been resized so far...
7900 images have been resized so far...
8000 images have been resized so far...
8100 images have been resized so far...
8200 images have been resized so far...
8300 images have been resized so far...
8400 images have been resized so far...
6500 images have been resized so far...
8500 images have been resized so far...
8600 images have been resized so far...
8700 images have been resized so far...
8800 images have been resized so far...
8900 images have been resized so far...
9000 images have been resized so far...
6600 images have been resized so far...
9100 images have been resized so far...
9200 images have been resized so far...
9300 images have been resized so far...
9400 images have been resized so far...
9500 images have been resized so far...
6700 images have been resized so far...
9600 images have been resized so far...
9700 images have been resized so far...
9800 images have been resized so far...
6800 images have been resized so far...
9900 images have been resized so far...
10000 images have been resized so far...
10100 images have been resized so far...
10200 images have been resized so far...
10300 images have been resized so far...
6900 images have been resized so far...
10400 images have been resized so far...
10500 images have been resized so far...
10600 images have been resized so far...
10700 images have been resized so far...
7000 images have been resized so far...
10800 images have been resized so far...
10900 images have been resized so far...
11000 images have been resized so far...
7100 images have been resized so far...
11100 images have been resized so far...
11200 images have been resized so far...
11300 images have been resized so far...
11400 images have been resized so far...
11500 images have been resized so far...
7200 images have been resized so far...
11600 images have been resized so far...
11700 images have been resized so far...
11800 images have been resized so far...
11900 images have been resized so far...
12000 images have been resized so far...
7300 images have been resized so far...
12100 images have been resized so far...
12200 images have been resized so far...
12300 images have been resized so far...
12400 images have been resized so far...
###Markdown
224p: (224x224)
###Code
T7 = Thread(target = resize224x224x224, args = (TFCMIX, TFD_224x224x224,))
T8 = Thread(target = resize224x224x224, args = (TNFCMIX, TNFD_224x224x224,))
T7.start()
T8.start()
T7.join()
T8.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
T9 = Thread(target = resize224x224x224x224, args = (TFC43, TFC169, TFD_224x224x224x224,))
T10 = Thread(target = resize224x224x224x224, args = (TNFC43, TNFC169, TNFD_224x224x224x224,))
T9.start()
T10.start()
T9.join()
T10.join()
###Output
_____no_output_____
###Markdown
240p: (360x240)
###Code
T11 = Thread(target = resize240p, args = (TFC43, TFC169, TFD_240p,))
T12 = Thread(target = resize240p, args = (TNFC43, TNFC169, TNFD_240p,))
T11.start()
T12.start()
T11.join()
T12.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
T13 = Thread(target = resize360p, args = (TFCALL, TFD_360p,))
T14 = Thread(target = resize360p, args = (TNFCALL, TNFD_360p,))
T13.start()
T14.start()
T13.join()
T14.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
T15 = Thread(target = resize480p, args = (TFC43, TFD_480p,))
T16 = Thread(target = resize480p, args = (TNFC43, TNFD_480p,))
T15.start()
T16.start()
T15.join()
T16.join()
###Output
_____no_output_____
###Markdown
360p: (640x360)
###Code
T17 = Thread(target = resize640p, args = (TFC43, TFC169, TFD_640p,))
T18 = Thread(target = resize640p, args = (TNFC43, TNFC169, TNFD_640p,))
T17.start()
T18.start()
T17.join()
T18.join()
###Output
_____no_output_____
###Markdown
480p: (640x480)
###Code
T19 = Thread(target = resize640x480, args = (TFCMIX, TFD_640x480,))
T20 = Thread(target = resize640x480, args = (TNFCMIX, TNFD_640x480,))
T19.start()
T20.start()
T19.join()
T20.join()
###Output
_____no_output_____
###Markdown
Small Dataset Resizing 144p: (240x144)
###Code
S1 = Thread(target = resize144p, args = (SFC169, SFD_144p,))
S2 = Thread(target = resize144p, args = (SNFC169, SNFD_144p,))
S1.start()
S2.start()
S1.join()
S2.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
S3 = Thread(target = resize224p, args = (SFC11, SFC43, SFD_224p,))
S4 = Thread(target = resize224p, args = (SNFC11, SNFC43, SNFD_224p,))
S3.start()
S4.start()
S3.join()
S4.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
S5 = Thread(target = resize224x224, args = (SFCALL, SFD_224x224,))
S6 = Thread(target = resize224x224, args = (SNFCALL, SNFD_224x224,))
S5.start()
S6.start()
S5.join()
S6.join()
###Output
Processing Directory 1 of 6...
Processing Directory: D:\Food Datasets\Small Training Dataset\Non Food\B-Large\
Processing Directory 1 of 6...
Processing Directory: D:\Food Datasets\Small Training Dataset\Food\B-Large\
100 images have been resized so far...
100 images have been resized so far...
200 images have been resized so far...
200 images have been resized so far...
300 images have been resized so far...
400 images have been resized so far...
300 images have been resized so far...
500 images have been resized so far...
400 images have been resized so far...
600 images have been resized so far...
500 images have been resized so far...
700 images have been resized so far...
600 images have been resized so far...
800 images have been resized so far...
700 images have been resized so far...
900 images have been resized so far...
800 images have been resized so far...
1000 images have been resized so far...
900 images have been resized so far...
1100 images have been resized so far...
1000 images have been resized so far...
1200 images have been resized so far...
1100 images have been resized so far...
1200 images have been resized so far...
1300 images have been resized so far...
1300 images have been resized so far...
1400 images have been resized so far...
1400 images have been resized so far...
1500 images have been resized so far...
1500 images have been resized so far...
1600 images have been resized so far...
1600 images have been resized so far...
1700 images have been resized so far...
1700 images have been resized so far...
1800 images have been resized so far...
1800 images have been resized so far...
1900 images have been resized so far...
1900 images have been resized so far...
2000 images have been resized so far...
2000 images have been resized so far...
2100 images have been resized so far...
2100 images have been resized so far...
2200 images have been resized so far...
2200 images have been resized so far...
2300 images have been resized so far...
2300 images have been resized so far...
2400 images have been resized so far...
2400 images have been resized so far...
2500 images have been resized so far...
2500 images have been resized so far...
2600 images have been resized so far...
2600 images have been resized so far...
2700 images have been resized so far...
2700 images have been resized so far...
2800 images have been resized so far...
2900 images have been resized so far...
2800 images have been resized so far...
3000 images have been resized so far...
2900 images have been resized so far...
3100 images have been resized so far...
3000 images have been resized so far...
3200 images have been resized so far...
3300 images have been resized so far...
3100 images have been resized so far...
3400 images have been resized so far...
3200 images have been resized so far...
3500 images have been resized so far...
3600 images have been resized so far...
3300 images have been resized so far...
3400 images have been resized so far...
3700 images have been resized so far...
3800 images have been resized so far...
3500 images have been resized so far...
3900 images have been resized so far...
4000 images have been resized so far...
3600 images have been resized so far...
4100 images have been resized so far...
3700 images have been resized so far...
4200 images have been resized so far...
4300 images have been resized so far...
3800 images have been resized so far...
4400 images have been resized so far...
3900 images have been resized so far...
4500 images have been resized so far...
4000 images have been resized so far...
4600 images have been resized so far...
4100 images have been resized so far...
4700 images have been resized so far...
4200 images have been resized so far...
4800 images have been resized so far...
4900 images have been resized so far...
4300 images have been resized so far...
5000 images have been resized so far...
5100 images have been resized so far...
4400 images have been resized so far...
5200 images have been resized so far...
4500 images have been resized so far...
5300 images have been resized so far...
5400 images have been resized so far...
4600 images have been resized so far...
5500 images have been resized so far...
4700 images have been resized so far...
5600 images have been resized so far...
5700 images have been resized so far...
4800 images have been resized so far...
5800 images have been resized so far...
4900 images have been resized so far...
5900 images have been resized so far...
6000 images have been resized so far...
5000 images have been resized so far...
6100 images have been resized so far...
5100 images have been resized so far...
6200 images have been resized so far...
6300 images have been resized so far...
5200 images have been resized so far...
6400 images have been resized so far...
5300 images have been resized so far...
6500 images have been resized so far...
6600 images have been resized so far...
5400 images have been resized so far...
6700 images have been resized so far...
5500 images have been resized so far...
6800 images have been resized so far...
6900 images have been resized so far...
5600 images have been resized so far...
7000 images have been resized so far...
7100 images have been resized so far...
5700 images have been resized so far...
7200 images have been resized so far...
5800 images have been resized so far...
7300 images have been resized so far...
7400 images have been resized so far...
5900 images have been resized so far...
7500 images have been resized so far...
6000 images have been resized so far...
7600 images have been resized so far...
6100 images have been resized so far...
Processing Directory 2 of 6...
Processing Directory: D:\Food Datasets\Small Training Dataset\Food\B-Medium\
7700 images have been resized so far...
7800 images have been resized so far...
6200 images have been resized so far...
7900 images have been resized so far...
8000 images have been resized so far...
8100 images have been resized so far...
8200 images have been resized so far...
8300 images have been resized so far...
6300 images have been resized so far...
8400 images have been resized so far...
8500 images have been resized so far...
8600 images have been resized so far...
8700 images have been resized so far...
6400 images have been resized so far...
8800 images have been resized so far...
8900 images have been resized so far...
9000 images have been resized so far...
6500 images have been resized so far...
9100 images have been resized so far...
9200 images have been resized so far...
9300 images have been resized so far...
9400 images have been resized so far...
9500 images have been resized so far...
6600 images have been resized so far...
9600 images have been resized so far...
9700 images have been resized so far...
9800 images have been resized so far...
6700 images have been resized so far...
9900 images have been resized so far...
10000 images have been resized so far...
10100 images have been resized so far...
10200 images have been resized so far...
6800 images have been resized so far...
10300 images have been resized so far...
10400 images have been resized so far...
10500 images have been resized so far...
10600 images have been resized so far...
6900 images have been resized so far...
10700 images have been resized so far...
10800 images have been resized so far...
10900 images have been resized so far...
7000 images have been resized so far...
11000 images have been resized so far...
11100 images have been resized so far...
11200 images have been resized so far...
11300 images have been resized so far...
11400 images have been resized so far...
7100 images have been resized so far...
11500 images have been resized so far...
11600 images have been resized so far...
11700 images have been resized so far...
11800 images have been resized so far...
11900 images have been resized so far...
12000 images have been resized so far...
7200 images have been resized so far...
12100 images have been resized so far...
12200 images have been resized so far...
12300 images have been resized so far...
12400 images have been resized so far...
12500 images have been resized so far...
###Markdown
224p: (224x224)
###Code
S7 = Thread(target = resize224x224x224, args = (SFCMIX, SFD_224x224x224,))
S8 = Thread(target = resize224x224x224, args = (SNFCMIX, SNFD_224x224x224,))
S7.start()
S8.start()
S7.join()
S8.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
S9 = Thread(target = resize224x224x224x224, args = (SFC43, SFC169, SFD_224x224x224x224,))
S10 = Thread(target = resize224x224x224x224, args = (SNFC43, SNFC169, SNFD_224x224x224x224,))
S9.start()
S10.start()
S9.join()
S10.join()
###Output
_____no_output_____
###Markdown
240p: (360x240)
###Code
S11 = Thread(target = resize240p, args = (SFC43, SFC169, SFD_240p,))
S12 = Thread(target = resize240p, args = (SNFC43, SNFC169, SNFD_240p,))
S11.start()
S12.start()
S11.join()
S12.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
S13 = Thread(target = resize360p, args = (SFCALL, SFD_360p,))
S14 = Thread(target = resize360p, args = (SNFCALL, SNFD_360p,))
S13.start()
S14.start()
S13.join()
S14.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
S15 = Thread(target = resize480p, args = (SFC43, SFD_480p,))
S16 = Thread(target = resize480p, args = (SNFC43, SNFD_480p,))
S15.start()
S16.start()
S15.join()
S16.join()
###Output
_____no_output_____
###Markdown
360p: (640x360)
###Code
S17 = Thread(target = resize640p, args = (SFC43, SFC169, SFD_640p,))
S18 = Thread(target = resize640p, args = (SNFC43, SNFC169, SNFD_640p,))
S17.start()
S18.start()
S17.join()
S18.join()
###Output
_____no_output_____
###Markdown
480p: (640x480)
###Code
S19 = Thread(target = resize640x480, args = (SFCMIX, SFD_640x480,))
S20 = Thread(target = resize640x480, args = (SNFCMIX, SNFD_640x480,))
S19.start()
S20.start()
S19.join()
S20.join()
###Output
_____no_output_____
###Markdown
Balanced Dataset Resizing 144p: (240x144)
###Code
B1 = Thread(target=resize144p, args = (BFC169, BFD_144p,))
B2 = Thread(target=resize144p, args = (BNFC169, BNFD_144p,))
B1.start()
B2.start()
B1.join()
B2.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
B3 = Thread(target=resize224p, args = (BFC11, BFC43, BFD_224p,))
B4 = Thread(target=resize224p, args = (BNFC11, BNFC43, BNFD_224p,))
B3.start()
B4.start()
B3.join()
B4.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
B5 = Thread(target=resize224x224, args = (BFCALL, BFD_224x224,))
B6 = Thread(target=resize224x224, args = (BNFCALL, BNFD_224x224,))
B5.start()
B6.start()
B5.join()
B6.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
B7 = Thread(target = resize224x224x224, args = (BFCMIX, BFD_224x224x224,))
B8 = Thread(target = resize224x224x224, args = (BNFCMIX, BNFD_224x224x224,))
B7.start()
B8.start()
B7.join()
B8.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
B9 = Thread(target = resize224x224x224x224, args = (BFC43, BFC169, BFD_224x224x224x224,))
B10 = Thread(target = resize224x224x224x224, args = (BNFC43, BNFC169, BNFD_224x224x224x224,))
B9.start()
B10.start()
B9.join()
B10.join()
###Output
_____no_output_____
###Markdown
240p: (360x240)
###Code
B11 = Thread(target = resize240p, args = (BFC43, BFC169, BFD_240p,))
B12 = Thread(target = resize240p, args = (BNFC43, BNFC169, BNFD_240p,))
B11.start()
B12.start()
B11.join()
B12.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
B13 = Thread(target = resize360p, args = (BFCALL, BFD_360p,))
B14 = Thread(target = resize360p, args = (BNFCALL, BNFD_360p,))
B13.start()
B14.start()
B13.join()
B14.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
B15 = Thread(target = resize480p, args = (BFC43, BFD_480p,))
B16 = Thread(target = resize480p, args = (BNFC43, BNFD_480p,))
B15.start()
B16.start()
B15.join()
B16.join()
###Output
_____no_output_____
###Markdown
360p: (640x360)
###Code
B17 = Thread(target = resize640p, args = (BFC43, BFC169, BFD_640p,))
B18 = Thread(target = resize640p, args = (BNFC43, BNFC169, BNFD_640p,))
B17.start()
B18.start()
B17.join()
B18.join()
###Output
_____no_output_____
###Markdown
480p: (640x480)
###Code
B19 = Thread(target = resize640x480, args = (BFCMIX, BFD_640x480,))
B20 = Thread(target = resize640x480, args = (BNFCMIX, BNFD_640x480,))
B19.start()
B20.start()
B19.join()
B20.join()
###Output
_____no_output_____
###Markdown
Large Dataset Resizing 144p: (240x144)
###Code
L1 = Thread(target = resize144p, args = (LFC169, LFD_144p,))
L2 = Thread(target = resize144p, args = (LNFC169, LNFD_144p,))
L1.start()
L2.start()
L1.join()
L2.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
L3 = Thread(target = resize224p, args = (LFC11, LFC43, LFD_224p,))
L4 = Thread(target = resize224p, args = (LNFC11, LNFC43, LNFD_224p,))
L3.start()
L4.start()
L3.join()
L4.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
L5 = Thread(target = resize224x224, args = (LFCALL, LFD_224x224,))
L6 = Thread(target = resize224x224, args = (LNFCALL, LNFD_224x224,))
L5.start()
L6.start()
L5.join()
L6.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
L7 = Thread(target = resize224x224x224, args = (LFCMIX, LFD_224x224x224,))
L8 = Thread(target = resize224x224x224, args = (LNFCMIX, LNFD_224x224x224,))
L7.start()
L8.start()
L7.join()
L8.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
L9 = Thread(target = resize224x224x224x224, args = (LFC43, LFC169, LFD_224x224x224x224,))
L10 = Thread(target = resize224x224x224x224, args = (LNFC43, LNFC169, LNFD_224x224x224x224,))
L9.start()
L10.start()
L9.join()
L10.join()
###Output
_____no_output_____
###Markdown
240p: (360x240)
###Code
L11 = Thread(target = resize240p, args = (LFC43, LFC169, LFD_240p,))
L12 = Thread(target = resize240p, args = (LNFC43, LNFC169, LNFD_240p,))
L11.start()
L12.start()
L11.join()
L12.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
L13 = Thread(target = resize360p, args = (LFCALL, LFD_360p,))
L14 = Thread(target = resize360p, args = (LNFCALL, LNFD_360p,))
L13.start()
L14.start()
L13.join()
L14.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
L15 = Thread(target = resize480p, args = (LFC43, LFD_480p,))
L16 = Thread(target = resize480p, args = (LNFC43, LNFD_480p,))
L15.start()
L16.start()
L15.join()
L16.join()
###Output
_____no_output_____
###Markdown
360p: (640x360)
###Code
L17 = Thread(target = resize640p, args = (LFC43, LFC169, LFD_640p,))
L18 = Thread(target = resize640p, args = (LNFC43, LNFC169, LNFD_640p,))
L17.start()
L18.start()
L17.join()
L18.join()
###Output
_____no_output_____
###Markdown
480p: (640x480)
###Code
L19 = Thread(target = resize640x480, args = (LFCMIX, LFD_640x480,))
L20 = Thread(target = resize640x480, args = (LNFCMIX, LNFD_640x480,))
L19.start()
L20.start()
L19.join()
L20.join()
###Output
_____no_output_____
###Markdown
Full Dataset Resizing 144p: (240x144)
###Code
F1 = Thread(target = resize144p, args = (FFC169, FFD_144p,))
F2 = Thread(target = resize144p, args = (FNFC169, FNFD_144p,))
F1.start()
F2.start()
F1.join()
F2.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
F3 = Thread(target = resize224p, args = (FFC11, FFC43, FFD_224p,))
F4 = Thread(target = resize224p, args = (FNFC11, FNFC43, FNFD_224p,))
F3.start()
F4.start()
F3.join()
F4.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
F5 = Thread(target = resize224x224, args = (FFCALL, FFD_224x224,))
F6 = Thread(target = resize224x224, args = (FNFCALL, FNFD_224x224,))
F5.start()
F6.start()
F5.join()
F6.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
F7 = Thread(target = resize224x224x224, args = (FFCMIX, FFD_224x224x224,))
F8 = Thread(target = resize224x224x224, args = (FNFCMIX, FNFD_224x224x224,))
F7.start()
F8.start()
F7.join()
F8.join()
###Output
_____no_output_____
###Markdown
224p: (224x224)
###Code
F9 = Thread(target = resize224x224x224x224, args = (FFC43, FFC169, FFD_224x224x224x224,))
F10 = Thread(target = resize224x224x224x224, args = (FNFC43, FNFC169, FNFD_224x224x224x224,))
F9.start()
F10.start()
F9.join()
F10.join()
###Output
_____no_output_____
###Markdown
240p: (360x240)
###Code
F11 = Thread(target = resize240p, args = (FFC43, FFC169, FFD_240p,))
F12 = Thread(target = resize240p, args = (FNFC43, FNFC169, FNFD_240p,))
F11.start()
F12.start()
F11.join()
F12.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
F13 = Thread(target = resize360p, args = (FFCALL, FFD_360p,))
F14 = Thread(target = resize360p, args = (FNFCALL, FNFD_360p,))
F13.start()
F14.start()
F13.join()
F14.join()
###Output
_____no_output_____
###Markdown
360p: (480x360)
###Code
F15 = Thread(target = resize480p, args = (FFC43, FFD_480p,))
F16 = Thread(target = resize480p, args = (FNFC43, FNFD_480p,))
F15.start()
F16.start()
F15.join()
F16.join()
###Output
_____no_output_____
###Markdown
360p: (640x360)
###Code
F17 = Thread(target = resize640p, args = (FFC43, FFC169, FFD_640p,))
F18 = Thread(target = resize640p, args = (FNFC43, FNFC169, FNFD_640p,))
F17.start()
F18.start()
F17.join()
F18.join()
###Output
_____no_output_____
###Markdown
480p: (640x480)
###Code
F19 = Thread(target = resize640x480, args = (FFCMIX, FFD_640x480,))
F20 = Thread(target = resize640x480, args = (FNFCMIX, FNFD_640x480,))
F19.start()
F20.start()
F19.join()
F20.join()
L5 = Thread(target = resize224x224, args = (LFCALL, LFD_224x224,))
L6 = Thread(target = resize224x224, args = (LNFCALL, LNFD_224x224,))
F5 = Thread(target = resize224x224, args = (FFCALL, FFD_224x224,))
F6 = Thread(target = resize224x224, args = (FNFCALL, FNFD_224x224,))
F7 = Thread(target = resize224x224x224, args = (FFCMIX, FFD_224x224x224,))
F8 = Thread(target = resize224x224x224, args = (FNFCMIX, FNFD_224x224x224,))
F7.start()
F8.start()
L5.start()
L6.start()
F5.start()
F6.start()
F5.join()
F6.join()
F7.join()
F8.join()
L5.join()
L6.join()
###Output
_____no_output_____ |
nmt/Basque-English_eus-eng.ipynb | ###Markdown
Neural Machine Translation with Attention This notebook trains a sequence-to-sequence (seq2seq) model that translates Basque to English. This is an advanced example that assumes some knowledge of sequence-to-sequence models. After training the model in this notebook, you will be able to input a Basque sentence, such as *"Golfak liluratu egiten nau."*, and get back its English translation, *"I love golf."* The translation quality is reasonable for a simple example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence have the model's attention while translating. Note: this example takes approximately 10 minutes to run on a single P100 GPU.
###Code
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
###Output
_____no_output_____
###Markdown
Download and prepare the dataset We will use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format: ```May I borrow this book? ¿Puedo tomar prestado este libro?``` A variety of languages are available in this dataset; we will use the English-Basque dataset. For convenience, a copy of this dataset is hosted on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we will take to prepare the data: 1. Add a *start* and *end* token to each sentence. 2. Clean the sentences by removing special characters. 3. Create a word index and a reverse word index (dictionaries mapping from word → id and id → word). 4. Pad each sentence to a maximum length.
###Code
'''
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
'''
path_to_file = "./lan/eus.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creates a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# reference: https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with a space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# adding a start and an end token to the sentence
# so that the model knows when to start and stop predicting
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, BASQUE]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def max_length(tensor):
return max(len(t) for t in tensor)
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned (input, output) pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
###Output
_____no_output_____
###Markdown
Limit the size of the dataset to experiment faster (optional) Training on the complete dataset of more than 100,000 sentences takes a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
###Code
# Try experimenting with the size of the dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate the max_length of the target tensors
max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)
# Create training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t!=0:
print ("%d ----> %s" % (t, lang.index_word[t]))
print ("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print ()
print ("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
###Output
_____no_output_____
###Markdown
Create a tf.data dataset
###Code
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
###Output
_____no_output_____
###Markdown
Write the encoder and decoder model Implement an encoder-decoder model with attention. For background on this kind of model, you can read TensorFlow's [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt); this example uses a more recent set of APIs and implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from that tutorial. The attention mechanism assigns a weight to each input word, and the decoder then uses those weights to predict the next word in the sentence. The figure and formulas referenced here are an example of an attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5). The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*. Here are the equations that are implemented. The encoder in this tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf). Let's decide on notation before writing the simplified form: * FC = fully connected (dense) layer * EO = encoder output * H = hidden state * X = input to the decoder. And the pseudo-code: * `score = FC(tanh(FC(EO) + FC(H)))` * `attention weights = softmax(score, axis = 1)`. Softmax is applied on the last axis by default, but here we want to apply it on the *first axis*, since the shape of score is *(batch_size, max_length, hidden_size)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis. * `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis 1. * `embedding output` = the input to the decoder X is passed through an embedding layer. * `merged vector = concat(embedding output, context vector)` * This merged vector is then given to the GRU. The shapes of all the vectors at each step have been specified in the comments in the code:
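For reference, the pseudo-code above corresponds to the Bahdanau (additive) attention equations, which can be sketched as follows, writing the encoder outputs (EO) as $\bar{h}_s$, the decoder hidden state (H) as $h_t$, and the three dense layers (FC) as $W_1$, $W_2$ and $v_a$:
$$\mathrm{score}(h_t, \bar{h}_s) = v_a^\top \tanh\big(W_1 \bar{h}_s + W_2 h_t\big)$$
$$\alpha_{ts} = \frac{\exp\big(\mathrm{score}(h_t, \bar{h}_s)\big)}{\sum_{s'} \exp\big(\mathrm{score}(h_t, \bar{h}_{s'})\big)} \qquad c_t = \sum_s \alpha_{ts}\, \bar{h}_s$$
where $\alpha_{ts}$ are the attention weights and $c_t$ is the context vector that gets concatenated with the decoder's embedding output.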
###Code
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query (hidden state) shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
# this is done to perform addition in order to calculate the score
hidden_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 on the last axis because we are applying the score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(hidden_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
###Output
_____no_output_____
###Markdown
Define the optimizer and the loss function
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
###Output
_____no_output_____
###Markdown
Checkpoints (object-based saving)
###Code
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
###Output
_____no_output_____
###Markdown
Training 1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*. 2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder. 3. The decoder returns the *predictions* and the *decoder hidden state*. 4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss. 5. Use *teacher forcing* to decide the next input to the decoder. 6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder. 7. The final step is to calculate the gradients and apply them to the optimizer for backpropagation.
###Code
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target word as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpointing) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
###Output
_____no_output_____
###Markdown
Translate * The evaluate function is similar to the training loop, except that we don't use *teacher forcing* here. The input to the decoder at each time step is its previous prediction along with the hidden state and the encoder output. * Stop predicting when the model predicts the *end token*. * Store the *attention weights for every time step*. Note: the encoder output is calculated only once for one input.
###Code
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
###Output
_____no_output_____
###Markdown
Restore the latest checkpoint and test
###Code
# restore the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# wrong translation
translate(u'trata de averiguarlo.')
###Output
_____no_output_____ |
S06Pandas/L07GroupBy.ipynb | ###Markdown
PANDAS - GROUPBY
###Code
import numpy as np
import pandas as pd
data = {'Company':['GOOGL','GOOGL','MSFT','MSFT','FB','FB'],
'Person':['Sam','Charlie','Amy','Vanessa','Carl','Sarah'],
'Sales':[200,120,340,124,243,350]}
df=pd.DataFrame(data)
df
bycomp = df.groupby('Company') # pass the column name and it returns groupby object
bycomp
# call aggregate functions on groupby object
bycomp.mean() # it will automatically ignore non-numeric columns like 'Person'
bycomp.sum()
bycomp.std()
bycomp.min()
bycomp.max()
bycomp.std()
bycomp.sum().loc['FB']
df.groupby('Company').count() # common way to use groupby
df.groupby('Company').describe() # describe returns a DataFrame of descriptive statistics
df.groupby('Company').describe().transpose() # switch rows and columns
df.groupby('Company').describe().transpose()['FB'] # select whichever company you are interested in
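# As a further option, .agg() applies several aggregations at once to a groupby object,
# here the mean, sum and std of Sales per company (illustrative extra example).
df.groupby('Company')['Sales'].agg(['mean', 'sum', 'std'])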
###Output
_____no_output_____ |
tests/notebooks/lazy_pipeline.ipynb | ###Markdown
Test notebook lazy pipeline
###Code
# Installed packages
import pandas as pd
# Testing
from IPython.utils.capture import capture_output
# Our package
from pandas_profiling import ProfileReport
from pandas_profiling.utils.cache import cache_file
# Read the Titanic Dataset
file_name = cache_file(
"titanic.csv",
"https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv",
)
df = pd.read_csv(file_name)
# Generate the Profiling Report (with progress bar)
with capture_output() as out:
profile = ProfileReport(df, title="Titanic Dataset", progress_bar=True, lazy=False)
assert all(
any(v in s.data["text/plain"] for v in ["%|", "FloatProgress"]) for s in out.outputs
)
assert len(out.outputs) == 2
# Generate the Profiling Report (without progress bar)
with capture_output() as out:
profile = df.profile_report(
title="Titanic Dataset",
html={"style": {"full_width": True}},
progress_bar=True,
lazy=True,
)
assert len(out.outputs) == 0
with capture_output() as out:
_ = profile.to_html()
assert all(
any(v in s.data["text/plain"] for v in ["%|", "FloatProgress"]) for s in out.outputs
)
assert len(out.outputs) == 3
with capture_output() as out:
_ = profile.to_file("/tmp/tmpfile.html")
assert "Export report to file" in out.outputs[0].data["text/plain"]
assert len(out.outputs) == 1
# Test caching of the iterative building process
with capture_output() as out:
profile = ProfileReport(df, title="Titanic Dataset", progress_bar=True, lazy=True)
assert len(out.outputs) == 0
with capture_output() as out:
profile.description_set
assert len(out.outputs) == 1
with capture_output() as out:
profile.report
assert len(out.outputs) == 1
with capture_output() as out:
profile.html
assert len(out.outputs) == 1
with capture_output() as out:
profile.config.html.style.theme = "united"
profile.invalidate_cache("rendering")
profile.to_file("/tmp/cache1.html")
assert len(out.outputs) == 2
with capture_output() as out:
profile.config.pool_size = 1
profile.html
assert len(out.outputs) == 0
with capture_output() as out:
profile.config.pool_size = 0
profile.config.samples.head = 5
profile.config.samples.tail = 15
profile.invalidate_cache()
profile.to_file("/tmp/cache2.html")
assert len(out.outputs) == 4
###Output
_____no_output_____ |
mazes/maze-traversal.ipynb | ###Markdown
Maze Traversal- - -This notebook generates a maze and then populates it with three autonomous agents. Each agent leverages a unique strategy for trying to escape the maze. The Agents: The Clueless Walker: The clueless walker simply walks in a straight line until encountering a wall. Once a wall is hit, the agent tries to turn right. If it can't, then it tries to turn left. If it cannot, then it turns around (a conceptual sketch of this strategy is given below). The Wall Follower: The wall follower leverages the [wall following](https://en.wikipedia.org/wiki/Maze-solving_algorithm#Wall_follower) algorithm. The Path Finder: The path finder uses the [A* algorithm](https://en.wikipedia.org/wiki/A*_search_algorithm) to chart a path to the exit. It does not consider the entrance. **Resources**- [Smoothstep](https://smoothstep.io/)- Breadth First Search- Dijkstra's algorithm/Fast Marching Method for solving the Eikonal equation?- BFS is generalized as Dijkstra, which is generalized as Fast Marching, then as Ordered Upwind method, then as Anisotropic Fast Marching- Bellman-Ford Algorithm- Fast Marching Algorithm (FMM), Eikonal equation- [Maze Art](https://troika.uk.com/work/troika-labyrinth/)- [Lee Algorithm](https://en.wikipedia.org/wiki/Lee_algorithm)- [Procedural Content Generation: Mazes](http://pcg.wikidot.com/pcg-algorithm:maze)- [Wikipedia Maze Generation Algorithms](https://en.wikipedia.org/wiki/Maze_generation_algorithm)- [Smart Move: Intelligent Path Finding](https://www.gamedeveloper.com/programming/smart-move-intelligent-path-finding)- [Toward more Realistic Path Finding](https://www.gamedeveloper.com/programming/toward-more-realistic-pathfinding)- [AI Wisdom A* Articles](http://www.aiwisdom.com/ai_astar.html)- [Maze Solving Algorithm](https://en.wikipedia.org/wiki/Maze-solving_algorithm)- [Maze Routing Algorithm](https://en.wikipedia.org/wiki/Maze-solving_algorithm#Maze-routing_algorithm)- [Shortest Path Algorithms](https://en.wikipedia.org/wiki/Maze-solving_algorithm#Shortest_path_algorithm) Game Programming Gems 1 (PDF)- Simple Implementation: Page 248- Optimized Implementation: Page 279 - Fuzzy Logic for Video Games: Page 313- A Neural Net Primer: Page 324
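The sketch below is only a conceptual outline of the clueless strategy described above; it is independent of the actual `clueless_walk` implementation imported from `generation.walkers.clueless` further down, and the `can_move(pos, direction)` wall test is a hypothetical helper, not part of this project's API.
```python
# Conceptual sketch of the clueless strategy (not the project's actual implementation).
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # N, E, S, W, ordered clockwise

def clueless_step(pos, heading, can_move):
    """Keep walking straight; at a wall try right, then left, then turn around."""
    for turn in (0, 1, -1, 2):  # straight, right, left, reverse
        d = (heading + turn) % 4
        if can_move(pos, d):  # hypothetical predicate: no wall in direction d
            dx, dy = DIRS[d]
            return (pos[0] + dx, pos[1] + dy), d
    return pos, heading  # completely boxed in: stay put
```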
###Code
# Load python code.
%load_ext autoreload
%autoreload 2
# All imports
from __future__ import annotations
from IPython.display import display
from ipycanvas import Canvas, hold_canvas, MultiCanvas
import time
from typing import List
from generation.structures import Corner, Point
from generation.maze import Maze
from generation.generators.random_backtracer import generate_maze_walls
from generation.npc import Agent
from generation.renderers.units import AGENT_SIZE, ROOM_SIZE_WIDTH, ROOM_SIZE_HEIGHT
from generation.renderers.wall_drawer import draw_maze
from generation.renderers.agents_renderer import draw_agents
from generation.direction import Direction
from generation.walkers.clueless import clueless_walk
from generation.walkers.wall_follower import wall_follower_walk
from generation.walkers.a_star import find_path as find_path_with_a_star, build_path_walker
# Rendering Functions
def draw_path(path: List[Point], canvas: Canvas, line_color: str) -> None:
"""Renders a list of points as a solid line"""
canvas_points = []
# Create an array of tuples for canvas to render in a single draw call.
print(f'Path has {len(path)} steps')
for location in path:
# Find the upper left corner for the room.
upper_left_corner = Corner(location.x * ROOM_SIZE_WIDTH, location.y * ROOM_SIZE_HEIGHT)
# Find the midpoint of the room.
horizontal_offset = ROOM_SIZE_WIDTH/2.0
vertical_offset = ROOM_SIZE_HEIGHT/2.0
midpoint = Point(upper_left_corner.x + horizontal_offset, upper_left_corner.y + vertical_offset)
canvas_points.append((midpoint.x, midpoint.y))
canvas.stroke_style = line_color
canvas.stroke_lines(canvas_points)
def draw_legend(canvas: Canvas, frame) -> None:
"""Renders a legend for the maze."""
LINE_HEIGHT = 14
FIRST_LINE = 20
HORIZONTAL_OFFSET = 450
with hold_canvas(canvas):
canvas.text_baseline = "top"
canvas.clear()
# Draw the frame count
canvas.fill_style = 'black'
canvas.fill_text(f'Frame: {frame}', HORIZONTAL_OFFSET, FIRST_LINE)
# Draw Wall Walker Legend
canvas.fill_style = 'blue'
canvas.fill_rect(HORIZONTAL_OFFSET,FIRST_LINE+LINE_HEIGHT, AGENT_SIZE)
canvas.fill_style = 'black'
canvas.fill_text(f'- Wall Follower', HORIZONTAL_OFFSET + AGENT_SIZE + 5, FIRST_LINE+LINE_HEIGHT)
# Draw Clueless Walker Legend
canvas.fill_style = 'green'
canvas.fill_rect(HORIZONTAL_OFFSET,FIRST_LINE+LINE_HEIGHT*2, AGENT_SIZE)
canvas.fill_style = 'black'
canvas.fill_text(f'- Clueless Walker', HORIZONTAL_OFFSET + AGENT_SIZE + 5, FIRST_LINE+LINE_HEIGHT*2)
# Draw A* Walker Legend
canvas.fill_style = 'yellow'
canvas.fill_rect(HORIZONTAL_OFFSET,FIRST_LINE+LINE_HEIGHT*3, AGENT_SIZE)
canvas.fill_style = 'black'
canvas.fill_text(f'- Path Finder', HORIZONTAL_OFFSET + AGENT_SIZE + 5, FIRST_LINE+LINE_HEIGHT*3)
# The Main Cell
# 1000/125 = 8 FPS
SLEEP_TIME_SEC:float = 0.125
# Generate a maze.
maze: Maze = Maze(20, 20)
generate_maze_walls(maze)
# Create 4 layers of canvases. 0: Maze, 1: A* Path, 2: Agents, 3: HUD
mc = MultiCanvas(n_canvases=4, width=800, height=400)
display(mc)
# Create NPCs with different strategies
common_starting_point = Point(int(maze.width/2), int(maze.height/2))
wall_follower = Agent('blue')
wall_follower.maze_strategy(wall_follower_walk)
wall_follower.move_to(common_starting_point)
wall_follower.face(Direction.SOUTH)
random_walker = Agent('green')
random_walker.maze_strategy(clueless_walk)
random_walker.move_to(common_starting_point)
random_walker.face(Direction.SOUTH)
# Calculate a path using A*
path_finder = Agent('yellow')
path_finder.move_to(common_starting_point)
path_finder.face(Direction.SOUTH)
found_path, escape_path = find_path_with_a_star(path_finder, maze, maze.exit_cell.location)
if not found_path:
raise Exception('Failed to find a path.')
path_walker = build_path_walker(escape_path)
path_finder.maze_strategy(path_walker)
agents = [wall_follower, random_walker, path_finder]
# Initial Render
time.sleep(2)
draw_maze(maze, mc[0])
draw_path(escape_path, mc[1], 'red')
draw_agents(agents, mc[2])
time.sleep(2)
for frame in range(200):
for agent in agents:
agent.explore(maze)
draw_agents(agents, mc[2])
draw_legend(mc[3],frame)
time.sleep(SLEEP_TIME_SEC)
###Output
Exit cell: Point(x=17, y=19)
|
aata/domains-sage.ipynb | ###Markdown
**Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard. $\newcommand{\identity}{\mathrm{id}}\newcommand{\notdivide}{\nmid}\newcommand{\notsubset}{\not\subset}\newcommand{\lcm}{\operatorname{lcm}}\newcommand{\gf}{\operatorname{GF}}\newcommand{\inn}{\operatorname{Inn}}\newcommand{\aut}{\operatorname{Aut}}\newcommand{\Hom}{\operatorname{Hom}}\newcommand{\cis}{\operatorname{cis}}\newcommand{\chr}{\operatorname{char}}\newcommand{\Null}{\operatorname{Null}}\newcommand{\lt}{<}\newcommand{\gt}{>}\newcommand{\amp}{&}$ Section 18.5 Sage We have already seen some integral domains and unique factorizations in the previous two chapters. In addition to what we have already seen, Sage has support for some of the topics from this section, but the coverage is limited. Some functions will work for some rings and not others, while some functions are not yet part of Sage. So we will give some examples, but this is far from comprehensive. Subsection Field of Fractions Sage is frequently able to construct a field of fractions, or identify a certain field as the field of fractions. For example, the ring of integers and the field of rational numbers are both implemented in Sage, and the integers “know” that the rationals are their field of fractions.
###Code
Q = ZZ.fraction_field(); Q
Q == QQ
###Output
_____no_output_____
###Markdown
In other cases Sage will construct a fraction field, in the spirit of Lemma 18.3. So it is then possible to do basic calculations in the constructed field.
###Code
R.<x> = ZZ[]
P = R.fraction_field();P
f = P((x^2+3)/(7*x+4))
g = P((4*x^2)/(3*x^2-5*x+4))
h = P((-2*x^3+4*x^2+3)/(x^2+1))
((f+g)/h).numerator()
((f+g)/h).denominator()
###Output
_____no_output_____
###Markdown
Subsection Prime Subfields Corollary 18.7 says every field of characteristic $p$ has a subfield isomorphic to ${\mathbb Z}_p\text{.}$ For a finite field, the exact nature of this subfield is not a surprise, but Sage will allow us to extract it easily.
###Code
F.<c> = FiniteField(3^5)
F.characteristic()
G = F.prime_subfield(); G
G.list()
###Output
_____no_output_____
###Markdown
More generally, the fields mentioned in the conclusions of Corollary 18.6 and Corollary 18.7 are known as the “prime subfield” of the ring containing them. Here is an example of the characteristic zero case.
###Code
K.<y>=QuadraticField(-7); K
K.prime_subfield()
###Output
_____no_output_____
###Markdown
In a rough sense, every characteristic zero field contains a copy of the rational numbers (the fraction field of the integers), which can explain Sage's extensive support for rings and fields that extend the integers and the rationals. SubsectionIntegral Domains Sage can determine if some rings are integral domains and we can test products in them. However, notions of units, irreducibles or prime elements are not generally supported (outside of what we have seen for polynomials in the previous chapter). Worse, the construction below creates a ring within a larger field and so some functions (such as .is_unit()) pass through and give misleading results. This is because the construction below creates a ring known as an “order in a number field.”
###Code
K.<x> = ZZ[sqrt(-3)]; K
K.is_integral_domain()
K.basis()
x
(1+x)*(1-x) == 2*2
###Output
_____no_output_____
###Markdown
The following is a bit misleading, since $4\text{,}$ as an element of ${\mathbb Z}[\sqrt{3}i]$ does not have a multiplicative inverse, though seemingly we can compute one.
###Code
four = K(4)
four.is_unit()
four^-1
###Output
_____no_output_____
###Markdown
Subsection Principal Ideals. When a ring is a principal ideal domain, such as the integers, or polynomials over a field, Sage works well. Beyond that, support begins to weaken.
###Code
T.<x>=ZZ[]
T.is_integral_domain()
J = T.ideal(5, x); J
Q = T.quotient(J); Q
J.is_principal()
Q.is_field()
###Output
_____no_output_____ |
notebooks/timing.ipynb | ###Markdown
Testing Order of Growth [Click here to run this chapter on Colab](https://colab.research.google.com/github/AllenDowney/DSIRP/blob/main/notebooks/timing.ipynb) Analysis of algorithms makes it possible to predict how run time will grow as the size of a problem increases. But this kind of analysis ignores leading coefficients and non-leading terms. So the behavior for small and medium problems might not be what the analysis predicts. To see how run time really behaves for a range of problem sizes, we can run the algorithm and measure. To do the measurement, we'll use the [times](https://docs.python.org/3/library/os.html#os.times) function from the `os` module.
###Code
import os
def etime():
"""Measures user and system time this process has used.
Returns the sum of user and system time."""
user, sys, chuser, chsys, real = os.times()
return user+sys
start = etime()
t = [x**2 for x in range(10000)]
end = etime()
end - start
###Output
_____no_output_____
###Markdown
Exercise: Use `etime` to measure the computation time used by `sleep`.
###Code
from time import sleep
sleep(1)
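# A possible solution to the exercise above: sleep uses almost no CPU time, so the
# user+sys total reported by etime stays close to zero even though about a second
# of wall-clock time passes.
start = etime()
sleep(1)
print(etime() - start)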
def time_func(func, n):
"""Run a function and return the elapsed time.
func: function
n: problem size, passed as an argument to func
returns: user+sys time in seconds
"""
start = etime()
func(n)
end = etime()
elapsed = end - start
return elapsed
###Output
_____no_output_____
###Markdown
One of the things that makes timing tricky is that many operations are too fast to measure accurately. `%timeit` handles this by running enough times to get a precise estimate, even for things that run very fast. We'll handle it by running over a wide range of problem sizes, hoping to find sizes that run long enough to measure, but not more than a few seconds. The following function takes a size, `n`, creates an empty list, and calls `list.append` `n` times.
###Code
def list_append(n):
t = []
[t.append(x) for x in range(n)]
###Output
_____no_output_____
###Markdown
`timeit` can time this function accurately.
###Code
%timeit list_append(10000)
###Output
_____no_output_____
###Markdown
But our `time_func` is not that smart.
###Code
time_func(list_append, 10000)
###Output
_____no_output_____
###Markdown
Exercise: Increase the number of iterations until the run time is measurable.
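###Code
# A possible solution sketch (the problem size here is a hypothetical choice): keep
# increasing n until time_func reports a clearly nonzero elapsed time.
time_func(list_append, 10_000_000)
###Output
_____no_output_____
###Markdown
List append: the following function gradually increases `n` and records the total time.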
###Code
def run_timing_test(func, max_time=1):
"""Tests the given function with a range of values for n.
func: function object
returns: list of ns and a list of run times.
"""
ns = []
ts = []
for i in range(10, 28):
n = 2**i
t = time_func(func, n)
print(n, t)
if t > 0:
ns.append(n)
ts.append(t)
if t > max_time:
break
return ns, ts
ns, ts = run_timing_test(list_append)
import matplotlib.pyplot as plt
plt.plot(ns, ts, 'o-')
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)');
###Output
_____no_output_____
###Markdown
This one looks pretty linear, but it won't always be so clear. It will help to plot a straight line that goes through the last data point.
###Code
def fit(ns, ts, exp=1.0, index=-1):
"""Fits a curve with the given exponent.
ns: sequence of problem sizes
ts: sequence of times
exp: exponent of the fitted curve
index: index of the element the fitted line should go through
returns: sequence of fitted times
"""
# Use the element with the given index as a reference point,
# and scale all other points accordingly.
nref = ns[index]
tref = ts[index]
tfit = []
for n in ns:
ratio = n / nref
t = ratio**exp * tref
tfit.append(t)
return tfit
ts_fit = fit(ns, ts)
ts_fit
###Output
_____no_output_____
###Markdown
The following function plots the actual results and the fitted line.
###Code
def plot_timing_test(ns, ts, label='', color='C0', exp=1.0, scale='log'):
"""Plots data and a fitted curve.
ns: sequence of n (problem size)
ts: sequence of t (run time)
label: string label for the data curve
color: string color for the data curve
exp: exponent (slope) for the fitted curve
scale: string passed to xscale and yscale
"""
ts_fit = fit(ns, ts, exp)
fit_label = 'exp = %d' % exp
plt.plot(ns, ts_fit, label=fit_label, color='0.7', linestyle='dashed')
plt.plot(ns, ts, 'o-', label=label, color=color, alpha=0.7)
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)')
plt.xscale(scale)
plt.yscale(scale)
plt.legend()
plot_timing_test(ns, ts, scale='linear')
plt.title('list append');
###Output
_____no_output_____
###Markdown
From these results, what can we conclude about the order of growth of `list.append`? Before we go on, let's also look at the results on a log-log scale.
###Code
plot_timing_test(ns, ts, scale='log')
plt.title('list append');
###Output
_____no_output_____
###Markdown
Why might we prefer this scale? List pop: now let's do the same for `list.pop` (which pops from the end of the list by default). Notice that we have to make the list before we pop things from it, so we will have to think about how to interpret the results.
###Code
def list_pop(n):
t = []
[t.append(x) for x in range(n)]
[t.pop() for _ in range(n)]
ns, ts = run_timing_test(list_pop)
plot_timing_test(ns, ts, scale='log')
plt.title('list pop');
###Output
_____no_output_____
###Markdown
What can we conclude? What about `pop(0)`, which pops from the beginning of the list? Note: You might have to adjust `exp` to make the fitted line fit.
###Code
def list_pop0(n):
t = []
[t.append(x) for x in range(n)]
[t.pop(0) for _ in range(n)]
ns, ts = run_timing_test(list_pop0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list pop(0)');
###Output
_____no_output_____
###Markdown
Searching a list: `list.index` searches a list and returns the index of the first element that matches the target. What do we expect if we always search for the first element?
###Code
def list_index0(n):
t = []
[t.append(x) for x in range(n)]
[t.index(0) for _ in range(n)]
ns, ts = run_timing_test(list_index0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(0)');
###Output
_____no_output_____
###Markdown
What if we always search for the last element?
###Code
def list_index_n(n):
t = []
[t.append(x) for x in range(n)]
[t.index(n-1) for _ in range(n)]
ns, ts = run_timing_test(list_index_n)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(n-1)');
###Output
_____no_output_____
###Markdown
Dictionary add
###Code
def dict_add(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
ns, ts = run_timing_test(dict_add)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict add');
###Output
_____no_output_____
###Markdown
Dictionary lookup
###Code
def dict_lookup(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
[d[x] for x in range(n)]
ns, ts = run_timing_test(dict_lookup)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict lookup');
###Output
_____no_output_____
###Markdown
Testing Order of Growth *Data Structures and Information Retrieval in Python*. Copyright 2021 Allen Downey. License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) [Click here to run this chapter on Colab](https://colab.research.google.com/github/AllenDowney/DSIRP/blob/main/chapters/timing.ipynb) Read the [documentation of os.times](https://docs.python.org/3/library/os.html#os.times)
###Code
import os
def etime():
"""Measures user and system time this process has used.
Returns the sum of user and system time."""
user, sys, chuser, chsys, real = os.times()
return user+sys
start = etime()
t = [x**2 for x in range(10000)]
end = etime()
end - start
###Output
_____no_output_____
###Markdown
Exercise: Use `etime` to measure the computation time used by `sleep`.
###Code
from time import sleep
sleep(1)
# Solution goes here
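# One possible solution: sleep uses almost no CPU time, so etime reports a value
# close to zero even though about a second of wall-clock time passes.
start = etime()
sleep(1)
print(etime() - start)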
def time_func(func, n):
"""Run a function and return the elapsed time.
func: function
n: problem size, passed as an argument to func
returns: user+sys time in seconds
"""
start = etime()
func(n)
end = etime()
elapsed = end - start
return elapsed
###Output
_____no_output_____
###Markdown
One of the things that makes timing tricky is that many operations are too fast to measure accurately. `%timeit` handles this by running enough times to get a precise estimate, even for things that run very fast. We'll handle it by running over a wide range of problem sizes, hoping to find sizes that run long enough to measure, but not more than a few seconds. The following function takes a size, `n`, creates an empty list, and calls `list.append` `n` times.
###Code
def list_append(n):
t = []
[t.append(x) for x in range(n)]
###Output
_____no_output_____
###Markdown
`timeit` can time this function accurately.
###Code
%timeit list_append(10000)
###Output
_____no_output_____
###Markdown
But our `time_func` is not that smart.
###Code
time_func(list_append, 10000)
###Output
_____no_output_____
###Markdown
Exercise: Increase the number of iterations until the run time is measurable.
###Code
# Solution goes here
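# One possible solution (the problem size is a hypothetical choice): keep raising n
# until time_func reports a clearly nonzero elapsed time.
time_func(list_append, 10_000_000)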
###Output
_____no_output_____
###Markdown
List append: the following function gradually increases `n` and records the total time.
###Code
def run_timing_test(func, max_time=1):
"""Tests the given function with a range of values for n.
func: function object
returns: list of ns and a list of run times.
"""
ns = []
ts = []
for i in range(10, 28):
n = 2**i
t = time_func(func, n)
print(n, t)
if t > 0:
ns.append(n)
ts.append(t)
if t > max_time:
break
return ns, ts
ns, ts = run_timing_test(list_append)
import matplotlib.pyplot as plt
plt.plot(ns, ts, 'o-')
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)');
###Output
_____no_output_____
###Markdown
This one looks pretty linear, but it won't always be so clear. It will help to plot a straight line that goes through the last data point.
###Code
def fit(ns, ts, exp=1.0, index=-1):
"""Fits a curve with the given exponent.
ns: sequence of problem sizes
ts: sequence of times
exp: exponent of the fitted curve
index: index of the element the fitted line should go through
returns: sequence of fitted times
"""
# Use the element with the given index as a reference point,
# and scale all other points accordingly.
nref = ns[index]
tref = ts[index]
tfit = []
for n in ns:
ratio = n / nref
t = ratio**exp * tref
tfit.append(t)
return tfit
ts_fit = fit(ns, ts)
ts_fit
###Output
_____no_output_____
###Markdown
The following function plots the actual results and the fitted line.
###Code
def plot_timing_test(ns, ts, label='', color='C0', exp=1.0, scale='log'):
"""Plots data and a fitted curve.
ns: sequence of n (problem size)
ts: sequence of t (run time)
label: string label for the data curve
color: string color for the data curve
exp: exponent (slope) for the fitted curve
scale: string passed to xscale and yscale
"""
ts_fit = fit(ns, ts, exp)
fit_label = 'exp = %d' % exp
plt.plot(ns, ts_fit, label=fit_label, color='0.7', linestyle='dashed')
plt.plot(ns, ts, 'o-', label=label, color=color, alpha=0.7)
plt.xlabel('Problem size (n)')
plt.ylabel('Runtime (seconds)')
plt.xscale(scale)
plt.yscale(scale)
plt.legend()
plot_timing_test(ns, ts, scale='linear')
plt.title('list append');
###Output
_____no_output_____
###Markdown
From these results, what can we conclude about the order of growth of `list.append`? Before we go on, let's also look at the results on a log-log scale.
###Code
plot_timing_test(ns, ts, scale='log')
plt.title('list append');
###Output
_____no_output_____
###Markdown
Why might we prefer this scale? List pop: now let's do the same for `list.pop` (which pops from the end of the list by default). Notice that we have to make the list before we pop things from it, so we will have to think about how to interpret the results.
###Code
def list_pop(n):
t = []
[t.append(x) for x in range(n)]
[t.pop() for _ in range(n)]
ns, ts = run_timing_test(list_pop)
plot_timing_test(ns, ts, scale='log')
plt.title('list pop');
###Output
_____no_output_____
###Markdown
What can we conclude? What about `pop(0)`, which pops from the beginning of the list? Note: You might have to adjust `exp` to make the fitted line fit.
###Code
def list_pop0(n):
t = []
[t.append(x) for x in range(n)]
[t.pop(0) for _ in range(n)]
ns, ts = run_timing_test(list_pop0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list pop(0)');
###Output
_____no_output_____
###Markdown
Searching a list: `list.index` searches a list and returns the index of the first element that matches the target. What do we expect if we always search for the first element?
###Code
def list_index0(n):
t = []
[t.append(x) for x in range(n)]
[t.index(0) for _ in range(n)]
ns, ts = run_timing_test(list_index0)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(0)');
###Output
_____no_output_____
###Markdown
What if we always search for the last element?
###Code
def list_index_n(n):
t = []
[t.append(x) for x in range(n)]
[t.index(n-1) for _ in range(n)]
ns, ts = run_timing_test(list_index_n)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('list index(n-1)');
###Output
_____no_output_____
###Markdown
Dictionary add
###Code
def dict_add(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
ns, ts = run_timing_test(dict_add)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict add');
###Output
_____no_output_____
###Markdown
Dictionary lookup
###Code
def dict_lookup(n):
d = {}
[d.setdefault(x, x) for x in range(n)]
[d[x] for x in range(n)]
ns, ts = run_timing_test(dict_lookup)
plot_timing_test(ns, ts, scale='log', exp=1)
plt.title('dict lookup');
###Output
_____no_output_____ |
nCoV-counts-data.ipynb | ###Markdown
Data sources of 2019 nCoV. Acknowledgement: - API from https://lab.isaaclin.cn/nCoV/ - https://github.com/jianxu305/nCov2019_analysis/blob/master/src/demo.ipynb Retrieving the needed data in real time
###Code
def parse_time(data):
df = pd.DataFrame(data)
try:
if np.any(['Time' in name for name in df.columns]):
for time_name in df.columns[['Time' in name for name in df.columns]]:
df[time_name] = pd.to_datetime(df.loc[:, time_name], unit='ms')
except Exception as e:
print(e)
# add new column of the updating "Date" instead of concrete time
if 'updateTime' in df.columns:
df['updateDate'] = pd.Series([pd.to_datetime(item).date() for item in df['updateTime']])
elif 'pubDate' in df.columns:
df['pubTime'] = pd.to_datetime(df.loc[:, 'pubDate'], unit='ms')
df['pubDate'] = pd.Series([pd.to_datetime(item).date() for item in df['pubTime']])
return df
def query_counts_data(category='area', archival=False, province='all'):
'''
API for retrieving data from https://lab.isaaclin.cn/nCoV/.
Parameters:
Category (str): available options are 'overall', 'area'.
Check the above website for more.
archival (bool): whether retrieve archival time-series data.
Default is False, only retrieve today's data.
province (str): name of specific province. Use 'all' to get data from all provinces and countries.
Notice: full name is required ("湖北省", instead of "湖北").
Returns:
df (pandas.DataFrame): dataframe object.
'''
import requests
import pandas as pd
assert isinstance(category, str), 'Input "catecory" must be a string!'
url = 'https://lab.isaaclin.cn/nCoV/api/' + category
url += '?latest={}'.format(int(not archival))
if province != 'all':
url += '&province=' + province
req = requests.get(url)
if req.status_code != 200 or req.json()['success'] is False:
raise ValueError('The connection fails! Please check input arguments.')
return False
else:
results = req.json()['results']
df = parse_time(results)
return df
def aggregate_Daily(df):
frm_list = []
for key, frm in df.sort_values(['updateDate']).groupby(['provinceName', 'updateDate']):
frm_list.append(frm.sort_values(['updateTime'])[-1:])
return pd.concat(frm_list).sort_values(['updateTime', 'provinceName']).loc[::-1]
def parse_city(city_row):
return pd.DataFrame(city_row.values[0])
area = query_counts_data(category='area', archival=True, province='湖北省') # not very slow
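# The docstring above also lists an 'overall' category with nationwide totals; this
# call is only illustrative and assumes the API endpoint is reachable.
overall = query_counts_data(category='overall', archival=False)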
area[:3]
df = aggregate_Daily(area)[::-1]
df[::-1]
mask = df['updateDate'] == datetime.date(year=2020, month=2, day=5)
city = parse_city(df[mask]['cities'])
city
plt.rcParams['font.size'] = 12.0
fig, [ax1, ax2] = plt.subplots(2, 1, figsize=(13, 10))
df.plot(y=['confirmedCount'], x='updateDate', style='-*', ax=ax1, grid=True, logy=False, color='black', marker='o')
ax1.set_ylabel("Confirmed")
df.plot(y=['deadCount', 'curedCount'], x='updateDate', style='-*', grid=True, ax=ax2, sharex=True)
ax2.set_ylabel("Counts")
plt.subplots_adjust(hspace=0.0)
ax1.tick_params(direction='in')
ax2.tick_params(direction='in')
###Output
_____no_output_____
###Markdown
Using existing data
###Code
df = pd.read_csv('https://github.com/BlankerL/DXY-2019-nCoV-Data/raw/master/csv/DXYArea.csv')
df = parse_time(df)
def aggregate_daily_csv(df):
frm_list = []
for key, frm in df.sort_values(['updateDate']).groupby(['provinceName', 'cityName', 'updateDate']):
frm_list.append(frm.sort_values(['updateTime'])[-1:])
return pd.concat(frm_list).sort_values(['updateTime', 'provinceName', 'cityName']).loc[::-1]
jingmen = df[df['cityName'] == '荆门']
jingmen_daily = aggregate_daily_csv(jingmen)
jingmen_daily.plot(y='city_confirmedCount', x='updateDate', style='-*', figsize=(10, 6), title='Jingmen')
df = pd.read_csv('./data/DXYOverall.csv')
df.columns
df = df[['currentConfirmedCount', 'confirmedCount', 'updateTime']]
df.iloc[719]
###Output
_____no_output_____
###Markdown
News reports?
###Code
def query_news_data(category='news', num='all', province='all'):
'''
API for retrieving news data from https://lab.isaaclin.cn/nCoV/.
Parameters:
Category (str): available options are 'news'.
Check the above website for more.
num (int or str): number of news items to retrieve.
Default is 'all', which retrieves every available item.
province (str): name of specific province. Use 'all' to get data from all provinces and countries.
Notice: full name is required ("湖北省", instead of "湖北").
Returns:
df (pandas.DataFrame): dataframe object.
'''
import requests
import pandas as pd
assert isinstance(category, str), 'Input "catecory" must be a string!'
url = 'https://lab.isaaclin.cn/nCoV/api/' + category
url += '?num={}'.format(num)
if province != 'all':
url += '&province=' + province
req = requests.get(url)
if req.status_code != 200 or req.json()['success'] is False:
raise ValueError('The connection fails! Please check input arguments.')
return False
else:
results = req.json()['results']
df = parse_time(results)
return df
news = query_news_data(category='news', num='all')
news.groupby('provinceName').count().sort_values('title')[::-1]
# today's news
mask = (news['pubDate'] == datetime.date(year=2020, month=2, day=8))
news[mask]
###Output
_____no_output_____ |
notebooks/Coverage Analysis.ipynb | ###Markdown
Coverage Analysis
###Code
import sys
import os
sys.path.insert(0, os.path.abspath('../'))
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
import poliastro
import CtllDes
from CtllDes.core import ctll, satellite
###Output
WARNING: AstropyDeprecationWarning: The private astropy._erfa module has been made into its own package, pyerfa, which is a dependency of astropy and can be imported directly using "import erfa" [astropy._erfa]
###Markdown
Building test satellite
###Code
from poliastro.bodies import Earth
sat = satellite.Sat.from_vectors([8000,0,0]*u.km,
[0,5,2.5]*u.km/u.s,
attractor=Earth)
###Output
_____no_output_____
###Markdown
Add Coverage Instrument: Camera with symmetric FOV
###Code
from CtllDes.core.instrument import Instrument, Camera
cam = Camera(10,3)
sat.update_instruments(cam,f=True)
#check if Camera is a Coverage instrument, more on this later
sat.cov_instruments
###Output
_____no_output_____
###Markdown
Push Broom Instrument
###Code
from CtllDes.core.instrument import PushBroom
pixel_width = 7*1E-6*u.m
n_pixels = 12288
sensor_width = n_pixels*pixel_width
f_length = 0.42*u.m
broom = PushBroom(f_length, sensor_width)
sat.update_instruments(broom,f=True)
#check if PushBroom is a Coverage instrument, more on this later
sat.cov_instruments
###Output
_____no_output_____
###Markdown
Defining targets
###Code
#In order to do a coverage analysis you must have targets. The module targets is the one in charge of that.
from CtllDes.targets.targets import Targets, Target
from shapely.geometry import Point
#simple target
tgt = Target(0,0)
#multiple targets
tgts = Targets([Target(i,i) for i in range(0,180,10)],tag='linear targets')
#define targets from country, administration level 0.
tgts = Targets.from_country('Argentina')
figc = tgts.plot()
plt.title("Argentina, N=50")
plt.grid()
plt.xlabel("longitude [°]")
plt.ylabel("latitude [°]")
plt.show()
#define targets from state name, administration level 1
tgts = Targets.from_state('Río Negro', N=100)
figs = tgts.plot()
plt.title("Río Negro, N=100")
plt.xlabel("longitude [°]")
plt.ylabel("latitude [°]")
plt.grid()
plt.show()
#define single Target from city name
bs_as = Target.from_city('Buenos Aires',country='AR')
# less points for country targets
tgts = Targets.from_country('Peru', N=6)
###Output
_____no_output_____
###Markdown
Building Coverages: Coverages is the main container for coverage analysis; it consists of Coverage (singular) objects. These objects are defined by covs, an array with length = T*3600*24/dt containing ones or zeroes depending on whether the target is in sight or not, for the Targets described earlier in this notebook. T == time of propagation analysis, dt == time interval of integration. Merit figures: if you want more information on the merit figures calculated for each target, I recommend reading chapter 9 of O.C.D.M. by James R. Wertz.
###Code
from CtllDes.requests.coverage import Coverages
#Build Coverages from satellite and single target
covs = Coverages.from_sat(sat, tgt, 10, dt=10, J2=True, drag=False)
#transform coverages into dataframe
covs.to_df()
from CtllDes.requests.coverage import Coverages
import time
covs = Coverages.from_sat(sat, tgts, 10, dt=100, J2=True, drag=False)
dfcov = covs.to_df()
dfcov
lons,lats = sat.ssps(10, dt=5, J2=True, drag=False)
lons = lons*180/np.pi
lats = lats*180/np.pi
%matplotlib qt5
target_lons = [tgts.targets[i].lon + 180 for i in range(len(tgts.targets))]
target_lats = [tgts.targets[i].lat for i in range(len(tgts.targets))]
plt.figure(figsize=(10,10))
plt.scatter(lons,lats,c='red',s=1)
plt.ylim(-90,90)
plt.scatter(target_lons,target_lats,c='k', s=5)
accum = dfcov['accumulated'].values
accum = np.array([float(i) for i in accum])
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
ax.plot_trisurf(target_lons, target_lats, accum,
antialiased=False)
ax.set_xlim(min(target_lons),max(target_lons))
ax.set_ylim(min(target_lats),max(target_lats))
#ax.scatter(lons,lats, np.zeros(len(lats)),s=1)
ax.scatter(target_lons,target_lats, np.zeros(len(target_lons)),s=100,c='k')
###Output
_____no_output_____
###Markdown
What is a Coverage Instrument? In order to be a Coverage Instrument, the object must first of all be an Instrument. The coverage ability is defined by the interface of the library, i.e. a coverage method must be overridden. See the example below.
###Code
#first lets check out the coverage method requirements to be correcly overwritten.
help(Instrument.coverage)
#So if you want to build a taylor made instrument, first you must specify
#the correct arguments to the coverage method. And most importantly, return
#an Iterable containing ones or zeroes depending on the target being seen or not
#at that r,v.
class GodInstrument(Instrument):
def __init__(self):
super().__init__()
def coverage(self, lons, lats, r, v, target, R):
return [1 for _ in range(len(r))]
#as you can see this is a silly example, God sees it all.
from CtllDes.requests.coverage import symmetric_disk
#What does exactly symmetric_disk do?
help(symmetric_disk)
#A more realistic Instrument that uses one of the few coverage methods already written.
class DiskInstrument(Instrument):
def __init__(self):
super().__init__()
self.FOV_min = 0.1*u.rad
self.FOV_max = 0.2*u.rad
def coverage(self, lons, lats, r, v, target, R):
return coverage.symmetric_disk(self.FOV_min,
self.FOV_max,
lons,
lats,
r,
v,
target,
R)
###Output
Help on function symmetric_disk in module CtllDes.requests.coverage:
symmetric_disk(FOV_min, FOV_max, lons, lats, r, target, R)
coverage method.
Disk of coverage centered on subsatellite point.
Parameters
----------
FOV_min : ~astropy.units.quantity.Quantity
minimum field of view in radians
FOV_max : ~astropy.units.quantity.Quantity
maximum field of view in radians
* : default coverage parameters
help(CtllDes.request.coverage.Instrument.coverage) for more
info.
###Markdown
The intuition to take away is that the interface is the coverage method, with the default parameters needed to compute coverage figures. If you have extra parameters that define the coverage, for example a maximum allowed roll angle, they must be included as parameters of the specific Instrument child class.
###Code
#Define your own parameters.
from CtllDes.utils import trigsf
class OnOffCamera(Instrument):
def __init__(self, thresh):
super().__init__()
self.threshold = thresh
self._FOV = np.pi*u.rad/8
@property
def threshold(self):
return self._threshold
@threshold.setter
def threshold(self,thresh):
if not isinstance(thresh,u.Quantity):
thresh = thresh * u.km
elif thresh.unit.physical_type != 'length':
raise ValueError("threshold must be length quantity")
self._threshold = thresh.to(u.km)
@property
def FOV(self):
return self._FOV
def coverage(self,lons,lats,r,v,target,R):
lams = trigsf.get_lam(r,self.FOV,R)
angles = trigsf.get_angles(lons,lats,(target.x*u.deg).to(u.rad),
(target.y*u.deg).to(u.rad))
radiis = np.sqrt(np.sum(r**2,axis=1))
cov = []
for lam,angle,radii in zip(lams,angles,radiis):
if angle < lam:
if self.threshold < radii < 2*R :
cov.append(1)
else:
cov.append(0)
else:
cov.append(0)
return cov
onoffcam = OnOffCamera(300)
sat.update_instruments(onoffcam,f=True)
sat.instruments
newcovs = Coverages.from_sat(sat, tgts, 5, dt=5, J2=True)
###Output
/home/vancii/Documents/Instituto Balseiro/6to semestre/pi/CtllDes/venv/lib/python3.8/site-packages/astropy/units/quantity.py:477: RuntimeWarning: invalid value encountered in arcsin
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
###Markdown
More spherical trigonometry calculations will be added (in development right now) to make creating coverage methods easier and faster.
###Code
newcovs.to_df()
constellation = ctll.Ctll.from_sats(sat)
help(Camera)
from CtllDes.requests.coverage import symmetric_with_roll
#What does exactly symmetric_disk do?
help(symmetric_with_roll)
#A more realistic Instrument that uses one of the few coverage methods already written.
class RollCamera(Instrument):
def __init__(self,FOV,roll_angle):
"""Constructor for RollCamera.
Parameters
----------
FOV : ~astropy.units.quantity.Quantity
field of view, angle quantity
roll_angle : ~astropy.units.quantity.Quantity
maximum rolling angle
"""
super().__init__()
self.FOV = FOV.to(u.rad)
self.roll = roll_angle.to(u.rad)
def coverage(self, lons, lats, r, v, target, R):
return symmetric_with_roll(self.FOV,
lons,
lats,
r,
v,
target,
R,
roll_angle = self.roll)
roll_cam = RollCamera(0.15*u.rad,15*u.deg)
sat.update_instruments(roll_cam, f=True)
sat.instruments[0]
roll_cov = Coverages.from_sat(sat,tgts,20, dt=5, drag=False, J2=True)
rollcovdf = roll_cov.to_df()
rollcovdf
roll_accum = rollcovdf['accumulated'].to_numpy(dtype=float)
roll_accum /= max(roll_accum)
response_time = rollcovdf['response time'].to_numpy()
response_time = 1/response_time
response_time -= min(response_time)
response_time /= max(response_time)
roll_avg = rollcovdf['average time gap'].to_numpy(dtype=float)
roll_avg = 1/roll_avg
roll_avg -= min(roll_avg)
roll_avg /= max(roll_avg)
fig1 = plt.figure(figsize=(10,10))
ax1 = fig1.add_subplot(projection='3d')
ax1.plot_trisurf(target_lons, target_lats, roll_accum,
antialiased=False,cmap='viridis')
ax1.set_title("Coverage over Perú")
ax1.set_xlabel("longitude [°]")
ax1.set_ylabel("latitude [°]")
ax1.set_zlabel("Accumulated time of coverage, normalized")
ax1.set_xlim(min(target_lons),max(target_lons))
ax1.set_ylim(min(target_lats),max(target_lats))
ax1.set_zlim(min(roll_accum),max(roll_accum))
ax1.scatter(target_lons,target_lats, np.zeros(len(target_lons)),
s=100,c='k')
fig2 = plt.figure(figsize=(10,10))
ax2 = fig2.add_subplot(projection='3d')
ax2.plot_trisurf(target_lons, target_lats, response_time,
antialiased=False,cmap='viridis')
ax2.set_title("Response time over Perú")
ax2.set_xlabel("longitude [°]")
ax2.set_ylabel("latitude [°]")
ax2.set_zlabel("1/tᵣ normalized")
ax2.set_xlim(min(target_lons),max(target_lons))
ax2.set_ylim(min(target_lats),max(target_lats))
ax2.scatter(target_lons,target_lats, np.zeros(len(target_lons)),
s=100,c='k')
fig3 = plt.figure(figsize=(10,10))
ax3 = fig3.add_subplot(projection='3d')
ax3.plot_trisurf(target_lons, target_lats, roll_avg,
antialiased=False,cmap='viridis')
ax3.set_title("Average time gap over Perú")
ax3.set_xlabel("longitude [°]")
ax3.set_ylabel("latitude [°]")
ax3.set_zlabel("Averaget time gap normalized")
ax3.set_xlim(min(target_lons),max(target_lons))
ax3.set_ylim(min(target_lats),max(target_lats))
ax3.scatter(target_lons,target_lats, np.zeros(len(target_lons)),
s=100,c='k')
###Output
_____no_output_____ |
data-science-tutorial-for-beginners.ipynb | ###Markdown
\*Contents in this Jupyter Notebook are from and can be found in [DATAI's kaggle kernel](https://www.kaggle.com/kanncaa1). The order of the content were changed for this workshop session.Data scientist need to have these skills:1. Basic Tools: Like python, R or SQL. You do not need to know everything. What you only need is to learn how to use **python**1. Basic Statistics: Like mean, median or standart deviation. If you know basic statistics, you can use **python** easily. 1. Data Munging: Working with messy and difficult data. Like a inconsistent date and string formatting. As you guess, **python** helps us.1. Data Visualization: Title is actually explanatory. We will visualize the data with **python** like matplot and seaborn libraries.1. Machine Learning: You do not need to understand math behind the machine learning technique. You only need is understanding basics of machine learning and learning how to implement it while using **python**.**Content:**1. [Introduction to Python:](1) 1. [Dictionaries ](3) 1. [Loop data structures](6) 1. [User defined function](8) 1. [Scope](9) 1. [Nested function](10) 1. [Default and flexible arguments](11) 1. [Lambda function](12) 1. [Anonymous function](13) 1. [Iterators](14) 1. [List comprehension](15)1. [Python Data Science Toolbox:](7) 1. [Pandas](4) 1. [Data types](23) 1. [Logic, control flow and filtering](5) 1. [Matplotlib](2)1. [Cleaning Data](16) 1. [Exploratory data analysis](18) 1. [Visual exploratory data analysis](19) 1. [Diagnose data for cleaning](17) 1. [Tidy data](20) 1. [Pivoting data](21) 1. [Concatenating data](22) 1. [Missing data and testing with assert](24)1. [Pandas Foundation](25) 1. [Review of pandas](26) 1. [Building data frames from scratch](27) 1. [Visual exploratory data analysis](28) 1. [Statistical explatory data analysis](29) 1. [Indexing pandas time series](30) 1. [Resampling pandas time series](31)1. [Manipulating Data Frames with Pandas](32) 1. [Indexing data frames](33) 1. [Slicing data frames](34) 1. [Filtering data frames](35) 1. [Transforming data frames](36) 1. [Index objects and labeled data](37) 1. [Hierarchical indexing](38) 1. [Pivoting data frames](39) 1. [Stacking and unstacking data frames](40) 1. [Melting data frames](41) 1. [Categoricals and groupby](42)1. Data Visualization 1. Seaborn: https://www.kaggle.com/kanncaa1/seaborn-for-beginners 1. Bokeh 1: https://www.kaggle.com/kanncaa1/interactive-bokeh-tutorial-part-1 1. Rare Visualization: https://www.kaggle.com/kanncaa1/rare-visualization-tools 1. Plotly: https://www.kaggle.com/kanncaa1/plotly-tutorial-for-beginners1. Machine Learning 1. https://www.kaggle.com/kanncaa1/machine-learning-tutorial-for-beginners/1. Deep Learning 1. https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners1. Time Series Prediction 1. https://www.kaggle.com/kanncaa1/time-series-prediction-tutorial-with-eda1. Statistic 1. https://www.kaggle.com/kanncaa1/basic-statistic-tutorial-for-beginners1. Deep Learning with Pytorch 1. Artificial Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers 1. Convolutional Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers 1. Recurrent Neural Network: https://www.kaggle.com/kanncaa1/recurrent-neural-network-with-pytorch 1. 
INTRODUCTION TO PYTHON * Basic dictionary features* While and for loops* User defined function * Scope* Nested function* Default and flexible arguments* Lambda function* Anonymous function* Iterators* List comprehension DICTIONARY: Why do we need dictionaries?* It has 'key' and 'value'* Faster than lists. What is key and value? Example:* dictionary = {'spain' : 'madrid'}* Key is spain.* Value is madrid.**It's that easy.** Let's practice some other properties like keys(), values(), update, add, check, remove key, remove all entries and remove dictionary.
###Code
#create dictionary and look its keys and values
dictionary = {'spain' : 'madrid','usa' : 'vegas'}
print(dictionary.keys())
print(dictionary.values())
# Keys have to be immutable objects like string, boolean, float, integer or tubles
# List is not immutable
# Keys are unique
dictionary['spain'] = "barcelona" # update existing entry
print(dictionary)
dictionary['france'] = "paris" # Add new entry
print(dictionary)
del dictionary['spain'] # remove entry with key 'spain'
print(dictionary)
print('france' in dictionary) # check include or not
dictionary.clear() # remove all entries in dict
print(dictionary)
# In order to run all code you need to take comment this line
# del dictionary # delete entire dictionary
print(dictionary) # it gives error because dictionary is deleted
###Output
_____no_output_____
###Markdown
WHILE and FOR LOOPSWe will learn most basic while and for loops
###Code
# Stay in loop if condition( i is not equal 5) is true
i = 0
while i != 5 :
print('i is: ',i)
i +=1
print(i,' is equal to 5')
# Stay in loop if condition( i is not equal 5) is true
lis = [1,2,3,4,5]
for i in lis:
print('i is: ',i)
print('')
# Enumerate index and value of list
# index : value = 0:1, 1:2, 2:3, 3:4, 4:5
for index, value in enumerate(lis):
print(index," : ",value)
print('')
# For dictionaries
# We can use for loop to achive key and value of dictionary. We learnt key and value at dictionary part.
dictionary = {'spain':'madrid','france':'paris'}
for key,value in dictionary.items():
print(key," : ",value)
print('')
###Output
_____no_output_____
###Markdown
USER DEFINED FUNCTION: What we need to know about functions:* docstrings: documentation for functions. Example: for f(): """This is a docstring for documentation of function f"""* tuple: a sequence of immutable python objects. You can't modify its values. A tuple uses parentheses, like t = (1,2,3). Unpack a tuple into several variables like a,b,c = t
###Code
# example of what we learn above
def tuble_ex():
""" return defined t tuble"""
t = (1,2,3)
return t
a,b,c = tuble_ex()
print(a,b,c)
###Output
_____no_output_____
###Markdown
SCOPE: What we need to know about scope:* global: defined in the main body of the script* local: defined inside a function* built in scope: names in the predefined built in scope module such as print, len. Let's make some basic examples
###Code
# guess print what
x = 2
def f():
x = 3
return x
print(x) # x = 2 global scope
print(f()) # x = 3 local scope
# What if there is no local scope
x = 5
def f():
y = 2*x # there is no local scope x
return y
print(f()) # it uses global scope x
# First local scopesearched, then global scope searched, if two of them cannot be found lastly built in scope searched.
# How can we learn what is built in scope
import builtins
dir(builtins)
###Output
_____no_output_____
###Markdown
NESTED FUNCTION* a function inside a function.* There is a LEGB rule, that is: search the local scope, enclosing functions, the global scope and the built in scope, respectively.
###Code
#nested function
def square():
""" return square of value """
def add():
""" add two local variable """
x = 2
y = 3
z = x + y
return z
return add()**2
print(square())
###Output
_____no_output_____
###Markdown
DEFAULT and FLEXIBLE ARGUMENTS* Default argument example: def f(a, b=1): """ b = 1 is default argument"""* Flexible argument example: def f(*args): """ *args can be one or more"""def f(** kwargs) """ **kwargs is a dictionary""" lets write some code to practice
###Code
# default arguments
def f(a, b = 1, c = 2):
y = a + b + c
return y
print(f(5))
# what if we want to change default arguments
print(f(5,4,3))
# flexible arguments *args
def f(*args):
for i in args:
print(i)
f(1)
print("")
f(1,2,3,4)
# flexible arguments **kwargs that is dictionary
def f(**kwargs):
""" print key and value of dictionary"""
for key, value in kwargs.items(): # If you do not understand this part turn for loop part and look at dictionary in for loop
print(key, " ", value)
f(country = 'spain', capital = 'madrid', population = 123456)
###Output
_____no_output_____
###Markdown
LAMBDA FUNCTION: A faster way of writing functions
###Code
# lambda function
square = lambda x: x**2 # where x is name of argument
print(square(4))
tot = lambda x,y,z: x+y+z # where x,y,z are names of arguments
print(tot(1,2,3))
###Output
_____no_output_____
###Markdown
ANONYMOUS FUNCTION: Like a lambda function, but it can take more than one argument.* map(func,seq) : applies a function to all the items in a list
###Code
number_list = [1,2,3]
y = map(lambda x:x**2,number_list)
print(list(y))
###Output
_____no_output_____
###Markdown
ITERATORS* iterable is an object that can return an iterator* iterable: an object with an associated iter() method example: list, strings and dictionaries* iterator: produces next value with next() method
###Code
# iteration example
name = "ronaldo"
it = iter(name)
print(next(it)) # print next iteration
print(*it) # print remaining iteration
###Output
_____no_output_____
###Markdown
zip(): zip lists
###Code
# zip example
list1 = [1,2,3,4]
list2 = [5,6,7,8]
z = zip(list1,list2)
print(z)
z_list = list(z)
print(z_list)
un_zip = zip(*z_list)
un_list1,un_list2 = list(un_zip) # unzip returns tuble
print(un_list1)
print(un_list2)
print(type(un_list2))
###Output
_____no_output_____
###Markdown
LIST COMPREHENSION **One of the most important topics of this kernel.** We use list comprehensions for data analysis often. list comprehension: collapse for loops for building lists into a single line. Ex: num1 = [1,2,3] and we want to make it num2 = [2,3,4]. This can be done with a for loop. However it is unnecessarily long. We can make it one line of code, that is, a list comprehension.
###Code
# Example of list comprehension
num1 = [1,2,3]
num2 = [i + 1 for i in num1 ]
print(num2)
###Output
_____no_output_____
###Markdown
[i + 1 for i in num1 ]: list of comprehension i +1: list comprehension syntax for i in num1: for loop syntax i: iterator num1: iterable object
###Code
# Conditionals on iterable
num1 = [5,10,15]
num2 = [i**2 if i == 10 else i-5 if i < 7 else i+5 for i in num1]
print(num2)
###Output
_____no_output_____
###Markdown
2. PYTHON DATA SCIENCE TOOLBOX In this part, you learn:* how to import csv file* data types* basic pandas features like filtering that is actually something always used and main for being data scientist* plotting line,scatter and histogram In programming, a [module](https://www.learnpython.org/en/Modules_and_Packages) is a piece of software that has a specific functionality. For example, when building a ping pong game, one module would be responsible for the game logic, andanother module would be responsible for drawing the game on the screen. Each module is a different file, which can be edited separately.Packages are namespaces which contain multiple packages and modules themselves.
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns # visualization tool
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
###Output
_____no_output_____
###Markdown
PANDAS* CSV: comma - separated values We can use head, tail, columns, shape and info methods to overview the data
###Code
data = pd.read_csv('../input/pokemon.csv')
series = data['Defense'] # data['Defense'] = series
print(type(series))
data_frame = data[['Defense']] # data[['Defense']] = data frame
print(type(data_frame))
# head shows first 5 rows
data.head()
# tail shows last 5 rows
data.tail()
# columns gives column names of features
data.columns
# shape gives number of rows and columns in a tuble
data.shape
# info gives data type like dataframe, number of sample or row, number of feature or column, feature types and memory usage
data.info()
# We can also use describe to see the basic statistics about the data
data.describe()
###Output
_____no_output_____
###Markdown
DATA TYPES: There are 5 basic data types: object(string), boolean, integer, float and categorical. We can convert between data types, like from str to categorical or from int to float. Why is category important: * it makes the dataframe smaller in memory (see the check at the end of the next cell) * it can be utilized for analysis, especially for sklearn (we will learn later)
###Code
data.dtypes
# lets convert object(str) to categorical and int to float.
data['Type 1'] = data['Type 1'].astype('category')
data['Speed'] = data['Speed'].astype('float')
# As you can see Type 1 is converted from object to categorical
# And Speed is converted from int to float
data.dtypes
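# The memory saving mentioned above can be checked directly; the exact numbers depend
# on the dataset, but the categorical column is typically much smaller than object.
print(data['Type 1'].memory_usage(deep=True))
print(data['Type 1'].astype('object').memory_usage(deep=True))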
###Output
_____no_output_____
###Markdown
Before continuing with pandas, we need to learn **logic, control flow** and **filtering.** Comparison operators: ==, !=, <, >, <=, >= Boolean operators: and, or, not Filtering pandas
###Code
# Comparison operator
print(3 > 2)
print(3!=2)
# Boolean operators
print(True and False)
print(True or False)
# 1 - Filtering Pandas data frame
x = data['Defense']>200 # There are only 3 pokemons who have higher defense value than 200
data[x]
# 2 - Filtering pandas with logical_and
# There are only 2 pokemons who have higher defence value than 2oo and higher attack value than 100
data[np.logical_and(data['Defense']>200, data['Attack']>100 )]
# This is also same with previous code line. Therefore we can also use '&' for filtering.
data[(data['Defense']>200) & (data['Attack']>100)]
###Output
_____no_output_____
###Markdown
We can use what we learned so far to do some calculation and manipulation on the data.
###Code
# lets return pokemon csv and make one more list comprehension example
# lets classify pokemons whether they have high or low speed. Our threshold is average speed.
threshold = sum(data.Speed)/len(data.Speed)
data["speed_level"] = ["high" if i > threshold else "low" for i in data.Speed]
data.loc[:10,["speed_level","Speed"]] # we will learn loc more detailed later
###Output
_____no_output_____
###Markdown
MATPLOTLIB: Matplotlib is a python library that helps us to plot data. The easiest and most basic plots are line, scatter and histogram plots.* Line plot is better when the x axis is time.* Scatter is better when there is correlation between two variables* Histogram is better when we need to see the distribution of numerical data.* Customization: colors, labels, thickness of line, title, opacity, grid, figsize, ticks of axis and linestyle
###Code
data.corr()
###Output
_____no_output_____
###Markdown
See `matplotlib` documentation for using [`.subplots()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html)
###Code
#correlation map
f,ax = plt.subplots(figsize=(18, 18)) # create a canvas
sns.heatmap(data.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax) #
plt.show()
# Line Plot
# color = color, label = label, linewidth = width of line, alpha = opacity, grid = grid, linestyle = sytle of line
data['Speed'].plot(kind = 'line', color = 'g',label = 'Speed',linewidth=1,alpha = 0.5,grid = True,linestyle = ':')
data['Defense'].plot(color = 'r',label = 'Defense',linewidth=1, alpha = 0.5,grid = True,linestyle = '-.')
plt.legend(loc='upper right') # legend = puts label into plot
plt.xlabel('x axis') # label = name of label
plt.ylabel('y axis')
plt.title('Line Plot') # title = title of plot
plt.show()
# Scatter Plot
# x = attack, y = defense
data.plot(kind='scatter', x='Attack', y='Defense',alpha = 0.5,color = 'red')
plt.xlabel('Attack') # label = name of label
plt.ylabel('Defence')
plt.title('Attack Defense Scatter Plot') # title = title of plot
# Histogram
# bins = number of bar in figure
data['Speed'].plot(kind = 'hist',bins = 50,figsize = (12,12))
plt.show()
# clf() = cleans it up again you can start a fresh
data['Speed'].plot(kind = 'hist',bins = 50)
plt.clf()
# We cannot see plot due to clf()
###Output
_____no_output_____
###Markdown
3. EXPLORING AND CLEANING DATA In this part, you will learn:* Exploratory data analysis* Visual exploratory data analysis* Diagnose data for cleaning* Tidy data* Pivoting data* Concatenating data* Missing data and testing with assert EXPLORATORY DATA ANALYSIS value_counts(): frequency counts. outliers: values that are considerably higher or lower than the rest of the data.* Let's say the value at 75% is Q3 and the value at 25% is Q1. * Outliers are smaller than Q1 - 1.5(Q3-Q1) and bigger than Q3 + 1.5(Q3-Q1). (Q3-Q1) = IQR (see the short example at the end of the next cell). We will use the describe() method. The describe method includes:* count: number of entries* mean: average of entries* std: standard deviation* min: minimum entry* 25%: first quantile* 50%: median or second quantile* 75%: third quantile* max: maximum entry What is a quantile?* 1,4,5,6,8,9,11,12,13,14,15,16,17* The median is the number that is in the **middle** of the sequence. In this case it would be 11.* The lower quartile is the median between the smallest number and the median, i.e. between 1 and 11, which is 6.* For the upper quartile, you find the median between the median and the largest number, i.e. between 11 and 17, which is 14.
###Code
# For example lets look at the frequency of pokemon types
print(data['Type 1'].value_counts(dropna =False)) # if there are nan values that also be counted
# As it can be seen below there are 112 water pokemon or 70 grass pokemon
# For example max HP is 255 or min defense is 5
data.describe() #ignore null entries
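# A small illustration of the IQR outlier rule described above, using the Attack column:
Q1 = data['Attack'].quantile(0.25)
Q3 = data['Attack'].quantile(0.75)
IQR = Q3 - Q1
print('Attack outlier bounds:', Q1 - 1.5*IQR, 'to', Q3 + 1.5*IQR)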
###Output
_____no_output_____
###Markdown
VISUAL EXPLORATORY DATA ANALYSIS* Box plots: visualize basic statistics like outliers, min/max or quantiles
###Code
# For example: compare attack of pokemons that are legendary or not
# Black line at top is max
# Blue line at top is 75%
# Red line is median (50%)
# Blue line at bottom is 25%
# Black line at bottom is min
# There are no outliers
data.boxplot(column='Attack',by = 'Legendary')
###Output
_____no_output_____
###Markdown
DIAGNOSE DATA for CLEANING: Unclean data:* Column name inconsistency like upper-lower case letters or spaces between words (a small example is at the top of the next cell)* missing data* different language* extreme values* ... There is another short [tutorial](https://realpython.com/python-data-cleaning-numpy-pandas/) about cleaning data in python that you can read. TIDY DATA: We tidy data with melt(). Describing melt is confusing, therefore let's make an example to understand it.
###Code
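# A quick example of fixing the column-name inconsistency mentioned above, done on a
# copy so the original column names used later in this notebook are preserved:
tmp = data.copy()
tmp.columns = [col.lower().replace(' ', '_') for col in tmp.columns]
print(tmp.columns)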
# Firstly I create new data from the pokemon data to explain melt more easily.
data_new = data.head() # I only take 5 rows into new data
data_new
# lets melt
# id_vars = what we do not wish to melt
# value_vars = what we want to melt
melted = pd.melt(frame=data_new,id_vars = 'Name', value_vars= ['Attack','Defense'])
melted
###Output
_____no_output_____
###Markdown
PIVOTING DATA: Reverse of melting.
###Code
# Index is name
# I want to make that columns are variable
# Finally values in columns are value
melted.pivot(index = 'Name', columns = 'variable',values='value')
###Output
_____no_output_____
###Markdown
CONCATENATING DATA: We can concatenate two dataframes
###Code
# Firstly lets create 2 data frame
data1 = data.head()
data2= data.tail()
conc_data_row = pd.concat([data1,data2],axis =0,ignore_index =True) # axis = 0 : adds dataframes in row
conc_data_row
data1 = data['Attack'].head()
data2= data['Defense'].head()
conc_data_col = pd.concat([data1,data2],axis =1) # axis = 0 : adds dataframes in row
conc_data_col
###Output
_____no_output_____
###Markdown
MISSING DATA and TESTING WITH ASSERT: If we encounter missing data, what we can do:* leave as is* drop them with dropna()* fill missing values with fillna()* fill missing values with test statistics like the mean (see the example at the end of the next cell). Assert statement: a check that you can turn on or turn off when you are done with your testing of the program
###Code
# Lets look at does pokemon data have nan value
# As you can see there are 800 entries. However Type 2 has 414 non-null object so it has 386 null object.
data.info()
# Lets chech Type 2
data["Type 2"].value_counts(dropna =False)
# As you can see, there are 386 NAN value
# Lets drop nan values
data1=data # also we will use data to fill missing value so I assign it to data1 variable
data1["Type 2"].dropna(inplace = True) # inplace = True means we do not assign it to new variable. Changes automatically assigned to data
# So does it work ?
# Lets check with assert statement
# Assert statement:
assert 1==1 # return nothing because it is true
# In order to run all code, we need to make this line comment
# assert 1==2 # return error because it is false
assert data['Type 2'].notnull().all() # returns nothing because we drop nan values
data["Type 2"].fillna('empty',inplace = True)
assert data['Type 2'].notnull().all() # returns nothing because we do not have nan values
# # With assert statement we can check a lot of thing. For example
# assert data.columns[1] == 'Name'
# assert data.Speed.dtypes == np.int
###Output
_____no_output_____
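###Markdown
The list above also mentions filling missing values with a test statistic such as the mean; a minimal sketch of that option on a numeric column (illustrative only, since Type 2 has already been filled with 'empty' above):
###Code
# Sketch: fill missing values in a numeric column with the column mean
data['HP'] = data['HP'].fillna(data['HP'].mean())
assert data['HP'].notnull().all() # returns nothing because there are no nan values left
###Output
_____no_output_____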
###Markdown
4. PANDAS FOUNDATION REVIEW of PANDAS As you may have noticed, I do not give every idea at the same time. Although we have learned some basics of pandas, we will now go deeper.* single column = series* NaN = not a number* dataframe.values = numpy BUILDING DATA FRAMES FROM SCRATCH* We can build data frames from csv as we did earlier.* We can also build dataframes from dictionaries * zip() method: This function returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables.* Adding a new column* Broadcasting: Create a new column and assign a value to the entire column
###Code
# data frames from dictionary
country = ["Spain","France"]
population = ["11","12"]
list_label = ["country","population"]
list_col = [country,population]
zipped = list(zip(list_label,list_col))
data_dict = dict(zipped)
df = pd.DataFrame(data_dict)
df
# Add new columns
df["capital"] = ["madrid","paris"]
df
# Broadcasting
df["income"] = 0 #Broadcasting entire column
df
###Output
_____no_output_____
###Markdown
VISUAL EXPLORATORY DATA ANALYSIS* Plot* Subplot* Histogram: * bins: number of bins * range (tuple): min and max values of bins * normed (boolean): normalize or not * cumulative (boolean): compute cumulative distribution
###Code
# Plotting all data
data1 = data.loc[:,["Attack","Defense","Speed"]]
data1.plot()
# it is confusing
# subplots
data1.plot(subplots = True)
plt.show()
# scatter plot
data1.plot(kind = "scatter",x="Attack",y = "Defense")
plt.show()
# hist plot
data1.plot(kind = "hist",y = "Defense",bins = 50,range= (0,250),normed = True)
# histogram subplot with non cumulative and cumulative
fig, axes = plt.subplots(nrows=2,ncols=1)
data1.plot(kind = "hist",y = "Defense",bins = 50,range= (0,250),normed = True,ax = axes[0])
data1.plot(kind = "hist",y = "Defense",bins = 50,range= (0,250),normed = True,ax = axes[1],cumulative = True)
plt.savefig('graph.png')
plt
###Output
_____no_output_____
###Markdown
STATISTICAL EXPLORATORY DATA ANALYSIS I already explained this in previous parts; however, let's look at it one more time.* count: number of entries* mean: average of entries* std: standard deviation* min: minimum entry* 25%: first quantile* 50%: median or second quantile* 75%: third quantile* max: maximum entry
###Code
data.describe()
###Output
_____no_output_____
###Markdown
INDEXING PANDAS TIME SERIES* datetime = object* parse_dates(boolean): Transform date to ISO 8601 (yyyy-mm-dd hh:mm:ss ) format
###Code
time_list = ["1992-03-08","1992-04-12"]
print(type(time_list[1])) # As you can see date is string
# however we want it to be datetime object
datetime_object = pd.to_datetime(time_list)
print(type(datetime_object))
# close warning
import warnings
warnings.filterwarnings("ignore")
# In order to practice lets take head of pokemon data and add it a time list
data2 = data.head()
date_list = ["1992-01-10","1992-02-10","1992-03-10","1993-03-15","1993-03-16"]
datetime_object = pd.to_datetime(date_list)
data2["date"] = datetime_object
# lets make date as index
data2= data2.set_index("date")
data2
# Now we can select according to our date index
print(data2.loc["1993-03-16"])
print(data2.loc["1992-03-10":"1993-03-16"])
###Output
_____no_output_____
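###Markdown
The parse_dates option mentioned above can do the string-to-datetime conversion while reading a file. A small self-contained sketch with an in-memory CSV (the pokemon.csv used here has no date column, so this is only illustrative):
###Code
# Sketch: read_csv can parse date strings directly with parse_dates
from io import StringIO
csv_text = "date,value\n1992-03-08,10\n1992-04-12,20\n"
df_dates = pd.read_csv(StringIO(csv_text), parse_dates=["date"], index_col="date")
print(df_dates.index) # DatetimeIndex with datetime64 values
###Output
_____no_output_____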
###Markdown
RESAMPLING PANDAS TIME SERIES* Resampling: statistical method over different time intervals * Needs a string to specify the frequency like "M" = month or "A" = year* Downsampling: reduce date time rows to a slower frequency, like from daily to weekly* Upsampling: increase date time rows to a faster frequency, like from daily to hourly* Interpolate: Interpolate values according to different methods like 'linear', 'time' or 'index' * https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.interpolate.html
###Code
data2.resample("A")
# We will use data2 that we create at previous part
data2.resample("A").mean()
# Lets resample with month
data2.resample("M").mean()
# As you can see there are a lot of nan because data2 does not include all months
# In real life (when the data is real, not created by us like data2) we can solve this problem with interpolation
# We can interpolate from the first value
resampled = data2.resample("M").first()
resampled.interpolate("ffill")
# Or we can interpolate with mean()
data2.resample("M").mean().interpolate("linear")
###Output
_____no_output_____
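###Markdown
Upsampling, mentioned above, works the same way but towards a faster frequency. A minimal sketch using the same data2: resample the sparse dates to daily rows (mean() keeps the numeric columns) and interpolate linearly.
###Code
# Sketch: upsample data2 to daily frequency and fill the gaps by linear interpolation
data2.resample("D").mean().interpolate("linear").head(10)
###Output
_____no_output_____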
###Markdown
MANIPULATING DATA FRAMES WITH PANDAS INDEXING DATA FRAMES* Indexing using square brackets* Using column attribute and row label* Using loc accessor* Selecting only some columns
###Code
# read data
data = pd.read_csv('../input/pokemon.csv')
data= data.set_index("#")
data.head()
# indexing using square brackets
data["HP"][1]
# using column attribute and row label
data.HP[1]
# using loc accessor
data.loc[1,["HP"]]
# Selecting only some columns
data[["HP","Attack"]]
###Output
_____no_output_____
###Markdown
SLICING DATA FRAME* Difference between selecting columns * Series and data frames* Slicing and indexing series* Reverse slicing * From something to end
###Code
# Difference between selecting columns: series and dataframes
print(type(data["HP"])) # series
print(type(data[["HP"]])) # data frames
# Slicing and indexing series
data.loc[1:10,"HP":"Defense"] # 10 and "Defense" are inclusive
# Reverse slicing
data.loc[10:1:-1,"HP":"Defense"]
# From something to end
data.loc[1:10,"Speed":]
###Output
_____no_output_____
###Markdown
FILTERING DATA FRAMES Creating boolean series, combining filters, and filtering a column based on others
###Code
# Creating boolean series
boolean = data.HP > 200
data[boolean]
# Combining filters
first_filter = data.HP > 150
second_filter = data.Speed > 35
data[first_filter & second_filter]
# Filtering column based others
data.HP[data.Speed<15]
###Output
_____no_output_____
###Markdown
TRANSFORMING DATA* Plain python functions* Lambda function: to apply arbitrary python function to every element* Defining column using other columns
###Code
# Plain python functions
def div(n):
return n/2
data.HP.apply(div)
# Or we can use lambda function
data.HP.apply(lambda n : n/2)
# Defining column using other columns
data["total_power"] = data.Attack + data.Defense
data.head()
###Output
_____no_output_____
###Markdown
INDEX OBJECTS AND LABELED DATA index: a sequence of labels
###Code
# our index name is this:
print(data.index.name)
# lets change it
data.index.name = "index_name"
data.head()
# Overwrite index
# if we want to modify index we need to change all of them.
data.head()
# first copy of our data to data3 then change index
data3 = data.copy()
# let's make the index start from 100. It is not a remarkable change, just an example
data3.index = range(100,900,1)
data3.head()
# We can make one of the column as index. I actually did it at the beginning of manipulating data frames with pandas section
# It was like this
# data= data.set_index("#")
# also you can use
# data.index = data["#"]
###Output
_____no_output_____
###Markdown
HIERARCHICAL INDEXING* Setting indexing
###Code
# lets read data frame one more time to start from beginning
data = pd.read_csv('../input/pokemon.csv')
data.head()
# As you can see there is index. However we want to set one or more column to be index
# Setting index : type 1 is outer type 2 is inner index
data1 = data.set_index(["Type 1","Type 2"])
data1.head(100)
# data1.loc["Fire","Flying"] # how to use the indexes
###Output
_____no_output_____
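###Markdown
A short sketch of how rows can be selected through this hierarchical index (the commented loc line above hints at the same idea); the tuple form addresses both index levels at once.
###Code
# Sketch: select via the outer index level, or via both levels with a tuple
data1.loc["Fire"].head() # all pokemon whose Type 1 is Fire
data1.loc[("Fire","Flying")].head() # pokemon with Type 1 Fire and Type 2 Flying
###Output
_____no_output_____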
###Markdown
PIVOTING DATA FRAMES* pivoting: reshape tool
###Code
dic = {"treatment":["A","A","B","B"],"gender":["F","M","F","M"],"response":[10,45,5,9],"age":[15,4,72,65]}
df = pd.DataFrame(dic)
df
# pivoting
df.pivot(index="treatment",columns = "gender",values="response")
###Output
_____no_output_____
###Markdown
STACKING and UNSTACKING DATAFRAME* deal with multi-label indexes* level: position of unstacked index* swaplevel: change inner and outer level index position Read more about unstacking [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html)> Pivot a level of the (necessarily hierarchical) index labels, returning a DataFrame having a new level of column labels whose inner-most level consists of the pivoted index labels.
###Code
df1 = df.set_index(["treatment","gender"])
df1
# lets unstack it
# level determines indexes
df1.unstack(level=0)
df1.unstack(level=1)
# change inner and outer level index position
df2 = df1.swaplevel(0,1)
df2
###Output
_____no_output_____
###Markdown
MELTING DATA FRAMES* Reverse of pivoting
###Code
df
# df.pivot(index="treatment",columns = "gender",values="response")
pd.melt(df,id_vars="treatment",value_vars=["age","response"])
###Output
_____no_output_____
###Markdown
CATEGORICALS AND GROUPBY
###Code
# We will use df
df
# according to treatment take means of other features
df.groupby("treatment").mean() # mean is aggregation / reduction method
# there are other methods like sum, std,max or min
# we can only choose one of the feature
df.groupby("treatment").age.max()
# Or we can choose multiple features
df.groupby("treatment")[["age","response"]].min()
df.info()
# as you can see gender is object
# However, if we use groupby, we can convert it to categorical data.
# Because categorical data uses less memory, it speeds up operations like groupby
#df["gender"] = df["gender"].astype("category")
#df["treatment"] = df["treatment"].astype("category")
#df.info()
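# A small sketch (added for illustration): groupby can aggregate with several
# methods at once, e.g. mean, std and max of the numeric columns per treatment
df.groupby("treatment")[["age", "response"]].agg(["mean", "std", "max"])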
###Output
_____no_output_____ |
RNN_Lab.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Part-of-Speech Tagging with Recurrent Neural Networks Your task in this assignment is to implement a simple part-of-speech tagger based on recurrent neural networks. Problem specification Your task in this assignment is: 1. to build a part-of-speech tagger based on a recurrent neural network architecture, 2. to train this tagger on the provided training data and identify a good model, 3. to evaluate the performance of this model on the provided test data. To identify a good model, you can use the provided development (validation) data. Part-of-speech tagging Part-of-speech (POS) tagging is the task of labelling words (tokens) with [parts of speech](https://en.wikipedia.org/wiki/Part_of_speech). To give an example, consider the sentence *Parker hates parsnips*. In this sentence, the word *Parker* should be labelled as a proper noun (a noun that is the name of a person), *hates* should be labelled as a verb, and *parsnips* should be labelled as a (common) noun. Part-of-speech tagging is an essential ingredient of many state-of-the-art natural language understanding systems. Part-of-speech tagging can be cast as a supervised machine learning problem where the gold-standard data consists of sentences whose words have been manually annotated with parts of speech. For the present assignment you will be using a corpus built over the source material of the [English Web Treebank](https://catalog.ldc.upenn.edu/ldc2012t13), consisting of approximately 16,000 sentences with 254,000 tokens. The corpus has been released by the [Universal Dependencies Project](http://universaldependencies.org). To make it easier to compare systems, the gold-standard data has been split into three parts: training, development (validation), and test. The following cell provides a function that can be used to load the data.
###Code
def read_data(path):
with open(path, encoding='utf-8') as fp:
result = []
for line in fp:
line = line.rstrip()
if len(line) == 0:
yield result
result = []
elif not line.startswith('#'):
columns = line.split()
if columns[0].isdigit():
result.append((columns[1], columns[3]))
###Output
_____no_output_____
###Markdown
The next cell loads the data:
###Code
train_data = list(read_data('/content/drive/My Drive/Colab Notebooks/RNN/en_ewt-ud-train.conllu'))
print('Number of sentences in the training data: {}'.format(len(train_data)))
dev_data = list(read_data('/content/drive/My Drive/Colab Notebooks/RNN/en_ewt-ud-dev.conllu'))
print('Number of sentences in the development data: {}'.format(len(dev_data)))
test_data = list(read_data('/content/drive/My Drive/Colab Notebooks/RNN/en_ewt-ud-test.conllu'))
print('Number of sentences in the test data: {}'.format(len(test_data)))
###Output
Number of sentences in the training data: 12543
Number of sentences in the development data: 2002
Number of sentences in the test data: 2077
###Markdown
From a Python perspective, each of the data sets is a list of what we shall refer to as *tagged sentences*. A tagged sentence, in turn, is a list of pairs $(w,t)$, where $w$ is a word token and $t$ is the word’s POS tag. Here is an example from the training data to show you how this looks like:
###Code
train_data[42]
###Output
_____no_output_____
###Markdown
You will see part-of-speech tags such as `VERB` for verb, `NOUN` for noun, and `ADV` for adverb. If you are interested in learning more about the tag set used in the gold-standard data, you can have a look at the documentation of the [Universal POS tags](http://universaldependencies.org/u/pos/all.html). However, you do not need to understand the meaning of the POS tags to solve this assignment; you can simply treat them as labels drawn from a finite set of alternatives. Network architecture The proposed network architecture for your tagger is a sequential model with three layers, illustrated below: an embedding, a bidirectional LSTM, and a softmax layer. The embedding turns word indexes (integers representing words) into fixed-size dense vectors which are then fed into the bidirectional LSTM. The output of the LSTM at each position of the sentence is passed to a softmax layer which predicts the POS tag for the word at that position. To implement the network architecture, you will use [Keras](https://keras.io/). Keras comes with extensive online documentation, and reading the relevant parts of this documentation will be essential when working on this assignment. We suggest starting with the tutorial [Getting started with the Keras Sequential model](https://keras.io/getting-started/sequential-model-guide/). After that, you should have a look at some of the examples mentioned in that tutorial, and in particular the [Bidirectional LSTM](https://keras.io/examples/imdb_bidirectional_lstm/) example. Evaluation The most widely-used evaluation measure for part-of-speech tagging is per-word accuracy, which is the percentage of words to which the tagger assigns the correct tag (according to the gold standard). This is one of the default metrics in Keras. One problem that you will encounter during evaluation is that the evaluation data contains words that you did not see (and did not add to your index) during training. The simplest solution to this problem is to introduce a special 'word' `<unk>` and replace each unknown word with this pseudoword. Part 1: Pre-process the data Before you can start to implement the network architecture as such, you will have to bring the tagged sentences from the gold-standard data into a form that can be used with the network. One important step in this is to map the words and tags (strings) to integers. Here is code that illustrates the idea:
###Code
word_to_index = {}
for tagged_sentence in train_data:
for word, tag in tagged_sentence:
if word not in word_to_index:
word_to_index[word] = len(word_to_index)
print('Number of unique words in the training data: {}'.format(len(word_to_index)))
print('Index of the word "hates": {}'.format(word_to_index['hates']))
###Output
Number of unique words in the training data: 19672
Index of the word "hates": 4579
###Markdown
Once you have indexes for the words and the tags, you can construct the input and the gold-standard output tensor required to train the network. Constructing the input tensorThe input tensor should be of shape $(N, n)$ where $N$ is the total number of sentences in the training data and $n$ is the length of the longest sentence. Note that Keras requires all sequences in an input tensor to have the same length, which means that you will have to pad all sequences to that length. You can use the helper function [`pad_sequences`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) for this, which by default will front-pad sequences with the value 0. It is essential then that you do not use this special padding value as the index of actual words. Constructing the target output tensorThe target output tensor should be of shape $(N, n, T)$ where $T$ is the number of unique tags in the training data, plus one to cater for the special padding value. The additional dimension corresponds to the fact that the softmax layer of the network will output one $T$-dimensional vector for each position of an input sentence. To construct this vector, you can use the helper function [`to_categorical`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical).
###Code
# Define a help function to build index from a list of words or tags, each word / tag will have a unique number
def build_index(strings, init=[]):
string_to_index = {s: i for i, s in enumerate(init)}
# Loop over strings in 'strings'
for string in strings:
# Check if string exists in variable 'string_to_index',
# if string does not exist, add a new element to 'string_to_index': the current length of 'string_to_index'
if string not in string_to_index:
string_to_index[string]=len(string_to_index)
return string_to_index
# Convert all words and tags in train_data to lists, start with empty lists and use '.append()'
# to add one word / tag at a time, similar to the cell below 'pre-process the data'
words, tags = [], []
for tagged_sentence in train_data:
for word,tag in tagged_sentence:
words.append(word)
tags.append(tag)
# Call the help function you made, to build an index for words (word_to_index), and one index for tags (tag_to_index)
word_to_index=build_index(words,['<pad>','<unk>'])
tag_to_index=build_index(tags,['<pad>'])
# Check number of words and tags
num_words = len(word_to_index)
num_tags = len(tag_to_index)
print(f'Number of unique words in the training data: {num_words}')
print(f'Number of unique tags in the training_data: {num_tags}')
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
# Make a function that converts the tagged sentences, word indices and tag indices to
# X and Y, that can be used when training the RNN
def encode(tagged_sentences, word_to_index, tag_to_index):
# Start with empty lists that will contain all training examples and corresponding output
X, Y = [], []
# Loop over tagged sentences
for current_tagged_sentence in tagged_sentences:
Xcurrent, Ycurrent = [], []
for word,tag in current_tagged_sentence:# Loop over words and tags in current sentence
if word not in word_to_index:
Xcurrent.append(word_to_index.get('<unk>'))#adding an unkown word index
else:
Xcurrent.append(word_to_index.get(word))#adding the index of the word
if tag not in tag_to_index:
Ycurrent.append(tag_to_index.get('<unk>'))#adding an unkown tag index
else:
Ycurrent.append(tag_to_index.get(tag))#adding the index of an exitsing tag
# Append X with Xcurrent, and Y with Ycurrent
X.append(Xcurrent)
Y.append(Ycurrent)
# Pad the sequences, so that all have the same length
X=pad_sequences(sequences=X,padding='post')
Y=pad_sequences(sequences=Y,padding='post')
# Convert labels to categorical, as you did in the CNN lab
Y=to_categorical(Y,num_classes=num_tags,dtype= 'float32')
return X, Y
# Use your 'encode' function to create X and Y from train_data, word_to_index, tag_to_index
X,Y=encode(train_data,word_to_index,tag_to_index)
# Print the shape of X and Y
print('Shape of X:',X.shape)
print('Shape of Y:',Y.shape)
###Output
Shape of X: (12543, 159)
Shape of Y: (12543, 159, 18)
###Markdown
Part 2: Construct the model To implement the network architecture, you need to find and instantiate the relevant building blocks from the Keras library. Note that Keras layers support a large number of optional parameters; use the default values unless you have a good reason not to. Two mandatory parameters that you will have to specify are the dimensionality of the embedding and the dimensionality of the output of the LSTM layer. The following values are reasonable starting points, but do try a number of different settings.* dimensionality of the embedding: 100* dimensionality of the output of the bidirectional LSTM layer: 100You will also have to choose an appropriate loss function. For training we recommend the Adam optimiser.
###Code
from tensorflow.keras import Sequential
# Import necessary layers
from tensorflow.keras.layers import Dense,Embedding, LSTM, Bidirectional
from tensorflow.keras.losses import categorical_crossentropy # not used directly; the loss is passed by name below
embedding_dim = 100
hidden_dim = 100
model = Sequential()
model.add(Embedding(input_dim=num_words,output_dim=embedding_dim))
model.add(Bidirectional(LSTM(units=hidden_dim, return_sequences=True))) # return one output per token so the shape matches Y (N, n, num_tags)
model.add(Dense(num_tags, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
# Print a summary of the model
model.summary()
type(dev_data)
type(train_data)
###Output
_____no_output_____
###Markdown
Part 3: Train the network The next step is to train the network. Use the following parameters:* number of epochs: 10* batch size: 32Training will print the average running loss on the training data after each minibatch. In addition to that, we ask you to also print the loss and accuracy on the development data after each epoch. You can do so by providing the `validation_data` argument to the `fit` method.Note that the `fit` method returns a [`History`](https://keras.io/callbacks/history) object that contains useful information about the training. We will use that information in the next step.
###Code
# Encode the development (validation data) using the 'encode' function you created before
batch_size=32
epochs=10
#splitting the dev data into Xval and Yval to validate the model during training
Xval,Yval=encode(dev_data,word_to_index,tag_to_index)
# Train the model and save the history, as you did in the DNN and CNN labs, provide validation data
history=model.fit(X,Y,validation_data = (Xval, Yval),batch_size = batch_size, epochs = epochs)
###Output
_____no_output_____
###Markdown
Part 4: Identify a good model The following code will plot the loss on the training data and the loss on the validation data after each epoch:
###Code
# Lets define a help function for plotting the training results
import matplotlib.pyplot as plt
def plot_results(history):
val_loss = history.history['val_loss']
acc = history.history['accuracy']
loss = history.history['loss']
val_acc = history.history['val_accuracy']
plt.figure(figsize=(10,4))
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(loss)
plt.plot(val_loss)
plt.legend(['Training','Validation'])
plt.figure(figsize=(10,4))
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(acc)
plt.plot(val_acc)
plt.legend(['Training','Validation'])
plt.show()
plot_results(history)
###Output
_____no_output_____
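###Markdown
An alternative to reading the stopping epoch off the curves by hand is to let Keras stop training automatically. This is only a sketch added for illustration, not part of the lab instructions: an EarlyStopping callback that monitors the validation loss and keeps the best weights.
###Code
# Sketch: stop training once val_loss stops improving (assumes the same model, X, Y, Xval, Yval as above)
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
# history = model.fit(X, Y, validation_data=(Xval, Yval),
#                     batch_size=batch_size, epochs=epochs, callbacks=[early_stop])
###Output
_____no_output_____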
###Markdown
Look at the plot and determine the epoch after which the model starts to overfit. Then, re-train your model using that many epochs and compute the accuracy of the tagger on the test data.
###Code
# Encode the test_data using the 'encode' function you created before
# Evaluate the model on test data, as you did in the DNN and CNN lab
# One possible completion (a sketch, reusing the functions and variables defined above):
Xtest, Ytest = encode(test_data, word_to_index, tag_to_index)
test_loss, test_acc = model.evaluate(Xtest, Ytest, batch_size=batch_size)
print('Test accuracy:', test_acc)
###Output
_____no_output_____ |
recursion_dynamic/n_pairs_parentheses/n_pairs_parentheses_solution.ipynb | ###Markdown
This notebook was prepared by [Rishi Rajasekaran](https://github.com/rishihot55). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Print all valid combinations of n-pairs of parentheses.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* None Test Cases* 0 -> ' '* 1 -> ()* 2 -> (()), ()()* 3 -> ((())), (()()), (())(), ()(()), ()()() AlgorithmLet `l` and `r` denote the number of left and right parentheses remaining at any given point. The algorithm makes use of the following conditions applied recursively:* Left braces can be inserted any time, as long as we do not exhaust them i.e. `l > 0`.* Right braces can be inserted, as long as the number of right braces remaining is greater than the left braces remaining i.e. `r > l`. Violation of the aforementioned condition produces an unbalanced string of parentheses.* If both left and right braces have been exhausted i.e. `l = 0 and r = 0`, then the resultant string produced is balanced.The algorithm can be rephrased as:* Base case: `l = 0 and r = 0` - Add the string generated to the result set* Case 1: `l > 0` - Add a left parenthesis to the parentheses string. - Call parentheses_util(l - 1, r, new_string, result_set)* Case 2: `r > l` - Add a right parenthesis to the parentheses string. - Call parentheses_util(l, r - 1, new_string, result_set)Complexity:* Time: `O(4^n/n^(3/2))`. See [Catalan numbers](https://en.wikipedia.org/wiki/Catalan_numberApplications_in_combinatorics)* Space complexity: `O(n)` (Due to the implicit call stack storing a maximum of 2n function calls) Code
###Code
def parentheses_util(no_left, no_right, pair_string, result):
if no_left == 0 and no_right == 0:
result.add(pair_string)
else:
if no_left > 0:
parentheses_util(no_left - 1, no_right, pair_string + '(', result)
if no_right > no_left:
parentheses_util(no_left, no_right - 1, pair_string + ')', result)
def pair_parentheses(n):
result_set = set()
if n == 0:
return result_set
parentheses_util(n, n, '', result_set)
return result_set
###Output
_____no_output_____
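###Markdown
A quick sanity check of the Catalan-number claim above, added for illustration: the number of strings produced for n pairs should equal C(2n, n) / (n + 1). math.comb requires Python 3.8+.
###Code
# Sketch: compare the number of generated strings with the Catalan number formula
from math import comb
for n in range(1, 7):
    catalan = comb(2 * n, n) // (n + 1)
    print(n, len(pair_parentheses(n)), catalan)
###Output
_____no_output_____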
###Markdown
Unit Test
###Code
%%writefile test_n_pairs_parentheses.py
from nose.tools import assert_equal
class TestPairParentheses(object):
def test_pair_parentheses(self, solution):
assert_equal(solution(0), set([]))
assert_equal(solution(1), set(['()']))
assert_equal(solution(2), set(['(())',
'()()']))
assert_equal(solution(3), set(['((()))',
'(()())',
'(())()',
'()(())',
'()()()']))
print('Success: test_pair_parentheses')
def main():
test = TestPairParentheses()
test.test_pair_parentheses(pair_parentheses)
if __name__ == '__main__':
main()
%run -i test_n_pairs_parentheses.py
###Output
Success: test_pair_parentheses
###Markdown
This notebook was prepared by [Rishi Rajasekaran](https://github.com/rishihot55). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Find all valid combinations of n-pairs of parentheses.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Is the input an integer representing the number of pairs? * Yes* Can we assume the inputs are valid? * No* Is the output a list of valid combinations? * Yes* Should the output have duplicates? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Negative -> Exception* 0 -> []* 1 -> ['()']* 2 -> ['(())', '()()']* 3 -> ['((()))', '(()())', '(())()', '()(())', '()()()'] AlgorithmLet `l` and `r` denote the number of left and right parentheses remaining at any given point. The algorithm makes use of the following conditions applied recursively:* Left braces can be inserted any time, as long as we do not exhaust them i.e. `l > 0`.* Right braces can be inserted, as long as the number of right braces remaining is greater than the left braces remaining i.e. `r > l`. Violation of the aforementioned condition produces an unbalanced string of parentheses.* If both left and right braces have been exhausted i.e. `l = 0 and r = 0`, then the resultant string produced is balanced.The algorithm can be rephrased as:* Base case: `l = 0 and r = 0` - Add the string generated to the result set* Case 1: `l > 0` - Add a left parenthesis to the parentheses string. - Recurse (l - 1, r, new_string, result_set)* Case 2: `r > l` - Add a right parenthesis to the parentheses string. - Recurse (l, r - 1, new_string, result_set)Complexity:* Time: `O(4^n/n^(3/2))`, see [Catalan numbers](https://en.wikipedia.org/wiki/Catalan_numberApplications_in_combinatorics) - 1, 1, 2, 5, 14, 42, 132...* Space complexity: `O(n)`, due to the implicit call stack storing a maximum of 2n function calls) Code
###Code
class Parentheses(object):
def find_pair(self, num_pairs):
if num_pairs is None:
raise TypeError('num_pairs cannot be None')
if num_pairs < 0:
raise ValueError('num_pairs cannot be < 0')
if not num_pairs:
return []
results = []
curr_results = []
self._find_pair(num_pairs, num_pairs, curr_results, results)
return results
def _find_pair(self, nleft, nright, curr_results, results):
if nleft == 0 and nright == 0:
results.append(''.join(curr_results))
else:
if nleft >= 0:
self._find_pair(nleft-1, nright, curr_results+['('], results)
if nright > nleft:
self._find_pair(nleft, nright-1, curr_results+[')'], results)
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_n_pairs_parentheses.py
import unittest
class TestPairParentheses(unittest.TestCase):
def test_pair_parentheses(self):
parentheses = Parentheses()
self.assertRaises(TypeError, parentheses.find_pair, None)
self.assertRaises(ValueError, parentheses.find_pair, -1)
self.assertEqual(parentheses.find_pair(0), [])
self.assertEqual(parentheses.find_pair(1), ['()'])
self.assertEqual(parentheses.find_pair(2), ['(())',
'()()'])
self.assertEqual(parentheses.find_pair(3), ['((()))',
'(()())',
'(())()',
'()(())',
'()()()'])
print('Success: test_pair_parentheses')
def main():
test = TestPairParentheses()
test.test_pair_parentheses()
if __name__ == '__main__':
main()
%run -i test_n_pairs_parentheses.py
###Output
Success: test_pair_parentheses
###Markdown
This notebook was prepared by [Rishi Rajasekaran](https://github.com/rishihot55). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Print all valid combinations of n-pairs of parentheses.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* None Test Cases* 0 -> ' '* 1 -> ()* 2 -> (()), ()()* 3 -> ((())), (()()), (())(), ()(()), ()()() AlgorithmLet `l` and `r` denote the number of left and right parentheses remaining at any given point. The algorithm makes use of the following conditions applied recursively:* Left braces can be inserted any time, as long as we do not exhaust them i.e. `l > 0`.* Right braces can be inserted, as long as the number of right braces remaining is greater than the left braces remaining i.e. `r > l`. Violation of the aforementioned condition produces an unbalanced string of parentheses.* If both left and right braces have been exhausted i.e. `l = 0 and r = 0`, then the resultant string produced is balanced.The algorithm can be rephrased as:* Base case: `l = 0 and r = 0` - Add the string generated to the result set* Case 1: `l > 0` - Add a left parenthesis to the parentheses string. - Call parentheses_util(l - 1, r, new_string, result_set)* Case 2: `r > l` - Add a right parenthesis to the parentheses string. - Call parentheses_util(l, r - 1, new_string, result_set)Complexity:* Time: `O(4^n/n^(3/2))`. See [Catalan numbers](https://en.wikipedia.org/wiki/Catalan_numberApplications_in_combinatorics)* Space complexity: `O(n)` (Due to the implicit call stack storing a maximum of 2n function calls) Code
###Code
def parentheses_util(no_left, no_right, pair_string, result):
if no_left == 0 and no_right == 0:
result.add(pair_string)
else:
if no_left > 0:
parentheses_util(no_left - 1, no_right, pair_string + '(', result)
if no_right > no_left:
parentheses_util(no_left, no_right - 1, pair_string + ')', result)
def pair_parentheses(n):
result_set = set()
if n == 0:
return result_set
parentheses_util(n, n, '', result_set)
return result_set
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Rishi Rajasekaran](https://github.com/rishihot55). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Find all valid combinations of n-pairs of parentheses.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Is the input an integer representing the number of pairs? * Yes* Can we assume the inputs are valid? * No* Is the output a list of valid combinations? * Yes* Should the output have duplicates? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Negative -> Exception* 0 -> []* 1 -> ['()']* 2 -> ['(())', '()()']* 3 -> ['((()))', '(()())', '(())()', '()(())', '()()()'] AlgorithmLet `l` and `r` denote the number of left and right parentheses remaining at any given point. The algorithm makes use of the following conditions applied recursively:* Left braces can be inserted any time, as long as we do not exhaust them i.e. `l > 0`.* Right braces can be inserted, as long as the number of right braces remaining is greater than the left braces remaining i.e. `r > l`. Violation of the aforementioned condition produces an unbalanced string of parentheses.* If both left and right braces have been exhausted i.e. `l = 0 and r = 0`, then the resultant string produced is balanced.The algorithm can be rephrased as:* Base case: `l = 0 and r = 0` - Add the string generated to the result set* Case 1: `l > 0` - Add a left parenthesis to the parentheses string. - Recurse (l - 1, r, new_string, result_set)* Case 2: `r > l` - Add a right parenthesis to the parentheses string. - Recurse (l, r - 1, new_string, result_set)Complexity:* Time: `O(4^n/n^(3/2))`, see [Catalan numbers](https://en.wikipedia.org/wiki/Catalan_numberApplications_in_combinatorics) - 1, 1, 2, 5, 14, 42, 132...* Space complexity: `O(n)`, due to the implicit call stack storing a maximum of 2n function calls) Code
###Code
class Parentheses(object):
def find_pair(self, num_pairs):
if num_pairs is None:
raise TypeError('num_pairs cannot be None')
if num_pairs < 0:
raise ValueError('num_pairs cannot be < 0')
if not num_pairs:
return []
results = []
curr_results = []
self._find_pair(num_pairs, num_pairs, curr_results, results)
return results
def _find_pair(self, nleft, nright, curr_results, results):
if nleft == 0 and nright == 0:
results.append(''.join(curr_results))
else:
if nleft >= 0:
self._find_pair(nleft-1, nright, curr_results+['('], results)
if nright > nleft:
self._find_pair(nleft, nright-1, curr_results+[')'], results)
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_n_pairs_parentheses.py
from nose.tools import assert_equal, assert_raises
class TestPairParentheses(object):
def test_pair_parentheses(self):
parentheses = Parentheses()
assert_raises(TypeError, parentheses.find_pair, None)
assert_raises(ValueError, parentheses.find_pair, -1)
assert_equal(parentheses.find_pair(0), [])
assert_equal(parentheses.find_pair(1), ['()'])
assert_equal(parentheses.find_pair(2), ['(())',
'()()'])
assert_equal(parentheses.find_pair(3), ['((()))',
'(()())',
'(())()',
'()(())',
'()()()'])
print('Success: test_pair_parentheses')
def main():
test = TestPairParentheses()
test.test_pair_parentheses()
if __name__ == '__main__':
main()
%run -i test_n_pairs_parentheses.py
###Output
Success: test_pair_parentheses
|
Sistema Completo.ipynb | ###Markdown
__Training of the system fitted per user.__ The system must train and fit a model for each user. The trained model will be saved to a folder.
###Code
models = []
for USER in manager.users:
print(f"Training models for user {USER}")
# 1. Train the classification and regression models.
# Here the models are specified manually, but they could be chosen using the scripts
# of the model evaluation system.
print('Training classification and regression models.')
classifier, regressor = train_evaluation_system(manager, USER)
evaluator_model = Valuer(classification_model = classifier, regression_model = regressor)
# 2. Train the generative model.
print('Training generative model.')
skorch_model, scaler = train_generative_system(manager, USER, verbose = False)
models.append(skorch_model) # para gerar as curvas de aprendizado
generative_model = skorch_model.module_
# 3. Create the recommendation system adapted to the user
print("Creating recommender model.")
recommender = Recommender(generativeModel = generative_model,
evaluationModel = evaluator_model,
scaler = scaler,
user = USER)
print()
# 4. Save the model
path = Path(OUTPUT_PATH) / USER
path.mkdir(parents = True, exist_ok = True)
print(f"Saving recommender model to: {path}")
filehandler = open(path / 'recommender.pickle', "wb")
pickle.dump(recommender, filehandler)
print(f"Finishing model adjusment for user {USER}")
print()
plt.style.use('ggplot')
fig, axs = plt.subplots(ncols = 2, figsize = (15, 5))
train_loss = models[0].history[:, 'train_loss']
valid_loss = models[0].history[:, 'valid_loss']
X = range(len(train_loss))
axs[0].plot(X, train_loss, label = 'Erro de Treinamento', linewidth = 3.5)
axs[0].plot(X, valid_loss, label = 'Erro de Validação', linewidth = 3.5)
axs[0].set_ylabel("Custo", fontsize = 22)
axs[0].set_xlabel("Iteração", fontsize = 22)
axs[0].set_title(f"Curvas de aprendizado do modelo generativo. {manager.users[0]}", fontsize = 15)
axs[0].tick_params(axis='both', which='major', labelsize=20)
axs[0].legend(fontsize = 15)
train_loss = models[1].history[:, 'train_loss']
valid_loss = models[1].history[:, 'valid_loss']
X = range(len(train_loss))
axs[1].plot(X, train_loss, label = 'Erro de Treinamento', linewidth = 3.5)
axs[1].plot(X, valid_loss, label = 'Erro de Validação', linewidth = 3.5)
axs[1].set_ylabel("Custo", fontsize = 22)
axs[1].set_xlabel("Iteração", fontsize = 22)
axs[1].set_title(f"Curvas de aprendizado do modelo generativo. {manager.users[1]}", fontsize = 15)
axs[1].tick_params(axis='both', which='major', labelsize=20)
axs[1].legend(fontsize = 15)
fig.savefig("curvas_aprendizado_autoencoder.png", bbox_inches = 'tight')
###Output
_____no_output_____
###Markdown
__Test of the generative system loaded from memory after fitting.__
###Code
USER = manager.users[0]
model_path = Path(OUTPUT_PATH) / manager.users[0] / 'recommender.pickle'
file = open(model_path, "rb")
recommender = pickle.load(file)
recommendation_list = recommender.getMusicList(20, manager.data.drop(columns = ['id_cliente', 'data_curtida', 'n_reproducao', 'gostou']))
recommendation_list.drop_duplicates()
fig, ax = plt.subplots(ncols = 2, figsize = (15, 5))
user_data_liked = manager.user_data(USER)
user_data_liked = user_data_liked[user_data_liked['gostou'] == 1]
recommendation_list['VolMedio'].plot.density(ax = ax[0], label = 'Lista de Recomendação')
user_data_liked['VolMedio'].plot.density(ax = ax[0], label = 'Dados do Usuário')
recommendation_list['PctCantada'].plot.density(ax = ax[1], label = 'Lista de Recomendação')
user_data_liked['PctCantada'].plot.density(ax = ax[1], label = 'Dados do Usuário')
ax[0].legend(fontsize = 12)
ax[1].legend(fontsize = 12)
ax[0].set_xlabel("Volume Médio", fontsize = 22)
ax[1].set_xlabel("Porcentagem com Vocal", fontsize = 22)
ax[0].set_xlim([0, 30.0])
ax[1].set_xlim([0, 1.0])
ax[0].tick_params(axis='both', which='major', labelsize=20)
ax[1].tick_params(axis='both', which='major', labelsize=20)
ax[0].set_title(f"Curvas de densidade do Volume Médio. {manager.users[0]}", fontsize = 15)
ax[1].set_title(f"Curvas de densidade da Porcentagem de Vocal. {manager.users[0]}", fontsize = 15)
fig.savefig("densidades_lista_recomendacao.png", bbox_inches = 'tight')
###Output
_____no_output_____
###Markdown
__Evaluation of the system's performance.__
###Code
USER = manager.users[0]
model_path = Path(OUTPUT_PATH) / manager.users[0] / 'recommender.pickle'
file = open(model_path, "rb")
recommender = pickle.load(file)
classification_errors = []
regression_errors = []
for iteration in range(500):
a, b = recommender.test_model(manager.user_data(USER))
classification_errors.append(a)
regression_errors.append(b)
np.array(classification_errors).mean(), np.array(classification_errors).std()
np.array(regression_errors).mean(), np.array(regression_errors).std()
USER = manager.users[1]
model_path = Path(OUTPUT_PATH) / manager.users[1] / 'recommender.pickle'
file = open(model_path, "rb")
recommender = pickle.load(file)
classification_errors = []
regression_errors = []
for iteration in range(500):
a, b = recommender.test_model(manager.user_data(USER))
classification_errors.append(a)
regression_errors.append(b)
np.array(classification_errors).mean(), np.array(classification_errors).std()
np.array(regression_errors).mean(), np.array(regression_errors).std()
###Output
_____no_output_____ |
Lec9.ipynb | ###Markdown
Python libraries are installed; now import them
###Code
import pymongo
from pymongo import MongoClient
import json
import tweepy
import twitter
from pprint import pprint
import configparser
import pandas as pd
###Output
_____no_output_____
###Markdown
Load Authorization Information
###Code
config = configparser.ConfigParser()
config.read('config.ini')
CONSUMER_KEY = config['mytwitter']['api_key']
CONSUMER_SECRET = config['mytwitter']['api_secrete']
OAUTH_TOKEN = config['mytwitter']['access_token']
OATH_TOKEN_SECRET = config['mytwitter']['access_secrete']
mongod_connect = config['mymongo']['connection']
###Output
_____no_output_____
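###Markdown
For reference, the config.ini that this code expects would look roughly like the sketch below; the section and key names are taken from the code above, and the values are placeholders to replace with your own credentials.
###Code
# Sketch: build a config.ini with the sections/keys read above (placeholder values)
example_config = configparser.ConfigParser()
example_config['mytwitter'] = {'api_key': 'YOUR_API_KEY',
                               'api_secrete': 'YOUR_API_SECRET',
                               'access_token': 'YOUR_ACCESS_TOKEN',
                               'access_secrete': 'YOUR_ACCESS_SECRET'}
example_config['mymongo'] = {'connection': 'YOUR_MONGODB_CONNECTION_STRING'}
# with open('config.ini', 'w') as f: # uncomment to write the file (would overwrite an existing one)
#     example_config.write(f)
###Output
_____no_output_____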
###Markdown
Connect to MongoDB Cluster
###Code
client = MongoClient(mongod_connect)
db = client.demo # use or create a database named demo
tweet_collection = db.tweet_collection #use or create a collection named tweet_collection
tweet_collection.create_index([("id", pymongo.ASCENDING)],unique = True) # make sure the collected tweets are unique
###Output
_____no_output_____
###Markdown
Use the Rest API to Collect Tweets Authorization
###Code
rest_auth = twitter.oauth.OAuth(OAUTH_TOKEN,OATH_TOKEN_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
rest_api = twitter.Twitter(auth=rest_auth)
###Output
_____no_output_____
###Markdown
Define query for rest API
###Code
count = 100 #number of returned tweets, default and max is 100
geocode = "38.4392897,-78.9412224,50mi" # defin the location, in Harrisonburg, VA
q = "election" #define the keywords, tweets contain election
###Output
_____no_output_____
###Markdown
The returned tweets will contain "election" and be located near Harrisonburg, VA
###Code
search_results = rest_api.search.tweets( count=count,q=q, geocode=geocode) #you can use both q and geocode
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at'])# print the date of the collected tweets
except:
pass
###Output
'Tue Nov 02 19:42:13 +0000 2021'
'Tue Nov 02 19:37:15 +0000 2021'
'Tue Nov 02 19:35:47 +0000 2021'
'Tue Nov 02 19:32:12 +0000 2021'
'Tue Nov 02 19:32:10 +0000 2021'
'Tue Nov 02 19:30:32 +0000 2021'
'Tue Nov 02 19:30:00 +0000 2021'
'Tue Nov 02 19:27:38 +0000 2021'
'Tue Nov 02 19:17:19 +0000 2021'
'Tue Nov 02 19:12:16 +0000 2021'
'Tue Nov 02 19:03:15 +0000 2021'
'Tue Nov 02 18:49:27 +0000 2021'
'Tue Nov 02 18:49:27 +0000 2021'
'Tue Nov 02 18:44:51 +0000 2021'
'Tue Nov 02 18:42:59 +0000 2021'
'Tue Nov 02 18:38:57 +0000 2021'
'Tue Nov 02 18:38:01 +0000 2021'
'Tue Nov 02 18:35:28 +0000 2021'
'Tue Nov 02 18:33:48 +0000 2021'
'Tue Nov 02 18:31:06 +0000 2021'
'Tue Nov 02 18:30:28 +0000 2021'
'Tue Nov 02 18:26:18 +0000 2021'
'Tue Nov 02 18:24:35 +0000 2021'
'Tue Nov 02 18:12:19 +0000 2021'
'Tue Nov 02 18:07:27 +0000 2021'
'Tue Nov 02 18:05:11 +0000 2021'
'Tue Nov 02 18:02:30 +0000 2021'
'Tue Nov 02 18:00:54 +0000 2021'
'Tue Nov 02 17:53:10 +0000 2021'
'Tue Nov 02 17:50:08 +0000 2021'
'Tue Nov 02 17:47:18 +0000 2021'
'Tue Nov 02 17:44:44 +0000 2021'
'Tue Nov 02 17:41:43 +0000 2021'
'Tue Nov 02 17:35:11 +0000 2021'
'Tue Nov 02 17:25:32 +0000 2021'
'Tue Nov 02 17:25:25 +0000 2021'
'Tue Nov 02 17:23:34 +0000 2021'
'Tue Nov 02 17:18:47 +0000 2021'
'Tue Nov 02 17:15:23 +0000 2021'
'Tue Nov 02 17:14:14 +0000 2021'
'Tue Nov 02 17:12:22 +0000 2021'
'Tue Nov 02 17:08:41 +0000 2021'
'Tue Nov 02 17:00:02 +0000 2021'
'Tue Nov 02 16:59:07 +0000 2021'
'Tue Nov 02 16:57:43 +0000 2021'
'Tue Nov 02 16:55:00 +0000 2021'
'Tue Nov 02 16:54:39 +0000 2021'
'Tue Nov 02 16:53:24 +0000 2021'
'Tue Nov 02 16:53:22 +0000 2021'
'Tue Nov 02 16:52:52 +0000 2021'
'Tue Nov 02 16:46:40 +0000 2021'
'Tue Nov 02 16:44:01 +0000 2021'
'Tue Nov 02 16:43:00 +0000 2021'
'Tue Nov 02 16:41:30 +0000 2021'
'Tue Nov 02 16:37:30 +0000 2021'
'Tue Nov 02 16:35:28 +0000 2021'
'Tue Nov 02 16:13:30 +0000 2021'
'Tue Nov 02 16:12:32 +0000 2021'
'Tue Nov 02 16:11:17 +0000 2021'
'Tue Nov 02 16:05:59 +0000 2021'
'Tue Nov 02 16:02:08 +0000 2021'
'Tue Nov 02 16:01:17 +0000 2021'
'Tue Nov 02 16:00:22 +0000 2021'
'Tue Nov 02 15:58:41 +0000 2021'
'Tue Nov 02 15:56:47 +0000 2021'
'Tue Nov 02 15:54:42 +0000 2021'
'Tue Nov 02 15:53:50 +0000 2021'
'Tue Nov 02 15:53:08 +0000 2021'
'Tue Nov 02 15:52:09 +0000 2021'
'Tue Nov 02 15:48:39 +0000 2021'
'Tue Nov 02 15:44:57 +0000 2021'
'Tue Nov 02 15:40:04 +0000 2021'
'Tue Nov 02 15:34:46 +0000 2021'
'Tue Nov 02 15:30:35 +0000 2021'
'Tue Nov 02 15:28:56 +0000 2021'
'Tue Nov 02 15:22:59 +0000 2021'
'Tue Nov 02 15:19:52 +0000 2021'
'Tue Nov 02 15:19:44 +0000 2021'
'Tue Nov 02 15:10:47 +0000 2021'
'Tue Nov 02 15:07:50 +0000 2021'
'Tue Nov 02 15:07:12 +0000 2021'
'Tue Nov 02 15:02:17 +0000 2021'
'Tue Nov 02 15:01:29 +0000 2021'
'Tue Nov 02 15:00:52 +0000 2021'
'Tue Nov 02 14:54:28 +0000 2021'
'Tue Nov 02 14:51:39 +0000 2021'
'Tue Nov 02 14:49:11 +0000 2021'
'Tue Nov 02 14:48:25 +0000 2021'
'Tue Nov 02 14:46:30 +0000 2021'
'Tue Nov 02 14:44:41 +0000 2021'
'Tue Nov 02 14:44:29 +0000 2021'
'Tue Nov 02 14:44:25 +0000 2021'
'Tue Nov 02 14:43:03 +0000 2021'
'Tue Nov 02 14:41:22 +0000 2021'
'Tue Nov 02 14:41:18 +0000 2021'
'Tue Nov 02 14:28:51 +0000 2021'
'Tue Nov 02 14:26:02 +0000 2021'
'Tue Nov 02 14:24:42 +0000 2021'
'Tue Nov 02 14:23:49 +0000 2021'
'Tue Nov 02 14:20:57 +0000 2021'
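###Markdown
Once tweets are stored, the collection can be queried back with pymongo. A minimal sketch, assuming the tweet_collection defined above:
###Code
# Sketch: how many tweets are stored, and a peek at one document's text and date
print(tweet_collection.count_documents({}))
one_tweet = tweet_collection.find_one({}, {"text": 1, "created_at": 1, "_id": 0})
pprint(one_tweet)
###Output
_____no_output_____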
###Markdown
Use Rest API to Collect Tweets
###Code
rest_auth = twitter.oauth.OAuth(OAUTH_TOKEN,OATH_TOKEN_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
rest_api = twitter.Twitter(auth=rest_auth)
count = 100 #number of returned tweets, default and max is 100
geocode = "38.4392897,-78.9412224,50mi" # defin the location, in Harrisonburg, VA
q = "election" #define the keywords, tweets contain election
search_results = rest_api.search.tweets( count=count,q=q, geocode=geocode) #you can use both q and geocode
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at'])# print the date of the collected tweets
except:
pass
###Output
'Mon Nov 01 21:00:59 +0000 2021'
'Mon Nov 01 20:53:30 +0000 2021'
'Mon Nov 01 20:53:02 +0000 2021'
'Mon Nov 01 20:52:26 +0000 2021'
'Mon Nov 01 20:37:07 +0000 2021'
'Mon Nov 01 20:22:42 +0000 2021'
'Mon Nov 01 20:05:11 +0000 2021'
'Mon Nov 01 19:25:39 +0000 2021'
'Mon Nov 01 19:13:57 +0000 2021'
'Mon Nov 01 19:08:46 +0000 2021'
'Mon Nov 01 19:07:19 +0000 2021'
'Mon Nov 01 19:00:47 +0000 2021'
'Mon Nov 01 18:49:22 +0000 2021'
'Mon Nov 01 18:33:51 +0000 2021'
'Mon Nov 01 18:30:09 +0000 2021'
'Mon Nov 01 18:17:19 +0000 2021'
'Mon Nov 01 18:08:49 +0000 2021'
'Mon Nov 01 18:02:33 +0000 2021'
'Mon Nov 01 18:00:10 +0000 2021'
'Mon Nov 01 17:59:53 +0000 2021'
'Mon Nov 01 17:57:53 +0000 2021'
'Mon Nov 01 17:27:06 +0000 2021'
'Mon Nov 01 17:18:56 +0000 2021'
'Mon Nov 01 17:12:46 +0000 2021'
'Mon Nov 01 17:07:01 +0000 2021'
'Mon Nov 01 16:57:09 +0000 2021'
'Mon Nov 01 16:48:57 +0000 2021'
'Mon Nov 01 16:45:07 +0000 2021'
'Mon Nov 01 16:42:37 +0000 2021'
'Mon Nov 01 16:40:18 +0000 2021'
'Mon Nov 01 16:29:51 +0000 2021'
'Mon Nov 01 16:29:08 +0000 2021'
'Mon Nov 01 16:28:58 +0000 2021'
'Mon Nov 01 16:05:00 +0000 2021'
'Mon Nov 01 16:02:05 +0000 2021'
'Mon Nov 01 15:38:09 +0000 2021'
'Mon Nov 01 15:35:33 +0000 2021'
'Mon Nov 01 15:12:24 +0000 2021'
'Mon Nov 01 15:04:06 +0000 2021'
'Mon Nov 01 14:59:37 +0000 2021'
'Mon Nov 01 14:32:26 +0000 2021'
'Mon Nov 01 14:30:26 +0000 2021'
'Mon Nov 01 13:51:08 +0000 2021'
'Mon Nov 01 13:47:27 +0000 2021'
'Mon Nov 01 13:38:55 +0000 2021'
'Mon Nov 01 13:36:56 +0000 2021'
'Mon Nov 01 13:13:38 +0000 2021'
'Mon Nov 01 13:09:25 +0000 2021'
'Mon Nov 01 13:03:43 +0000 2021'
'Mon Nov 01 12:56:32 +0000 2021'
'Mon Nov 01 12:55:41 +0000 2021'
'Mon Nov 01 12:47:27 +0000 2021'
'Mon Nov 01 12:47:25 +0000 2021'
'Mon Nov 01 12:43:24 +0000 2021'
'Mon Nov 01 12:42:01 +0000 2021'
'Mon Nov 01 12:41:47 +0000 2021'
'Mon Nov 01 12:29:40 +0000 2021'
'Mon Nov 01 12:28:45 +0000 2021'
'Mon Nov 01 12:18:11 +0000 2021'
'Mon Nov 01 12:12:53 +0000 2021'
'Mon Nov 01 12:04:15 +0000 2021'
'Mon Nov 01 12:02:05 +0000 2021'
'Mon Nov 01 10:00:31 +0000 2021'
'Mon Nov 01 06:55:33 +0000 2021'
'Mon Nov 01 02:52:58 +0000 2021'
'Mon Nov 01 02:52:55 +0000 2021'
'Mon Nov 01 02:52:41 +0000 2021'
'Mon Nov 01 02:52:40 +0000 2021'
'Mon Nov 01 02:40:00 +0000 2021'
'Mon Nov 01 01:19:06 +0000 2021'
'Mon Nov 01 00:48:40 +0000 2021'
'Mon Nov 01 00:31:28 +0000 2021'
'Sun Oct 31 23:42:36 +0000 2021'
'Sun Oct 31 23:41:28 +0000 2021'
'Sun Oct 31 23:36:00 +0000 2021'
'Sun Oct 31 23:34:29 +0000 2021'
'Sun Oct 31 23:24:49 +0000 2021'
'Sun Oct 31 23:23:36 +0000 2021'
'Sun Oct 31 23:22:06 +0000 2021'
'Sun Oct 31 22:47:38 +0000 2021'
'Sun Oct 31 21:56:21 +0000 2021'
'Sun Oct 31 21:45:29 +0000 2021'
'Sun Oct 31 21:36:54 +0000 2021'
'Sun Oct 31 21:33:15 +0000 2021'
'Sun Oct 31 20:21:24 +0000 2021'
'Sun Oct 31 20:06:35 +0000 2021'
'Sun Oct 31 19:13:01 +0000 2021'
'Sun Oct 31 19:05:07 +0000 2021'
'Sun Oct 31 18:53:15 +0000 2021'
'Sun Oct 31 18:43:43 +0000 2021'
'Sun Oct 31 18:41:39 +0000 2021'
'Sun Oct 31 18:16:00 +0000 2021'
'Sun Oct 31 17:57:46 +0000 2021'
'Sun Oct 31 17:24:00 +0000 2021'
'Sun Oct 31 17:11:33 +0000 2021'
'Sun Oct 31 17:08:45 +0000 2021'
'Sun Oct 31 16:36:12 +0000 2021'
'Sun Oct 31 16:21:43 +0000 2021'
'Sun Oct 31 16:20:05 +0000 2021'
'Sun Oct 31 16:07:03 +0000 2021'
###Markdown
Collect Tweets into MongoDB Install Python libraries You may need to restart your Jupyter Notebook instance after installing those libraries.
###Code
!pip install pymongo
!pip install pymongo[srv]
!pip install dnspython
!pip install tweepy
!pip install twitter
###Output
Collecting twitter
Downloading twitter-1.18.0-py2.py3-none-any.whl (54 kB)
|████████████████████████████████| 54 kB 491 kB/s eta 0:00:01
Installing collected packages: twitter
Successfully installed twitter-1.18.0
WARNING: You are using pip version 20.0.2; however, version 20.2.4 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/python3/bin/python -m pip install --upgrade pip' command.
###Markdown
Import Python libraries
###Code
import pymongo
from pymongo import MongoClient
import json
import tweepy
import twitter
from pprint import pprint
import configparser
import pandas as pd
###Output
_____no_output_____
###Markdown
Load the Authorization Info Save database connection info and API keys in a config.ini file and use configparser to load the authorization info.
###Code
config = configparser.ConfigParser()
config.read('config.ini')
CONSUMER_KEY = config['mytwitter']['api_key']
CONSUMER_SECRET = config['mytwitter']['api_secrete']
OAUTH_TOKEN = config['mytwitter']['access_token']
OATH_TOKEN_SECRET = config['mytwitter']['access_secrete']
mongod_connect = config['mymongo']['connection']
###Output
_____no_output_____
###Markdown
Connect to the MongoDB Cluster
###Code
client = MongoClient(mongod_connect)
db = client.gp25 # use or create a database named gp25
tweet_collection = db.tweet_collection #use or create a collection named tweet_collection
tweet_collection.create_index([("id", pymongo.ASCENDING)],unique = True) # make sure the collected tweets are unique
###Output
_____no_output_____
###Markdown
Use the Streaming API to Collect Tweets Authorize the Stream API
###Code
stream_auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
stream_auth.set_access_token(OAUTH_TOKEN, OATH_TOKEN_SECRET)
strem_api = tweepy.API(stream_auth)
###Output
_____no_output_____
###Markdown
Define the query for the Stream API
###Code
track = ['election'] # define the keywords, tweets contain election
locations = [-78.9326449,38.4150904,-78.8816972,38.4450731] #define the location, in Harrisonburg, VA
###Output
_____no_output_____
###Markdown
The collected tweets will contain 'election' OR be located in Harrisonburg, VA
###Code
class MyStreamListener(tweepy.StreamListener):
def on_status(self, status):
print (status.id_str)
try:
tweet_collection.insert_one(status._json)
except:
pass
def on_error(self, status_code):
if status_code == 420:
#returning False in on_data disconnects the stream
return False
myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth = strem_api.auth, listener=myStreamListener)
myStream.filter(track=track)# (locations = locations) #Use either track or locations
###Output
1326627846752768000
1326627846756818944
1326627846807146496
1326627847168020482
1326627847046303746
1326627847209951239
1326627847172202501
1326627847134302208
1326627847482593280
1326627847482433536
1326627847390322689
1326627847386099721
1326627847570681858
1326627847654563851
1326627847897800708
1326627847847342080
1326627848002523136
1326627847960662018
1326627848170295296
1326627848258543617
1326627848329814017
1326627848329850881
1326627848187224065
1326627848459853825
1326627848493309952
1326627848564641793
1326627848539549697
1326627848833159172
1326627848912859136
1326627848983994369
1326627848954634242
1326627849034403841
1326627849105629185
1326627849185488897
1326627849252593666
1326627849051246593
1326627849332092928
1326627849386647552
1326627849365823488
1326627849416175616
1326627849416085506
1326627849642663936
1326627849659314176
1326627849709740033
1326627849852235776
1326627849868988417
1326627849848016896
1326627849760083970
1326627849923538944
1326627848002613261
1326627850124980227
1326627850187927552
1326627850150154240
1326627850204700672
1326627850284363776
1326627850246479872
1326627850393415680
1326627850405998595
1326627850569588736
1326627850707984387
1326627850766733317
1326627850724597760
1326627850770808834
1326627850900926464
1326627850829492224
1326627850854797316
1326627851127414784
1326627851139952641
1326627851324502016
1326627851387461637
1326627851471302659
1326627851567845378
1326627851362213889
1326627851391684614
1326627851639119872
1326627851580436481
1326627851626569728
1326627851731423239
1326627851827773443
1326627851907571714
1326627851840364545
1326627851810955264
1326627851882405890
1326627852066967553
1326627852041777153
1326627852129771520
1326627851651739648
1326627852117274625
1326627852259889154
1326627852410761216
1326627852578660352
1326627852008136704
1326627852750626816
1326627852855382017
1326627852884828160
1326627852909858816
1326627852779982852
1326627852998078466
1326627852926607361
1326627853035859970
1326627854357041159
1326627854411583498
1326627854449209344
1326627854487080961
1326627854638059521
1326627854612881410
1326627854650650627
1326627854717612032
1326627854357057537
1326627854671601666
1326627854281551873
1326627854877032448
1326627854923259906
1326627854801571844
1326627855191695362
1326627855288164353
1326627855225253890
1326627855330136067
1326627855384645632
1326627855418060801
1326627855346905091
1326627855514562560
1326627855573397507
1326627855527256071
1326627855573381121
1326627855669719042
1326627855581802496
1326627855544053766
1326627855866925065
1326627855912980480
1326627855833440257
1326627855988613120
1326627856047337474
1326627856101695488
1326627856005308416
1326627856164663296
1326627856198361094
1326627856273846272
1326627856244383744
1326627856273842180
1326627856164728833
1326627856403722241
1326627856391303169
1326627856567382019
1326627856592609282
1326627856592457729
1326627856684888064
1326627856898777095
1326627857007845378
1326627857083330565
1326627858421313536
1326627858446491649
1326627858391863296
1326627858467450881
1326627858416988160
1326627858626867201
1326627858794438656
1326627857779585025
1326627859197276163
1326627859037827074
1326627859302141952
1326627859411177473
1326627859478302722
1326627859432148992
1326627859503476739
1326627859650080768
1326627859717386242
1326627859868217344
1326627860056952832
1326627860149395464
1326627860069617665
1326627860094701568
1326627860216504320
1326627860480741376
1326627860371689472
1326627860518461440
1326627860598190083
1326627860724011011
1326627860778344450
1326627860795318273
1326627860862349312
1326627860908548096
1326627860937928707
1326627861013389312
1326627861013426187
1326627861084704769
1326627861097140224
1326627861189423105
1326627861235720192
1326627861248303105
1326627861176872961
1326627860849676288
1326627861437042688
1326627861369741312
1326627861445320704
1326627861575458817
1326627861445406720
1326627861470601228
1326627861613211648
1326627861696970752
1326627862732918784
1326627862632402946
1326627862745583618
1326627862762442752
1326627862854688770
1326627862582079488
1326627862934253568
1326627863106359300
1326627863047647234
1326627863181844480
1326627863584518145
1326627863588724736
1326627863508873216
1326627863592841222
1326627863655690240
1326627863697747968
1326627863651553289
1326627863760674821
1326627863747964929
1326627863869665281
1326627863978708999
1326627863836110853
1326627864087687168
1326627864192671744
1326627864196882433
1326627864226099200
1326627864347750400
1326627864419201029
1326627864477773824
1326627864519831553
1326627864553299969
1326627864700219392
1326627864670838786
1326627864670834688
1326627864712794112
1326627864817627137
1326627864981204994
1326627865060765696
1326627865304051713
1326627865232896001
1326627865312415744
1326627865337737216
1326627865341857793
1326627865358643200
1326627865484546055
1326627865555767296
1326627865480355840
1326627865681670144
1326627865681649664
1326627865530675201
1326627866826600449
1326627866935701504
1326627866935635973
1326627866889641990
1326627867061592067
1326627867007053831
1326627867149656065
1326627867220992001
1326627867308994560
1326627867367624705
1326627867577495554
1326627867619450880
1326627867564838914
1326627867669696518
1326627867585875969
1326627867669786631
1326627867770433537
1326627867673944064
1326627867883663363
1326627867950804995
1326627867980148738
1326627867858448385
1326627867954962432
1326627868030365696
1326627868206641153
1326627868160483328
1326627868382670848
1326627868361678848
1326627868416208896
1326627868416356352
1326627868554694662
1326627868382797826
1326627868588306440
1326627868613472256
1326627868990955520
1326627869066485761
1326627868965744646
1326627869188034561
1326627869284511745
1326627869242617857
1326627869158727686
1326627869351579649
1326627869343309826
1326627869422985217
1326627869490094081
1326627869691432962
1326627869506801664
1326627869771128834
1326627869792067587
1326627869968248833
1326627871029391362
1326627871071367172
1326627871151038465
1326627871104802816
1326627871222161408
1326627871159422976
1326627871373332486
1326627871432073216
1326627871557890055
1326627871683715072
1326627871671029760
1326627871859695616
1326627871838892034
1326627871977320451
1326627871880867843
1326627872015065096
1326627872061202434
1326627871855685636
1326627872161853448
1326627872073674752
1326627872199507968
1326627872262426624
1326627872157528064
1326627872157675529
1326627872304472064
1326627872295903232
1326627872363130884
1326627872132493317
1326627872489021441
1326627872463855617
1326627872325373952
1326627872665112579
1326627872673468416
1326627872661000199
1326627872598073346
1326627872799404032
1326627872786812930
1326627872811835392
1326627872899956737
1326627872887496706
1326627872153481216
1326627872929320961
1326627872845537283
1326627872895868929
1326627873059450883
1326627872954601477
1326627873071960067
1326627872996548612
1326627873004818432
1326627873025912833
1326627875328385024
1326627875169198083
1326627875311792129
1326627875441815557
1326627875487961088
1326627875420827648
1326627875567513601
1326627875513114624
1326627875588501504
1326627875684904960
1326627875668205568
1326627875571773440
1326627875886428162
1326627876125487104
1326627876129697793
1326627875890618368
1326627876347678720
1326627876507152387
1326627876557479938
1326627876599369740
1326627875886346245
1326627876687339520
1326627876670640128
1326627876775616514
1326627876595175426
1326627876763033601
1326627876557500416
1326627876821737473
1326627876997885956
1326627876846891015
1326627877203271680
1326627877127925761
1326627877291495425
1326627877203349505
1326627877283098630
1326627877379575814
1326627877480263687
1326627877505232897
1326627877496893441
1326627877652205572
1326627877727531008
1326627877593473034
1326627877878640641
1326627878037934081
1326627877962600450
1326627878017126401
1326627878142844928
1326627878025351168
1326627878184775680
1326627878428160003
1326627879401254915
1326627879879278593
1326627879757688833
1326627879942230017
1326627879954735104
1326627879988330496
1326627879908593665
1326627880005144578
1326627879829073921
1326627880294522880
###Markdown
Use the REST API to Collect Tweets Authorize the REST API
###Code
rest_auth = twitter.oauth.OAuth(OAUTH_TOKEN,OATH_TOKEN_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
rest_api = twitter.Twitter(auth=rest_auth)
###Output
_____no_output_____
###Markdown
Define the query for the REST API
###Code
count = 100 #number of returned tweets, default and max is 100
geocode = "38.4392897,-78.9412224,50mi" # defin the location, in Harrisonburg, VA
q = "election" #define the keywords, tweets contain election
###Output
_____no_output_____
###Markdown
The collected tweets will contain 'election' AND be located in Harrisonburg, VA
###Code
search_results = rest_api.search.tweets( count=count,q=q, geocode=geocode) #you can use both q and geocode
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at'])# print the date of the collected tweets
except:
pass
###Output
'Wed Nov 11 20:52:30 +0000 2020'
'Wed Nov 11 20:51:50 +0000 2020'
'Wed Nov 11 20:51:04 +0000 2020'
'Wed Nov 11 20:49:04 +0000 2020'
'Wed Nov 11 20:46:29 +0000 2020'
'Wed Nov 11 20:44:41 +0000 2020'
'Wed Nov 11 20:42:18 +0000 2020'
'Wed Nov 11 20:38:24 +0000 2020'
'Wed Nov 11 20:37:59 +0000 2020'
'Wed Nov 11 20:36:48 +0000 2020'
'Wed Nov 11 20:35:38 +0000 2020'
'Wed Nov 11 20:34:41 +0000 2020'
'Wed Nov 11 20:34:35 +0000 2020'
'Wed Nov 11 20:33:18 +0000 2020'
'Wed Nov 11 20:33:18 +0000 2020'
'Wed Nov 11 20:32:59 +0000 2020'
'Wed Nov 11 20:32:47 +0000 2020'
'Wed Nov 11 20:32:26 +0000 2020'
'Wed Nov 11 20:29:56 +0000 2020'
'Wed Nov 11 20:28:32 +0000 2020'
'Wed Nov 11 20:27:26 +0000 2020'
'Wed Nov 11 20:25:01 +0000 2020'
'Wed Nov 11 20:24:05 +0000 2020'
'Wed Nov 11 20:21:47 +0000 2020'
'Wed Nov 11 20:20:17 +0000 2020'
'Wed Nov 11 20:16:41 +0000 2020'
'Wed Nov 11 20:16:05 +0000 2020'
'Wed Nov 11 20:16:05 +0000 2020'
'Wed Nov 11 20:15:31 +0000 2020'
'Wed Nov 11 20:14:42 +0000 2020'
'Wed Nov 11 20:13:35 +0000 2020'
'Wed Nov 11 20:12:58 +0000 2020'
'Wed Nov 11 20:12:51 +0000 2020'
'Wed Nov 11 20:10:04 +0000 2020'
'Wed Nov 11 20:06:46 +0000 2020'
'Wed Nov 11 20:06:29 +0000 2020'
'Wed Nov 11 20:05:36 +0000 2020'
'Wed Nov 11 20:04:29 +0000 2020'
###Markdown
Continue fetching earlier tweets with the same query. YOU WILL REACH YOUR RATE LIMIT VERY FAST
###Code
since_id_old = 0
while(since_id_new != since_id_old):
since_id_old = since_id_new
search_results = rest_api.search.tweets( count=count,q=q,
geocode=geocode, max_id= since_id_new)
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at']) # print the date of the collected tweets
except:
pass
###Output
'Wed Nov 11 20:03:40 +0000 2020'
'Wed Nov 11 20:03:28 +0000 2020'
'Wed Nov 11 20:02:35 +0000 2020'
'Wed Nov 11 20:01:59 +0000 2020'
'Wed Nov 11 20:01:12 +0000 2020'
'Wed Nov 11 20:01:11 +0000 2020'
'Wed Nov 11 20:00:57 +0000 2020'
'Wed Nov 11 20:00:28 +0000 2020'
'Wed Nov 11 19:54:33 +0000 2020'
'Wed Nov 11 19:52:54 +0000 2020'
'Wed Nov 11 19:51:20 +0000 2020'
'Wed Nov 11 19:48:32 +0000 2020'
'Wed Nov 11 19:48:13 +0000 2020'
'Wed Nov 11 19:48:01 +0000 2020'
'Wed Nov 11 19:47:31 +0000 2020'
'Wed Nov 11 19:47:06 +0000 2020'
'Wed Nov 11 19:45:40 +0000 2020'
'Wed Nov 11 19:45:10 +0000 2020'
'Wed Nov 11 19:44:43 +0000 2020'
'Wed Nov 11 19:44:29 +0000 2020'
'Wed Nov 11 19:41:12 +0000 2020'
'Wed Nov 11 19:41:12 +0000 2020'
'Wed Nov 11 19:41:05 +0000 2020'
'Wed Nov 11 19:38:37 +0000 2020'
'Wed Nov 11 19:36:27 +0000 2020'
'Wed Nov 11 19:33:42 +0000 2020'
'Wed Nov 11 19:33:05 +0000 2020'
'Wed Nov 11 19:32:20 +0000 2020'
'Wed Nov 11 19:30:45 +0000 2020'
'Wed Nov 11 19:30:07 +0000 2020'
'Wed Nov 11 19:29:47 +0000 2020'
'Wed Nov 11 19:28:00 +0000 2020'
'Wed Nov 11 19:26:42 +0000 2020'
'Wed Nov 11 19:25:33 +0000 2020'
'Wed Nov 11 19:25:11 +0000 2020'
'Wed Nov 11 19:25:00 +0000 2020'
'Wed Nov 11 19:24:20 +0000 2020'
'Wed Nov 11 19:21:37 +0000 2020'
'Wed Nov 11 19:21:19 +0000 2020'
'Wed Nov 11 19:20:49 +0000 2020'
'Wed Nov 11 19:20:10 +0000 2020'
'Wed Nov 11 19:18:11 +0000 2020'
'Wed Nov 11 19:18:09 +0000 2020'
'Wed Nov 11 19:17:35 +0000 2020'
'Wed Nov 11 19:15:34 +0000 2020'
'Wed Nov 11 19:15:11 +0000 2020'
'Wed Nov 11 19:14:58 +0000 2020'
'Wed Nov 11 19:14:46 +0000 2020'
'Wed Nov 11 19:14:22 +0000 2020'
'Wed Nov 11 19:13:53 +0000 2020'
'Wed Nov 11 19:13:46 +0000 2020'
'Wed Nov 11 19:13:30 +0000 2020'
'Wed Nov 11 19:13:25 +0000 2020'
'Wed Nov 11 19:13:03 +0000 2020'
'Wed Nov 11 19:11:54 +0000 2020'
'Wed Nov 11 19:11:20 +0000 2020'
'Wed Nov 11 19:11:17 +0000 2020'
'Wed Nov 11 19:11:15 +0000 2020'
'Wed Nov 11 19:10:49 +0000 2020'
'Wed Nov 11 19:10:06 +0000 2020'
'Wed Nov 11 19:09:51 +0000 2020'
'Wed Nov 11 19:09:43 +0000 2020'
'Wed Nov 11 19:09:09 +0000 2020'
'Wed Nov 11 19:08:14 +0000 2020'
'Wed Nov 11 19:08:00 +0000 2020'
'Wed Nov 11 19:06:12 +0000 2020'
'Wed Nov 11 19:03:30 +0000 2020'
'Wed Nov 11 19:03:29 +0000 2020'
'Wed Nov 11 19:00:47 +0000 2020'
'Wed Nov 11 19:00:18 +0000 2020'
'Wed Nov 11 18:55:44 +0000 2020'
'Wed Nov 11 18:55:16 +0000 2020'
'Wed Nov 11 18:54:21 +0000 2020'
'Wed Nov 11 18:53:17 +0000 2020'
'Wed Nov 11 18:53:16 +0000 2020'
'Wed Nov 11 18:53:09 +0000 2020'
'Wed Nov 11 18:52:46 +0000 2020'
'Wed Nov 11 18:52:37 +0000 2020'
'Wed Nov 11 18:51:49 +0000 2020'
'Wed Nov 11 18:51:41 +0000 2020'
'Wed Nov 11 18:48:52 +0000 2020'
'Wed Nov 11 18:48:13 +0000 2020'
'Wed Nov 11 18:47:39 +0000 2020'
'Wed Nov 11 18:46:42 +0000 2020'
'Wed Nov 11 18:46:40 +0000 2020'
'Wed Nov 11 18:46:23 +0000 2020'
'Wed Nov 11 18:45:55 +0000 2020'
'Wed Nov 11 18:45:16 +0000 2020'
'Wed Nov 11 18:45:05 +0000 2020'
'Wed Nov 11 18:44:39 +0000 2020'
'Wed Nov 11 18:44:34 +0000 2020'
'Wed Nov 11 18:43:30 +0000 2020'
'Wed Nov 11 18:43:13 +0000 2020'
'Wed Nov 11 18:43:03 +0000 2020'
'Wed Nov 11 18:41:34 +0000 2020'
'Wed Nov 11 18:40:17 +0000 2020'
'Wed Nov 11 18:40:06 +0000 2020'
'Wed Nov 11 18:39:55 +0000 2020'
'Wed Nov 11 18:39:53 +0000 2020'
'Wed Nov 11 18:39:45 +0000 2020'
'Wed Nov 11 18:39:25 +0000 2020'
'Wed Nov 11 18:39:11 +0000 2020'
'Wed Nov 11 18:39:06 +0000 2020'
'Wed Nov 11 18:38:55 +0000 2020'
'Wed Nov 11 18:38:46 +0000 2020'
'Wed Nov 11 18:38:39 +0000 2020'
'Wed Nov 11 18:38:32 +0000 2020'
'Wed Nov 11 18:38:26 +0000 2020'
'Wed Nov 11 18:38:03 +0000 2020'
'Wed Nov 11 18:37:33 +0000 2020'
'Wed Nov 11 18:37:30 +0000 2020'
'Wed Nov 11 18:36:59 +0000 2020'
'Wed Nov 11 18:36:52 +0000 2020'
'Wed Nov 11 18:36:48 +0000 2020'
'Wed Nov 11 18:36:42 +0000 2020'
'Wed Nov 11 18:36:41 +0000 2020'
'Wed Nov 11 18:36:32 +0000 2020'
'Wed Nov 11 18:35:17 +0000 2020'
'Wed Nov 11 18:34:26 +0000 2020'
'Wed Nov 11 18:34:21 +0000 2020'
'Wed Nov 11 18:34:09 +0000 2020'
'Wed Nov 11 18:33:59 +0000 2020'
'Wed Nov 11 18:33:47 +0000 2020'
'Wed Nov 11 18:32:40 +0000 2020'
'Wed Nov 11 18:32:34 +0000 2020'
'Wed Nov 11 18:32:02 +0000 2020'
'Wed Nov 11 18:31:58 +0000 2020'
'Wed Nov 11 18:31:46 +0000 2020'
'Wed Nov 11 18:31:43 +0000 2020'
'Wed Nov 11 18:31:00 +0000 2020'
'Wed Nov 11 18:30:32 +0000 2020'
'Wed Nov 11 18:30:18 +0000 2020'
'Wed Nov 11 18:29:30 +0000 2020'
'Wed Nov 11 18:28:11 +0000 2020'
'Wed Nov 11 18:28:06 +0000 2020'
'Wed Nov 11 18:27:59 +0000 2020'
'Wed Nov 11 18:27:45 +0000 2020'
'Wed Nov 11 18:27:42 +0000 2020'
'Wed Nov 11 18:27:37 +0000 2020'
'Wed Nov 11 18:27:32 +0000 2020'
'Wed Nov 11 18:27:06 +0000 2020'
'Wed Nov 11 18:27:05 +0000 2020'
'Wed Nov 11 18:26:45 +0000 2020'
'Wed Nov 11 18:26:27 +0000 2020'
'Wed Nov 11 18:26:13 +0000 2020'
'Wed Nov 11 18:26:11 +0000 2020'
'Wed Nov 11 18:25:33 +0000 2020'
'Wed Nov 11 18:25:30 +0000 2020'
'Wed Nov 11 18:25:29 +0000 2020'
'Wed Nov 11 18:25:24 +0000 2020'
'Wed Nov 11 18:25:07 +0000 2020'
'Wed Nov 11 18:24:59 +0000 2020'
'Wed Nov 11 18:24:55 +0000 2020'
'Wed Nov 11 18:24:44 +0000 2020'
'Wed Nov 11 18:24:44 +0000 2020'
'Wed Nov 11 18:24:37 +0000 2020'
'Wed Nov 11 18:24:21 +0000 2020'
'Wed Nov 11 18:24:16 +0000 2020'
'Wed Nov 11 18:24:16 +0000 2020'
'Wed Nov 11 18:24:14 +0000 2020'
'Wed Nov 11 18:24:12 +0000 2020'
'Wed Nov 11 18:23:52 +0000 2020'
'Wed Nov 11 18:23:50 +0000 2020'
'Wed Nov 11 18:23:35 +0000 2020'
'Wed Nov 11 18:23:34 +0000 2020'
'Wed Nov 11 18:23:29 +0000 2020'
'Wed Nov 11 18:23:28 +0000 2020'
'Wed Nov 11 18:23:24 +0000 2020'
'Wed Nov 11 18:23:16 +0000 2020'
'Wed Nov 11 18:23:07 +0000 2020'
'Wed Nov 11 18:23:03 +0000 2020'
'Wed Nov 11 18:23:00 +0000 2020'
'Wed Nov 11 18:22:50 +0000 2020'
'Wed Nov 11 18:22:48 +0000 2020'
'Wed Nov 11 18:22:44 +0000 2020'
'Wed Nov 11 18:22:41 +0000 2020'
'Wed Nov 11 18:22:37 +0000 2020'
'Wed Nov 11 18:22:31 +0000 2020'
'Wed Nov 11 18:22:26 +0000 2020'
'Wed Nov 11 18:22:01 +0000 2020'
'Wed Nov 11 18:21:59 +0000 2020'
'Wed Nov 11 18:21:58 +0000 2020'
'Wed Nov 11 18:21:57 +0000 2020'
'Wed Nov 11 18:21:51 +0000 2020'
'Wed Nov 11 18:21:43 +0000 2020'
'Wed Nov 11 18:21:41 +0000 2020'
'Wed Nov 11 18:21:31 +0000 2020'
'Wed Nov 11 18:21:20 +0000 2020'
'Wed Nov 11 18:21:14 +0000 2020'
'Wed Nov 11 18:21:13 +0000 2020'
'Wed Nov 11 18:21:12 +0000 2020'
'Wed Nov 11 18:21:06 +0000 2020'
'Wed Nov 11 18:21:03 +0000 2020'
'Wed Nov 11 18:21:03 +0000 2020'
'Wed Nov 11 18:21:02 +0000 2020'
'Wed Nov 11 18:20:53 +0000 2020'
'Wed Nov 11 18:20:50 +0000 2020'
'Wed Nov 11 18:20:45 +0000 2020'
'Wed Nov 11 18:20:42 +0000 2020'
'Wed Nov 11 18:20:35 +0000 2020'
'Wed Nov 11 18:20:29 +0000 2020'
'Wed Nov 11 18:20:25 +0000 2020'
'Wed Nov 11 18:20:06 +0000 2020'
'Wed Nov 11 18:19:59 +0000 2020'
'Wed Nov 11 18:19:45 +0000 2020'
'Wed Nov 11 18:19:41 +0000 2020'
'Wed Nov 11 18:19:38 +0000 2020'
'Wed Nov 11 18:19:38 +0000 2020'
'Wed Nov 11 18:19:28 +0000 2020'
'Wed Nov 11 18:19:25 +0000 2020'
'Wed Nov 11 18:19:22 +0000 2020'
'Wed Nov 11 18:19:19 +0000 2020'
'Wed Nov 11 18:19:18 +0000 2020'
'Wed Nov 11 18:19:16 +0000 2020'
'Wed Nov 11 18:19:12 +0000 2020'
'Wed Nov 11 18:19:12 +0000 2020'
'Wed Nov 11 18:18:54 +0000 2020'
'Wed Nov 11 18:18:53 +0000 2020'
'Wed Nov 11 18:18:51 +0000 2020'
'Wed Nov 11 18:18:49 +0000 2020'
'Wed Nov 11 18:18:48 +0000 2020'
'Wed Nov 11 18:18:42 +0000 2020'
'Wed Nov 11 18:18:41 +0000 2020'
'Wed Nov 11 18:18:40 +0000 2020'
'Wed Nov 11 18:18:18 +0000 2020'
'Wed Nov 11 18:17:58 +0000 2020'
'Wed Nov 11 18:17:23 +0000 2020'
'Wed Nov 11 18:16:17 +0000 2020'
'Wed Nov 11 18:15:14 +0000 2020'
'Wed Nov 11 18:15:12 +0000 2020'
'Wed Nov 11 18:15:03 +0000 2020'
'Wed Nov 11 18:13:05 +0000 2020'
'Wed Nov 11 18:12:31 +0000 2020'
'Wed Nov 11 18:11:40 +0000 2020'
'Wed Nov 11 18:11:08 +0000 2020'
'Wed Nov 11 18:10:55 +0000 2020'
'Wed Nov 11 18:10:43 +0000 2020'
'Wed Nov 11 18:10:08 +0000 2020'
'Wed Nov 11 18:09:20 +0000 2020'
'Wed Nov 11 18:09:02 +0000 2020'
'Wed Nov 11 18:08:57 +0000 2020'
'Wed Nov 11 18:08:04 +0000 2020'
'Wed Nov 11 18:06:20 +0000 2020'
'Wed Nov 11 18:06:02 +0000 2020'
'Wed Nov 11 18:05:20 +0000 2020'
'Wed Nov 11 18:04:21 +0000 2020'
'Wed Nov 11 18:04:11 +0000 2020'
'Wed Nov 11 18:04:04 +0000 2020'
'Wed Nov 11 18:03:41 +0000 2020'
'Wed Nov 11 18:02:58 +0000 2020'
'Wed Nov 11 18:01:43 +0000 2020'
'Wed Nov 11 18:01:17 +0000 2020'
'Wed Nov 11 18:01:02 +0000 2020'
'Wed Nov 11 18:00:19 +0000 2020'
'Wed Nov 11 17:59:59 +0000 2020'
'Wed Nov 11 17:59:15 +0000 2020'
'Wed Nov 11 17:59:08 +0000 2020'
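###Markdown
One possible way to slow the requests down is to pause between result pages. The sketch below reuses `rest_api`, `count`, `q`, `geocode`, `tweet_collection`, and `since_id_new` from the cells above and simply waits a few seconds between calls with `time.sleep`.
###Code
import time

since_id_old = 0
while(since_id_new != since_id_old):
    since_id_old = since_id_new
    search_results = rest_api.search.tweets(count=count, q=q,
                                            geocode=geocode, max_id=since_id_new)
    statuses = search_results["statuses"]
    since_id_new = statuses[-1]['id']
    for statuse in statuses:
        try:
            tweet_collection.insert_one(statuse)
        except:
            pass
    time.sleep(5) # wait a few seconds between pages to stay under the search rate limit
###Output
_____no_output_____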
###Markdown
View the Collected Tweets Print the number of tweets and unique twitter users
###Code
print(tweet_collection.estimated_document_count())# number of tweets collected
user_cursor = tweet_collection.distinct("user.id")
print (len(user_cursor)) # number of unique Twitter users
###Output
1086
1038
###Markdown
Create a text index and print the Tweets containing specific keywords.
###Code
tweet_collection.create_index([("text", pymongo.TEXT)], name='text_index', default_language='english') # create a text index
###Output
_____no_output_____
###Markdown
Create a cursor to query tweets with the created index
###Code
tweet_cursor = tweet_collection.find({"$text": {"$search": "vote"}}) # return tweets contain vote
###Output
_____no_output_____
###Markdown
Use pprint to display tweets
###Code
for document in tweet_cursor[0:10]: # display the first 10 tweets from the query
try:
print ('----')
# pprint (document) # use pprint to print the entire tweet document
print ('name:', document["user"]["name"]) # user name
print ('text:', document["text"]) # tweets
except:
print ("***error in encoding")
pass
tweet_cursor = tweet_collection.find({"$text": {"$search": "vote"}}) # return tweets contain vote
###Output
_____no_output_____
###Markdown
Use pandas to display tweets
###Code
tweet_df = pd.DataFrame(list(tweet_cursor ))
tweet_df[:10] #display the first 10 tweets
tweet_df["favorite_count"].hist() # create a histogram show the favorite count
###Output
_____no_output_____
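###Markdown
The collection can also be summarized directly inside MongoDB. As an optional sketch, the aggregation pipeline below groups the tweets in `tweet_collection` by screen name and lists the most active accounts.
###Code
# Count tweets per user and print the 10 most active accounts
pipeline = [
    {"$group": {"_id": "$user.screen_name", "tweet_count": {"$sum": 1}}},
    {"$sort": {"tweet_count": -1}},
    {"$limit": 10},
]
for doc in tweet_collection.aggregate(pipeline):
    print(doc["_id"], doc["tweet_count"])
###Output
_____no_output_____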
###Markdown
Know the second and third lines of the cell below for the quiz
###Code
client = MongoClient(mongod_connect)
db = client.demo # use or create a database named demo
tweet_collection = db.tweet_collection #use or create a collection named tweet_collection
tweet_collection.create_index([("id", pymongo.ASCENDING)],unique = True) # make sure the collected tweets are unique
rest_auth = twitter.oauth.OAuth(OAUTH_TOKEN,OATH_TOKEN_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
rest_api = twitter.Twitter(auth=rest_auth)
count = 100 #number of returned tweets, default and max is 100
geocode = "38.4392897,-78.9412224,50mi" # defin the location, in Harrisonburg, VA
q = "election" #define the keywords, tweets contain election
search_results = rest_api.search.tweets( count=count,q=q, geocode=geocode) #you can use both q and geocode
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at'])# print the date of the collected tweets
except:
pass
###Output
'Mon Nov 01 21:00:59 +0000 2021'
'Mon Nov 01 20:53:30 +0000 2021'
'Mon Nov 01 20:53:02 +0000 2021'
'Mon Nov 01 20:52:26 +0000 2021'
'Mon Nov 01 20:37:07 +0000 2021'
'Mon Nov 01 20:22:42 +0000 2021'
'Mon Nov 01 20:05:11 +0000 2021'
'Mon Nov 01 19:25:39 +0000 2021'
'Mon Nov 01 19:13:57 +0000 2021'
'Mon Nov 01 19:08:46 +0000 2021'
'Mon Nov 01 19:07:19 +0000 2021'
'Mon Nov 01 19:00:47 +0000 2021'
'Mon Nov 01 18:49:22 +0000 2021'
'Mon Nov 01 18:33:51 +0000 2021'
'Mon Nov 01 18:30:09 +0000 2021'
'Mon Nov 01 18:17:19 +0000 2021'
'Mon Nov 01 18:08:49 +0000 2021'
'Mon Nov 01 18:02:33 +0000 2021'
'Mon Nov 01 18:00:10 +0000 2021'
'Mon Nov 01 17:59:53 +0000 2021'
'Mon Nov 01 17:57:53 +0000 2021'
'Mon Nov 01 17:27:06 +0000 2021'
'Mon Nov 01 17:18:56 +0000 2021'
'Mon Nov 01 17:12:46 +0000 2021'
'Mon Nov 01 17:07:01 +0000 2021'
'Mon Nov 01 16:57:09 +0000 2021'
'Mon Nov 01 16:48:57 +0000 2021'
'Mon Nov 01 16:45:07 +0000 2021'
'Mon Nov 01 16:42:37 +0000 2021'
'Mon Nov 01 16:40:18 +0000 2021'
'Mon Nov 01 16:29:51 +0000 2021'
'Mon Nov 01 16:29:08 +0000 2021'
'Mon Nov 01 16:28:58 +0000 2021'
'Mon Nov 01 16:05:00 +0000 2021'
'Mon Nov 01 16:02:05 +0000 2021'
'Mon Nov 01 15:38:09 +0000 2021'
'Mon Nov 01 15:35:33 +0000 2021'
'Mon Nov 01 15:12:24 +0000 2021'
'Mon Nov 01 15:04:06 +0000 2021'
'Mon Nov 01 14:59:37 +0000 2021'
'Mon Nov 01 14:32:26 +0000 2021'
'Mon Nov 01 14:30:26 +0000 2021'
'Mon Nov 01 13:51:08 +0000 2021'
'Mon Nov 01 13:47:27 +0000 2021'
'Mon Nov 01 13:38:55 +0000 2021'
'Mon Nov 01 13:36:56 +0000 2021'
'Mon Nov 01 13:13:38 +0000 2021'
'Mon Nov 01 13:09:25 +0000 2021'
'Mon Nov 01 13:03:43 +0000 2021'
'Mon Nov 01 12:56:32 +0000 2021'
'Mon Nov 01 12:55:41 +0000 2021'
'Mon Nov 01 12:47:27 +0000 2021'
'Mon Nov 01 12:47:25 +0000 2021'
'Mon Nov 01 12:43:24 +0000 2021'
'Mon Nov 01 12:42:01 +0000 2021'
'Mon Nov 01 12:41:47 +0000 2021'
'Mon Nov 01 12:29:40 +0000 2021'
'Mon Nov 01 12:28:45 +0000 2021'
'Mon Nov 01 12:18:11 +0000 2021'
'Mon Nov 01 12:12:53 +0000 2021'
'Mon Nov 01 12:04:15 +0000 2021'
'Mon Nov 01 12:02:05 +0000 2021'
'Mon Nov 01 10:00:31 +0000 2021'
'Mon Nov 01 06:55:33 +0000 2021'
'Mon Nov 01 02:52:58 +0000 2021'
'Mon Nov 01 02:52:55 +0000 2021'
'Mon Nov 01 02:52:41 +0000 2021'
'Mon Nov 01 02:52:40 +0000 2021'
'Mon Nov 01 02:40:00 +0000 2021'
'Mon Nov 01 01:19:06 +0000 2021'
'Mon Nov 01 00:48:40 +0000 2021'
'Mon Nov 01 00:31:28 +0000 2021'
'Sun Oct 31 23:42:36 +0000 2021'
'Sun Oct 31 23:41:28 +0000 2021'
'Sun Oct 31 23:36:00 +0000 2021'
'Sun Oct 31 23:34:29 +0000 2021'
'Sun Oct 31 23:24:49 +0000 2021'
'Sun Oct 31 23:23:36 +0000 2021'
'Sun Oct 31 23:22:06 +0000 2021'
'Sun Oct 31 22:47:38 +0000 2021'
'Sun Oct 31 21:56:21 +0000 2021'
'Sun Oct 31 21:45:29 +0000 2021'
'Sun Oct 31 21:36:54 +0000 2021'
'Sun Oct 31 21:33:15 +0000 2021'
'Sun Oct 31 20:21:24 +0000 2021'
'Sun Oct 31 20:06:35 +0000 2021'
'Sun Oct 31 19:13:01 +0000 2021'
'Sun Oct 31 19:05:07 +0000 2021'
'Sun Oct 31 18:53:15 +0000 2021'
'Sun Oct 31 18:43:43 +0000 2021'
'Sun Oct 31 18:41:39 +0000 2021'
'Sun Oct 31 18:16:00 +0000 2021'
'Sun Oct 31 17:57:46 +0000 2021'
'Sun Oct 31 17:24:00 +0000 2021'
'Sun Oct 31 17:11:33 +0000 2021'
'Sun Oct 31 17:08:45 +0000 2021'
'Sun Oct 31 16:36:12 +0000 2021'
'Sun Oct 31 16:21:43 +0000 2021'
'Sun Oct 31 16:20:05 +0000 2021'
'Sun Oct 31 16:07:03 +0000 2021'
###Markdown
Lec 9 Install Python Libraries
###Code
!pip install pymongo
!pip install pymongo[srv]
!pip install dnspython
!pip install tweepy
!pip install twitter
import pymongo
from pymongo import MongoClient
import json
import tweepy
import twitter
from pprint import pprint
import configparser
import pandas as pd
###Output
_____no_output_____
###Markdown
Load the Authorization Info
###Code
config = configparser.ConfigParser()
config.read('config.ini')
CONSUMER_KEY = config['mytwitter']['api_key']
CONSUMER_SECRET = config['mytwitter']['api_secrete']
OAUTH_TOKEN = config['mytwitter']['access_token']
OATH_TOKEN_SECRET = config['mytwitter']['access_secrete']
mongod_connect = config['mymongo']['connection']
###Output
_____no_output_____
###Markdown
Connect to the MongoDB Cluster
###Code
client = MongoClient(mongod_connect)
db = client.demo # use or create a database named demo
tweet_collection = db.tweet_collection #use or create a collection named tweet_collection
tweet_collection.create_index([("id", pymongo.ASCENDING)],unique = True) # make sure the collected tweets are unique
###Output
_____no_output_____
###Markdown
Use the REST API to Collect Tweets
###Code
rest_auth = twitter.oauth.OAuth(OAUTH_TOKEN,OATH_TOKEN_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
rest_api = twitter.Twitter(auth=rest_auth)
count = 100 #number of returned tweets, default and max is 100
geocode = "38.4392897,-78.9412224,50mi" # defin the location, in Harrisonburg, VA
q = "election" #define the keywords, tweets contain election
search_results = rest_api.search.tweets( count=count,q=q, geocode=geocode) #you can use both q and geocode
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at'])# print the date of the collected tweets
except:
pass
###Output
'Mon Nov 01 21:00:59 +0000 2021'
'Mon Nov 01 20:53:30 +0000 2021'
'Mon Nov 01 20:53:02 +0000 2021'
'Mon Nov 01 20:52:26 +0000 2021'
'Mon Nov 01 20:37:07 +0000 2021'
'Mon Nov 01 20:22:42 +0000 2021'
'Mon Nov 01 20:05:11 +0000 2021'
'Mon Nov 01 19:25:39 +0000 2021'
'Mon Nov 01 19:13:57 +0000 2021'
'Mon Nov 01 19:08:46 +0000 2021'
'Mon Nov 01 19:07:19 +0000 2021'
'Mon Nov 01 19:00:47 +0000 2021'
'Mon Nov 01 18:49:22 +0000 2021'
'Mon Nov 01 18:33:51 +0000 2021'
'Mon Nov 01 18:30:09 +0000 2021'
'Mon Nov 01 18:17:19 +0000 2021'
'Mon Nov 01 18:08:49 +0000 2021'
'Mon Nov 01 18:02:33 +0000 2021'
'Mon Nov 01 18:00:10 +0000 2021'
'Mon Nov 01 17:59:53 +0000 2021'
'Mon Nov 01 17:57:53 +0000 2021'
'Mon Nov 01 17:27:06 +0000 2021'
'Mon Nov 01 17:18:56 +0000 2021'
'Mon Nov 01 17:12:46 +0000 2021'
'Mon Nov 01 17:07:01 +0000 2021'
'Mon Nov 01 16:57:09 +0000 2021'
'Mon Nov 01 16:48:57 +0000 2021'
'Mon Nov 01 16:45:07 +0000 2021'
'Mon Nov 01 16:42:37 +0000 2021'
'Mon Nov 01 16:40:18 +0000 2021'
'Mon Nov 01 16:29:51 +0000 2021'
'Mon Nov 01 16:29:08 +0000 2021'
'Mon Nov 01 16:28:58 +0000 2021'
'Mon Nov 01 16:05:00 +0000 2021'
'Mon Nov 01 16:02:05 +0000 2021'
'Mon Nov 01 15:38:09 +0000 2021'
'Mon Nov 01 15:35:33 +0000 2021'
'Mon Nov 01 15:12:24 +0000 2021'
'Mon Nov 01 15:04:06 +0000 2021'
'Mon Nov 01 14:59:37 +0000 2021'
'Mon Nov 01 14:32:26 +0000 2021'
'Mon Nov 01 14:30:26 +0000 2021'
'Mon Nov 01 13:51:08 +0000 2021'
'Mon Nov 01 13:47:27 +0000 2021'
'Mon Nov 01 13:38:55 +0000 2021'
'Mon Nov 01 13:36:56 +0000 2021'
'Mon Nov 01 13:13:38 +0000 2021'
'Mon Nov 01 13:09:25 +0000 2021'
'Mon Nov 01 13:03:43 +0000 2021'
'Mon Nov 01 12:56:32 +0000 2021'
'Mon Nov 01 12:55:41 +0000 2021'
'Mon Nov 01 12:47:27 +0000 2021'
'Mon Nov 01 12:47:25 +0000 2021'
'Mon Nov 01 12:43:24 +0000 2021'
'Mon Nov 01 12:42:01 +0000 2021'
'Mon Nov 01 12:41:47 +0000 2021'
'Mon Nov 01 12:29:40 +0000 2021'
'Mon Nov 01 12:28:45 +0000 2021'
'Mon Nov 01 12:18:11 +0000 2021'
'Mon Nov 01 12:12:53 +0000 2021'
'Mon Nov 01 12:04:15 +0000 2021'
'Mon Nov 01 12:02:05 +0000 2021'
'Mon Nov 01 10:00:31 +0000 2021'
'Mon Nov 01 06:55:33 +0000 2021'
'Mon Nov 01 02:52:58 +0000 2021'
'Mon Nov 01 02:52:55 +0000 2021'
'Mon Nov 01 02:52:41 +0000 2021'
'Mon Nov 01 02:52:40 +0000 2021'
'Mon Nov 01 02:40:00 +0000 2021'
'Mon Nov 01 01:19:06 +0000 2021'
'Mon Nov 01 00:48:40 +0000 2021'
'Mon Nov 01 00:31:28 +0000 2021'
'Sun Oct 31 23:42:36 +0000 2021'
'Sun Oct 31 23:41:28 +0000 2021'
'Sun Oct 31 23:36:00 +0000 2021'
'Sun Oct 31 23:34:29 +0000 2021'
'Sun Oct 31 23:24:49 +0000 2021'
'Sun Oct 31 23:23:36 +0000 2021'
'Sun Oct 31 23:22:06 +0000 2021'
'Sun Oct 31 22:47:38 +0000 2021'
'Sun Oct 31 21:56:21 +0000 2021'
'Sun Oct 31 21:45:29 +0000 2021'
'Sun Oct 31 21:36:54 +0000 2021'
'Sun Oct 31 21:33:15 +0000 2021'
'Sun Oct 31 20:21:24 +0000 2021'
'Sun Oct 31 20:06:35 +0000 2021'
'Sun Oct 31 19:13:01 +0000 2021'
'Sun Oct 31 19:05:07 +0000 2021'
'Sun Oct 31 18:53:15 +0000 2021'
'Sun Oct 31 18:43:43 +0000 2021'
'Sun Oct 31 18:41:39 +0000 2021'
'Sun Oct 31 18:16:00 +0000 2021'
'Sun Oct 31 17:57:46 +0000 2021'
'Sun Oct 31 17:24:00 +0000 2021'
'Sun Oct 31 17:11:33 +0000 2021'
'Sun Oct 31 17:08:45 +0000 2021'
'Sun Oct 31 16:36:12 +0000 2021'
'Sun Oct 31 16:21:43 +0000 2021'
'Sun Oct 31 16:20:05 +0000 2021'
'Sun Oct 31 16:07:03 +0000 2021'
###Markdown
Collect Tweets into MongoDB Install Python librariesYou may need to restart your Jupyter Notebook instance after installing those libraries.
###Code
!pip install pymongo
!pip install pymongo[srv]
!pip install dnspython
!pip install tweepy
!pip install twitter
###Output
Requirement already satisfied: twitter in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (1.18.0)
WARNING: You are using pip version 20.0.2; however, version 20.2.4 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/python3/bin/python -m pip install --upgrade pip' command.
###Markdown
Import Python libraries
###Code
import pymongo
from pymongo import MongoClient
import json
import tweepy
import twitter
from pprint import pprint
import configparser
import pandas as pd
###Output
_____no_output_____
###Markdown
Load the Authorization Info Save database connection info and API keys in a config.ini file and use configparser to load the authorization info.
###Code
config = configparser.ConfigParser()
config.read('config.ini')
CONSUMER_KEY = config['mytwitter']['api_key']
CONSUMER_SECRET = config['mytwitter']['api_secrete']
OAUTH_TOKEN = config['mytwitter']['access_token']
OATH_TOKEN_SECRET = config['mytwitter']['access_secrete']
mongod_connect = config['mymongo']['connection']
###Output
_____no_output_____
###Markdown
Connect to the MongoDB Cluster
###Code
client = MongoClient(mongod_connect)
db = client.gp5 # use or create a database named gp5
tweet_collection = db.tweet_collection #use or create a collection named tweet_collection
tweet_collection.create_index([("id", pymongo.ASCENDING)],unique = True) # make sure the collected tweets are unique
###Output
_____no_output_____
###Markdown
Use the Streaming API to Collect Tweets Authorize the Stream API
###Code
stream_auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
stream_auth.set_access_token(OAUTH_TOKEN, OATH_TOKEN_SECRET)
strem_api = tweepy.API(stream_auth)
###Output
_____no_output_____
###Markdown
Define the query for the Stream API
###Code
track = ['election'] # define the keywords, tweets contain election
locations = [-78.9326449,38.4150904,-78.8816972,38.4450731] #define the location, in Harrisonburg, VA
###Output
_____no_output_____
###Markdown
The collected tweets will contain 'election' OR be located in Harrisonburg, VA
###Code
class MyStreamListener(tweepy.StreamListener):
def on_status(self, status):
print (status.id_str)
try:
tweet_collection.insert_one(status._json)
except:
pass
def on_error(self, status_code):
if status_code == 420:
#returning False in on_data disconnects the stream
return False
myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth = strem_api.auth, listener=myStreamListener)
myStream.filter(track=track)# (locations = locations) #Use either track or locations
###Output
1323304991533240327
1323304991453491202
1323304991415742466
1323304991424188422
1323304991675863040
1323304991604486145
1323304991659036675
1323304991470325760
1323304991596032000
1323304991696846851
1323304991793303555
1323304991948333056
1323304991843639296
1323304991982051329
1323304991998824451
1323304992137240576
1323304992221134849
1323304992145625088
1323304992300769280
1323304992225153024
1323304992296431617
1323304992447614978
1323304992501977089
1323304992690888705
1323304992623779840
1323304992426598400
1323304992770502662
1323304992909021185
1323304992904806401
1323304992992841735
1323304993068376066
1323304992648953858
1323304993001263104
1323304993059794945
1323304993110319104
1323304993231790080
1323304993236070402
1323304993181630464
1323304993156464643
1323304993235951617
1323304993064054784
1323304993152225280
1323304993353584645
1323304993353461761
1323304993299062784
1323304993349345281
1323304993399709698
1323304993567338498
1323304993588420608
1323304993487790080
1323304993290653697
1323304993697505289
1323304993693200385
1323304993806520320
1323304993928155136
1323304993907220485
1323304993882034181
1323304994079043585
1323304994037239814
1323304994053910529
1323304993974267905
1323304994142113794
1323304994079010817
1323304994221682688
1323304994284597249
1323304994276282374
1323304994314067968
1323304994204954624
1323304994267930625
1323304994263638017
1323304994507051009
1323304996671291392
1323304996750921729
1323304996675485717
1323304996809678848
1323304996738400259
1323304996830588928
1323304996801286147
1323304996906164225
1323304997044604928
1323304997107331073
1323304997233328130
1323304997124276230
1323304997384265729
1323304997329817601
1323304997333929991
1323304997384314880
1323304997442985984
1323304997476573194
1323304997426237441
1323304997392687104
1323304997635936257
1323304997602365441
1323304997614952453
1323304997736665090
1323304997837180930
1323304997866606592
1323304997853880320
1323304997979901952
1323304997912674305
1323304997925257216
1323304997967196160
1323304998210478080
1323304998357372931
1323304997862346752
1323304998349012992
1323304998357401600
1323304998432919557
1323304998491619329
1323304998474813441
1323304998495768578
1323304998474829824
1323304998558748674
1323304998684545025
1323304998709522433
1323304998827012097
1323304998856515585
1323304998739058689
1323304998835523586
1323304998902603776
1323304998885859329
1323305000894803969
1323305000789925888
1323305000945225733
1323305001087864832
1323305000890626050
1323305000936722433
1323305000962019330
1323305000878198785
1323305001108819969
1323305001180016641
1323305001276575744
1323305001209516039
1323305001419157505
1323305001360465922
1323305001557643265
1323305001582764038
1323305001607929857
1323305001536655360
1323305001687678976
1323305001721057280
1323305001033281537
1323305001922502661
1323305001981235200
1323305001964441603
1323305001998032896
1323305001918365698
1323305002149040128
1323305001909932032
1323305002216075264
1323305002522288128
1323305002497118213
1323305002455150592
1323305002631270401
1323305002753052677
1323305002769674240
1323305002924998667
1323305002979393536
1323305003025649672
1323305003159834625
1323305002861879296
1323305003151339520
1323305003193454594
1323305003100983296
1323305003172417537
1323305003201650688
1323305003226976256
1323305003239485441
1323305003256143873
1323305003176599552
1323305002744471553
1323305004967493632
1323305005135273984
1323305005261058048
1323305004950695938
1323305005303140355
1323305005361876999
1323305005579935744
1323305005512859648
1323305005588369411
1323305005634461697
1323305005894565889
1323305005898715138
1323305005861011458
1323305006129434624
1323305006137827329
1323305006234136577
1323305006401990659
1323305006313934850
1323305006422847488
1323305006389481474
1323305006204932097
1323305006511083520
1323305006636945408
1323305006653603841
1323305006733402113
1323305006603317248
1323305006792122368
1323305006796328962
1323305006766956545
1323305006901141506
1323305006867587073
1323305006959919106
1323305007039586309
1323305006741757954
1323305007337279488
1323305007307984898
1323305007253463040
1323305007429672961
1323305007404453894
1323305007253475330
1323305007156928512
1323305007651848192
1323305007618363396
1323305007890997249
1323305007857414144
1323305007979106309
1323305008176181249
1323305008251633665
1323305008184627202
1323305008373387264
1323305009233231874
1323305008448696320
1323305009245728768
1323305009346392065
1323305009375830019
1323305009283551234
1323305009388441600
1323305009606397952
1323305009526804482
1323305009619107841
1323305009706983424
1323305009744936960
1323305009778319360
1323305009828777985
1323305009635840001
1323305009958834186
1323305009996582913
1323305010101317632
1323305009996537863
1323305009988071424
1323305010197835776
1323305010210480130
1323305010248241152
1323305010092978177
1323305010273267713
1323305010399125504
1323305010600566784
1323305010613030912
1323305010713649152
1323305010638295040
1323305010755739655
1323305010843639812
1323305010868948993
1323305010822864896
1323305010835394560
1323305010814484481
1323305011003228161
1323305010957033475
1323305010923524106
1323305011166633986
1323305011141582849
1323305011267268608
1323305011460231168
1323305011431002119
1323305011007393792
1323305011502342146
1323305011506565120
1323305011808489472
1323305011867242496
1323305011980365824
1323305013406519299
1323305013431705603
1323305013502971905
1323305012655783937
1323305013603635206
1323305013683212288
1323305013691768834
1323305013473472512
1323305013632999426
1323305013859536899
1323305013863538688
1323305013758885888
1323305013960126464
1323305014014709770
1323305014077591553
1323305013997830144
1323305014123728898
1323305014291501059
1323305014278918146
1323305014195064832
1323305014257946627
1323305014207676417
1323305013502976001
1323305014308171776
1323305014476099589
1323305014413111297
1323305014648053761
1323305014736097288
1323305014857662470
1323305014908145664
1323305014861901834
1323305014769586176
1323305014731964416
1323305014060670976
1323305015008702464
1323305015159738371
1323305015214223361
1323305015298109440
1323305015340109824
1323305015327531008
1323305015755288577
1323305015591796738
1323305015700836361
1323305015814070273
1323305015918927874
1323305015969259520
1323305015977672704
1323305015965065217
1323305016136904704
1323305016111865856
###Markdown
Use the REST API to Collect Tweets Authorize the REST API
###Code
rest_auth = twitter.oauth.OAuth(OAUTH_TOKEN,OATH_TOKEN_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
rest_api = twitter.Twitter(auth=rest_auth)
###Output
_____no_output_____
###Markdown
Define the query for the REST API
###Code
count = 100 #number of returned tweets, default and max is 100
geocode = "38.4392897,-78.9412224,50mi" # defin the location, in Harrisonburg, VA
q = "election" #define the keywords, tweets contain election
###Output
_____no_output_____
###Markdown
The collected tweets will contain 'election' AND be located in Harrisonburg, VA
###Code
search_results = rest_api.search.tweets( count=count,q=q, geocode=geocode) #you can use both q and geocode
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at'])# print the date of the collected tweets
except:
pass
###Output
'Mon Nov 02 16:44:48 +0000 2020'
'Mon Nov 02 16:44:40 +0000 2020'
'Mon Nov 02 16:44:23 +0000 2020'
'Mon Nov 02 16:42:23 +0000 2020'
'Mon Nov 02 16:41:27 +0000 2020'
'Mon Nov 02 16:41:03 +0000 2020'
'Mon Nov 02 16:39:15 +0000 2020'
'Mon Nov 02 16:37:36 +0000 2020'
'Mon Nov 02 16:36:23 +0000 2020'
'Mon Nov 02 16:35:59 +0000 2020'
'Mon Nov 02 16:35:51 +0000 2020'
'Mon Nov 02 16:35:42 +0000 2020'
'Mon Nov 02 16:35:39 +0000 2020'
'Mon Nov 02 16:34:16 +0000 2020'
'Mon Nov 02 16:33:57 +0000 2020'
'Mon Nov 02 16:32:29 +0000 2020'
'Mon Nov 02 16:32:14 +0000 2020'
'Mon Nov 02 16:32:07 +0000 2020'
'Mon Nov 02 16:31:01 +0000 2020'
'Mon Nov 02 16:30:45 +0000 2020'
'Mon Nov 02 16:30:23 +0000 2020'
'Mon Nov 02 16:29:40 +0000 2020'
'Mon Nov 02 16:29:33 +0000 2020'
'Mon Nov 02 16:29:21 +0000 2020'
'Mon Nov 02 16:29:08 +0000 2020'
'Mon Nov 02 16:29:05 +0000 2020'
'Mon Nov 02 16:27:55 +0000 2020'
'Mon Nov 02 16:27:55 +0000 2020'
'Mon Nov 02 16:27:16 +0000 2020'
'Mon Nov 02 16:27:03 +0000 2020'
'Mon Nov 02 16:26:33 +0000 2020'
'Mon Nov 02 16:26:31 +0000 2020'
'Mon Nov 02 16:25:07 +0000 2020'
'Mon Nov 02 16:25:06 +0000 2020'
'Mon Nov 02 16:24:45 +0000 2020'
'Mon Nov 02 16:24:44 +0000 2020'
'Mon Nov 02 16:24:34 +0000 2020'
'Mon Nov 02 16:24:29 +0000 2020'
'Mon Nov 02 16:23:49 +0000 2020'
'Mon Nov 02 16:23:49 +0000 2020'
'Mon Nov 02 16:23:32 +0000 2020'
'Mon Nov 02 16:23:31 +0000 2020'
'Mon Nov 02 16:23:19 +0000 2020'
'Mon Nov 02 16:22:32 +0000 2020'
'Mon Nov 02 16:22:12 +0000 2020'
'Mon Nov 02 16:22:07 +0000 2020'
'Mon Nov 02 16:21:55 +0000 2020'
'Mon Nov 02 16:21:33 +0000 2020'
'Mon Nov 02 16:20:25 +0000 2020'
'Mon Nov 02 16:20:05 +0000 2020'
'Mon Nov 02 16:19:54 +0000 2020'
'Mon Nov 02 16:19:36 +0000 2020'
'Mon Nov 02 16:18:40 +0000 2020'
'Mon Nov 02 16:16:33 +0000 2020'
'Mon Nov 02 16:15:26 +0000 2020'
'Mon Nov 02 16:13:50 +0000 2020'
'Mon Nov 02 16:13:24 +0000 2020'
'Mon Nov 02 16:13:05 +0000 2020'
'Mon Nov 02 16:11:48 +0000 2020'
'Mon Nov 02 16:10:50 +0000 2020'
'Mon Nov 02 16:09:48 +0000 2020'
'Mon Nov 02 16:09:42 +0000 2020'
'Mon Nov 02 16:08:32 +0000 2020'
'Mon Nov 02 16:07:02 +0000 2020'
'Mon Nov 02 16:06:55 +0000 2020'
'Mon Nov 02 16:06:43 +0000 2020'
'Mon Nov 02 16:05:33 +0000 2020'
'Mon Nov 02 16:05:19 +0000 2020'
'Mon Nov 02 16:04:53 +0000 2020'
'Mon Nov 02 16:04:41 +0000 2020'
'Mon Nov 02 16:04:13 +0000 2020'
'Mon Nov 02 16:03:24 +0000 2020'
'Mon Nov 02 16:03:04 +0000 2020'
'Mon Nov 02 16:02:25 +0000 2020'
'Mon Nov 02 16:02:21 +0000 2020'
'Mon Nov 02 16:01:04 +0000 2020'
###Markdown
Continue fetching earlier tweets with the same query. YOU WILL REACH YOUR RATE LIMIT VERY FAST
###Code
since_id_old = 0
while(since_id_new != since_id_old):
since_id_old = since_id_new
search_results = rest_api.search.tweets( count=count,q=q,
geocode=geocode, max_id= since_id_new)
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at']) # print the date of the collected tweets
except:
pass
###Output
'Mon Nov 02 00:52:26 +0000 2020'
'Mon Nov 02 00:51:16 +0000 2020'
'Mon Nov 02 00:50:44 +0000 2020'
'Mon Nov 02 00:49:20 +0000 2020'
'Mon Nov 02 00:46:02 +0000 2020'
'Mon Nov 02 00:43:02 +0000 2020'
'Mon Nov 02 00:41:22 +0000 2020'
'Mon Nov 02 00:41:07 +0000 2020'
'Mon Nov 02 00:40:21 +0000 2020'
'Mon Nov 02 00:36:32 +0000 2020'
'Mon Nov 02 00:35:59 +0000 2020'
'Mon Nov 02 00:34:24 +0000 2020'
'Mon Nov 02 00:33:37 +0000 2020'
'Mon Nov 02 00:31:33 +0000 2020'
'Mon Nov 02 00:28:27 +0000 2020'
'Mon Nov 02 00:28:15 +0000 2020'
'Mon Nov 02 00:28:13 +0000 2020'
'Mon Nov 02 00:27:17 +0000 2020'
'Mon Nov 02 00:26:04 +0000 2020'
'Mon Nov 02 00:23:07 +0000 2020'
'Mon Nov 02 00:22:05 +0000 2020'
'Mon Nov 02 00:21:26 +0000 2020'
'Mon Nov 02 00:20:43 +0000 2020'
'Mon Nov 02 00:19:05 +0000 2020'
'Mon Nov 02 00:18:37 +0000 2020'
'Mon Nov 02 00:16:46 +0000 2020'
'Mon Nov 02 00:13:57 +0000 2020'
'Mon Nov 02 00:13:24 +0000 2020'
'Mon Nov 02 00:11:26 +0000 2020'
'Mon Nov 02 00:09:37 +0000 2020'
'Mon Nov 02 00:07:41 +0000 2020'
'Mon Nov 02 00:07:30 +0000 2020'
'Mon Nov 02 00:05:45 +0000 2020'
'Mon Nov 02 00:04:19 +0000 2020'
'Mon Nov 02 00:03:43 +0000 2020'
'Mon Nov 02 00:02:55 +0000 2020'
'Mon Nov 02 00:00:14 +0000 2020'
'Sun Nov 01 23:54:07 +0000 2020'
'Sun Nov 01 23:53:49 +0000 2020'
'Sun Nov 01 23:52:43 +0000 2020'
'Sun Nov 01 23:49:21 +0000 2020'
'Sun Nov 01 23:48:45 +0000 2020'
'Sun Nov 01 23:46:48 +0000 2020'
'Sun Nov 01 23:46:37 +0000 2020'
'Sun Nov 01 23:46:05 +0000 2020'
'Sun Nov 01 23:44:56 +0000 2020'
'Sun Nov 01 23:44:20 +0000 2020'
'Sun Nov 01 23:42:40 +0000 2020'
'Sun Nov 01 23:42:21 +0000 2020'
'Sun Nov 01 23:40:35 +0000 2020'
'Sun Nov 01 23:39:17 +0000 2020'
'Sun Nov 01 23:34:40 +0000 2020'
'Sun Nov 01 23:33:08 +0000 2020'
'Sun Nov 01 23:29:44 +0000 2020'
'Sun Nov 01 23:28:10 +0000 2020'
'Sun Nov 01 23:28:02 +0000 2020'
'Sun Nov 01 23:27:21 +0000 2020'
'Sun Nov 01 23:25:59 +0000 2020'
'Sun Nov 01 23:21:22 +0000 2020'
'Sun Nov 01 23:21:01 +0000 2020'
'Sun Nov 01 23:15:20 +0000 2020'
'Sun Nov 01 23:08:14 +0000 2020'
'Sun Nov 01 23:07:47 +0000 2020'
'Sun Nov 01 23:04:53 +0000 2020'
'Sun Nov 01 23:00:47 +0000 2020'
'Sun Nov 01 23:00:39 +0000 2020'
'Sun Nov 01 22:56:52 +0000 2020'
'Sun Nov 01 22:51:29 +0000 2020'
'Sun Nov 01 22:50:04 +0000 2020'
'Sun Nov 01 22:50:01 +0000 2020'
'Sun Nov 01 22:48:34 +0000 2020'
'Sun Nov 01 22:46:53 +0000 2020'
'Sun Nov 01 22:43:52 +0000 2020'
'Sun Nov 01 22:38:00 +0000 2020'
'Sun Nov 01 22:37:24 +0000 2020'
'Sun Nov 01 22:36:20 +0000 2020'
'Sun Nov 01 22:36:14 +0000 2020'
'Sun Nov 01 22:35:33 +0000 2020'
'Sun Nov 01 22:34:34 +0000 2020'
'Sun Nov 01 22:34:18 +0000 2020'
'Sun Nov 01 22:32:35 +0000 2020'
'Sun Nov 01 22:32:14 +0000 2020'
'Sun Nov 01 22:32:02 +0000 2020'
'Sun Nov 01 22:30:59 +0000 2020'
'Sun Nov 01 22:28:19 +0000 2020'
'Sun Nov 01 22:27:33 +0000 2020'
'Sun Nov 01 22:26:58 +0000 2020'
'Sun Nov 01 22:26:51 +0000 2020'
'Sun Nov 01 22:26:19 +0000 2020'
'Sun Nov 01 22:26:00 +0000 2020'
'Sun Nov 01 22:25:20 +0000 2020'
'Sun Nov 01 22:24:46 +0000 2020'
'Sun Nov 01 22:24:09 +0000 2020'
'Sun Nov 01 22:24:00 +0000 2020'
'Sun Nov 01 22:23:59 +0000 2020'
'Sun Nov 01 22:21:12 +0000 2020'
'Sun Nov 01 22:20:37 +0000 2020'
'Sun Nov 01 22:20:03 +0000 2020'
'Sun Nov 01 22:18:28 +0000 2020'
'Sun Nov 01 22:18:15 +0000 2020'
'Sun Nov 01 22:17:10 +0000 2020'
'Sun Nov 01 22:16:29 +0000 2020'
'Sun Nov 01 22:16:25 +0000 2020'
'Sun Nov 01 22:15:46 +0000 2020'
'Sun Nov 01 22:14:04 +0000 2020'
'Sun Nov 01 22:10:42 +0000 2020'
'Sun Nov 01 22:04:01 +0000 2020'
'Sun Nov 01 22:03:56 +0000 2020'
'Sun Nov 01 22:03:06 +0000 2020'
'Sun Nov 01 22:00:08 +0000 2020'
'Sun Nov 01 21:58:00 +0000 2020'
'Sun Nov 01 21:55:15 +0000 2020'
'Sun Nov 01 21:52:31 +0000 2020'
'Sun Nov 01 21:51:47 +0000 2020'
'Sun Nov 01 21:50:26 +0000 2020'
'Sun Nov 01 21:46:03 +0000 2020'
'Sun Nov 01 21:41:54 +0000 2020'
'Sun Nov 01 21:40:25 +0000 2020'
'Sun Nov 01 21:40:13 +0000 2020'
'Sun Nov 01 21:39:35 +0000 2020'
'Sun Nov 01 21:39:16 +0000 2020'
'Sun Nov 01 21:38:46 +0000 2020'
'Sun Nov 01 21:36:51 +0000 2020'
'Sun Nov 01 21:36:28 +0000 2020'
'Sun Nov 01 21:35:53 +0000 2020'
'Sun Nov 01 21:34:59 +0000 2020'
'Sun Nov 01 21:34:03 +0000 2020'
'Sun Nov 01 21:33:47 +0000 2020'
'Sun Nov 01 21:32:58 +0000 2020'
'Sun Nov 01 21:32:47 +0000 2020'
'Sun Nov 01 21:31:35 +0000 2020'
'Sun Nov 01 21:30:39 +0000 2020'
'Sun Nov 01 21:29:37 +0000 2020'
'Sun Nov 01 21:29:19 +0000 2020'
'Sun Nov 01 21:28:56 +0000 2020'
'Sun Nov 01 21:28:22 +0000 2020'
'Sun Nov 01 21:27:34 +0000 2020'
'Sun Nov 01 21:25:46 +0000 2020'
'Sun Nov 01 21:25:25 +0000 2020'
'Sun Nov 01 21:25:20 +0000 2020'
'Sun Nov 01 21:23:31 +0000 2020'
'Sun Nov 01 21:21:59 +0000 2020'
'Sun Nov 01 21:20:27 +0000 2020'
'Sun Nov 01 21:18:29 +0000 2020'
'Sun Nov 01 21:17:56 +0000 2020'
'Sun Nov 01 21:15:23 +0000 2020'
'Sun Nov 01 21:13:30 +0000 2020'
'Sun Nov 01 21:11:20 +0000 2020'
'Sun Nov 01 21:09:51 +0000 2020'
'Sun Nov 01 21:09:43 +0000 2020'
'Sun Nov 01 21:08:11 +0000 2020'
'Sun Nov 01 21:07:20 +0000 2020'
'Sun Nov 01 21:06:45 +0000 2020'
'Sun Nov 01 21:06:15 +0000 2020'
'Sun Nov 01 21:01:16 +0000 2020'
'Sun Nov 01 20:59:48 +0000 2020'
'Sun Nov 01 20:58:20 +0000 2020'
'Sun Nov 01 20:57:30 +0000 2020'
'Sun Nov 01 20:57:05 +0000 2020'
'Sun Nov 01 20:54:38 +0000 2020'
'Sun Nov 01 20:53:55 +0000 2020'
'Sun Nov 01 20:53:54 +0000 2020'
'Sun Nov 01 20:51:48 +0000 2020'
'Sun Nov 01 20:51:28 +0000 2020'
'Sun Nov 01 20:51:23 +0000 2020'
'Sun Nov 01 20:50:47 +0000 2020'
'Sun Nov 01 20:50:08 +0000 2020'
'Sun Nov 01 20:48:25 +0000 2020'
'Sun Nov 01 20:48:04 +0000 2020'
'Sun Nov 01 20:47:33 +0000 2020'
'Sun Nov 01 20:44:43 +0000 2020'
'Sun Nov 01 20:44:34 +0000 2020'
'Sun Nov 01 20:44:33 +0000 2020'
'Sun Nov 01 20:43:37 +0000 2020'
'Sun Nov 01 20:42:32 +0000 2020'
'Sun Nov 01 20:41:51 +0000 2020'
'Sun Nov 01 20:41:08 +0000 2020'
'Sun Nov 01 20:40:12 +0000 2020'
'Sun Nov 01 20:37:23 +0000 2020'
'Sun Nov 01 20:36:44 +0000 2020'
'Sun Nov 01 20:32:07 +0000 2020'
'Sun Nov 01 20:31:39 +0000 2020'
'Sun Nov 01 20:31:21 +0000 2020'
'Sun Nov 01 20:30:11 +0000 2020'
'Sun Nov 01 20:30:00 +0000 2020'
'Sun Nov 01 20:29:50 +0000 2020'
'Sun Nov 01 20:29:10 +0000 2020'
'Sun Nov 01 20:27:51 +0000 2020'
'Sun Nov 01 20:26:01 +0000 2020'
'Sun Nov 01 20:21:29 +0000 2020'
'Sun Nov 01 20:19:41 +0000 2020'
'Sun Nov 01 20:18:46 +0000 2020'
'Sun Nov 01 20:17:45 +0000 2020'
'Sun Nov 01 20:15:06 +0000 2020'
'Sun Nov 01 20:14:09 +0000 2020'
'Sun Nov 01 20:13:22 +0000 2020'
'Sun Nov 01 20:12:15 +0000 2020'
'Sun Nov 01 20:11:18 +0000 2020'
'Sun Nov 01 20:10:09 +0000 2020'
'Sun Nov 01 20:08:39 +0000 2020'
'Sun Nov 01 20:08:16 +0000 2020'
'Sun Nov 01 20:06:30 +0000 2020'
'Sun Nov 01 20:06:00 +0000 2020'
'Sun Nov 01 20:04:35 +0000 2020'
'Sun Nov 01 20:03:42 +0000 2020'
'Sun Nov 01 20:02:37 +0000 2020'
'Sun Nov 01 19:57:53 +0000 2020'
'Sun Nov 01 19:56:36 +0000 2020'
'Sun Nov 01 19:54:37 +0000 2020'
'Sun Nov 01 19:54:22 +0000 2020'
'Sun Nov 01 19:53:59 +0000 2020'
'Sun Nov 01 19:53:27 +0000 2020'
'Sun Nov 01 19:52:38 +0000 2020'
'Sun Nov 01 19:51:04 +0000 2020'
'Sun Nov 01 19:50:51 +0000 2020'
'Sun Nov 01 19:50:45 +0000 2020'
'Sun Nov 01 19:49:58 +0000 2020'
'Sun Nov 01 19:49:03 +0000 2020'
'Sun Nov 01 19:48:32 +0000 2020'
'Sun Nov 01 19:48:08 +0000 2020'
'Sun Nov 01 19:47:43 +0000 2020'
'Sun Nov 01 19:46:43 +0000 2020'
'Sun Nov 01 19:45:06 +0000 2020'
'Sun Nov 01 19:44:55 +0000 2020'
'Sun Nov 01 19:44:39 +0000 2020'
'Sun Nov 01 19:43:43 +0000 2020'
'Sun Nov 01 19:43:05 +0000 2020'
'Sun Nov 01 19:41:40 +0000 2020'
'Sun Nov 01 19:40:36 +0000 2020'
'Sun Nov 01 19:40:27 +0000 2020'
'Sun Nov 01 19:40:19 +0000 2020'
'Sun Nov 01 19:37:46 +0000 2020'
'Sun Nov 01 19:36:39 +0000 2020'
'Sun Nov 01 19:36:33 +0000 2020'
'Sun Nov 01 19:35:04 +0000 2020'
'Sun Nov 01 19:34:50 +0000 2020'
'Sun Nov 01 19:34:11 +0000 2020'
'Sun Nov 01 19:33:51 +0000 2020'
'Sun Nov 01 19:30:48 +0000 2020'
'Sun Nov 01 19:30:24 +0000 2020'
'Sun Nov 01 19:30:18 +0000 2020'
'Sun Nov 01 19:26:49 +0000 2020'
'Sun Nov 01 19:25:43 +0000 2020'
'Sun Nov 01 19:25:39 +0000 2020'
'Sun Nov 01 19:25:22 +0000 2020'
'Sun Nov 01 19:24:26 +0000 2020'
'Sun Nov 01 19:21:34 +0000 2020'
'Sun Nov 01 19:18:00 +0000 2020'
'Sun Nov 01 19:17:54 +0000 2020'
'Sun Nov 01 19:15:40 +0000 2020'
'Sun Nov 01 19:13:18 +0000 2020'
'Sun Nov 01 19:11:10 +0000 2020'
'Sun Nov 01 19:08:05 +0000 2020'
'Sun Nov 01 19:07:47 +0000 2020'
'Sun Nov 01 19:06:03 +0000 2020'
'Sun Nov 01 19:05:03 +0000 2020'
'Sun Nov 01 19:05:02 +0000 2020'
'Sun Nov 01 19:03:34 +0000 2020'
'Sun Nov 01 19:01:24 +0000 2020'
'Sun Nov 01 19:00:07 +0000 2020'
'Sun Nov 01 19:00:01 +0000 2020'
'Sun Nov 01 18:59:12 +0000 2020'
'Sun Nov 01 18:58:10 +0000 2020'
'Sun Nov 01 18:57:35 +0000 2020'
'Sun Nov 01 18:57:13 +0000 2020'
'Sun Nov 01 18:56:02 +0000 2020'
'Sun Nov 01 18:55:58 +0000 2020'
'Sun Nov 01 18:53:38 +0000 2020'
'Sun Nov 01 18:53:01 +0000 2020'
'Sun Nov 01 18:51:16 +0000 2020'
'Sun Nov 01 18:51:04 +0000 2020'
'Sun Nov 01 18:50:32 +0000 2020'
'Sun Nov 01 18:50:21 +0000 2020'
'Sun Nov 01 18:48:10 +0000 2020'
'Sun Nov 01 18:47:21 +0000 2020'
'Sun Nov 01 18:46:52 +0000 2020'
'Sun Nov 01 18:46:05 +0000 2020'
'Sun Nov 01 18:45:00 +0000 2020'
'Sun Nov 01 18:43:19 +0000 2020'
'Sun Nov 01 18:40:57 +0000 2020'
###Markdown
View the Collected Tweets Print the number of tweets and unique twitter users
###Code
print(tweet_collection.estimated_document_count())# number of tweets collected
user_cursor = tweet_collection.distinct("user.id")
print (len(user_cursor)) # number of unique Twitter users
###Output
3529
3275
###Markdown
Create a text index and print the Tweets containing specific keywords.
###Code
tweet_collection.create_index([("text", pymongo.TEXT)], name='text_index', default_language='english') # create a text index
###Output
_____no_output_____
###Markdown
Create a cursor to query tweets with the created index
###Code
tweet_cursor = tweet_collection.find({"$text": {"$search": "vote"}}) # return tweets contain vote
###Output
_____no_output_____
###Markdown
Use pprint to display tweets
###Code
for document in tweet_cursor[0:10]: # display the first 10 tweets from the query
try:
print ('----')
#pprint (document) # use pprint to print the entire tweet document
print ('name:', document["user"]["name"]) # user name
print ('text:', document["text"]) # tweets
except:
print ("***error in encoding")
pass
tweet_cursor = tweet_collection.find({"$text": {"$search": "vote"}}) # return tweets contain vote
###Output
_____no_output_____
###Markdown
Use pandas to display tweets
###Code
tweet_df = pd.DataFrame(list(tweet_cursor ))
tweet_df[:10] #display the first 10 tweets
tweet_df["favorite_count"].hist() # create a histogram show the favorite count
###Output
_____no_output_____ |
module4/ssignment_applied_modeling_4.ipynb | ###Markdown
Setting up for the model
###Code
#Setting up matrices for models:
target = ['critic_score']
leaky = ['critic_count','user_score','user_count','name']
features = train.columns.drop(target+leaky)
features
X_train = train[features]
y_train = np.ravel(train[target])
X_val = val[features]
y_val = np.ravel(val[target])
X_test = test[features]
y_test = np.ravel(test[target])
#Some verification:
train.shape,X_train.shape
###Output
_____no_output_____
###Markdown
Baseline
###Code
from sklearn.metrics import mean_absolute_error as mae
from sklearn.metrics import r2_score
X_train.head()
y_baseline = [y_train.mean()]*len(y_val)
print(f'Mean absolute error: {mae(y_val,y_baseline)}')
print(f'R2 score: {r2_score(y_val,y_baseline)}')
###Output
Mean absolute error: 11.458943440022388
R2 score: -9.199689323002858e-06
###Markdown
Model
###Code
np.ravel(y_val)
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
process = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
)
X_train_processed = process.fit_transform(X_train)
X_val_processed = process.transform(X_val)
model = RandomForestRegressor(
n_estimators=100
)
model.fit(X_train_processed,y_train)
model.score(X_val_processed,y_val)
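# Optional sketch: inspect which features the fitted random forest relied on most.
# Assumes the `model` and `X_train` objects defined above; the encoder/imputer pipeline
# keeps the original column order, so the importances line up with X_train.columns.
for name, importance in sorted(zip(X_train.columns, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f'{name}: {importance:.3f}')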
#Get one observation to put through the predictor:
print(y_train[0])
X_train.iloc[[0]]
# process.transform(X_val)
train
v
###Output
_____no_output_____ |
L3E3.ipynb | ###Markdown
Assignment 3 - Exercise 3Bruno Kiyoshi Ynumaru - 201805995
###Code
prob_statement = """Uma empresa est´a considerando 5 oportunidades de investimento distintas. A sa´ıda de caixa e o valor
presente l´ıquido (VPL) destes investimentos s˜ao dados na tabela abaixo (em milh˜oes de d´olares). A
empresa possui $40 milh˜oes para investimento no instante atual (instante 0); ela estima que em um
ano a partir de agora (instante 1) $20 milh˜oes estar˜ao dispon´ıveis para investimento. Os investimentos
podem ser comprados em qualquer fra¸c˜ao. Neste caso, a sa´ıda de caixa e o VPL s˜ao ajustados na
mesma propor¸c˜ao. Formule e resolva um PL que ajude a empresa a maximizar o VPL que pode ser
gerado investindo nos investimentos 1-5.
"""
def fix_statement(str_statement):
list_replacements = [("¸c", "ç"),
("´a", "á"),
("´e", "é"),
("´ı", "í"),
("´o", "ó"),
("´u", "ú"),
("˜a", "ã"),
("˜o", "õ"),
("$", "\\\$")]
for replacement in list_replacements:
str_statement = str_statement.replace(replacement[0], replacement[1])
return str_statement
prob_statement = fix_statement(prob_statement)
print(prob_statement)
###Output
Uma empresa está considerando 5 oportunidades de investimento distintas. A saída de caixa e o valor
presente líquido (VPL) destes investimentos são dados na tabela abaixo (em milhões de dólares). A
empresa possui \\$40 milhões para investimento no instante atual (instante 0); ela estima que em um
ano a partir de agora (instante 1) \\$20 milhões estarão disponíveis para investimento. Os investimentos
podem ser comprados em qualquer fração. Neste caso, a saída de caixa e o VPL são ajustados na
mesma proporção. Formule e resolva um PL que ajude a empresa a maximizar o VPL que pode ser
gerado investindo nos investimentos 1-5.
###Markdown
A company is considering 5 distinct investment opportunities. The cash outflow and net present value (NPV) of these investments are given in the table below (in millions of dollars). The company has \\$40 million available for investment right now (time 0); it estimates that one year from now (time 1) \\$20 million will be available for investment. The investments can be purchased in any fraction; in that case, the cash outflow and the NPV are scaled in the same proportion. Formulate and solve an LP that helps the company maximize the NPV that can be generated by investing in investments 1-5.
| Investments | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Cash outflow (time 0) | 11 | 53 | 5 | 5 | 29 |
| Cash outflow (time 1) | 3 | 6 | 5 | 1 | 34 |
| NPV | 13 | 16 | 10 | 14 | 39 |
###Code
import gurobipy as gp
from gurobipy import GRB, Model
# Create a new model
m = Model("Wyndor_Glass")
# Create variables
x1 = m.addVar(lb=0, vtype=GRB.CONTINUOUS, name="investment 1 (share)")
x2 = m.addVar(lb=0, vtype=GRB.CONTINUOUS, name="investment 2 (share)")
x3 = m.addVar(lb=0, vtype=GRB.CONTINUOUS, name="investment 3 (share)")
x4 = m.addVar(lb=0, vtype=GRB.CONTINUOUS, name="investment 4 (share)")
x5 = m.addVar(lb=0, vtype=GRB.CONTINUOUS, name="investment 5 (share)")
m.setObjective(x1 * 13 + x2 * 16 + x3 * 10 + x4 * 14 + x5 * 39,
GRB.MAXIMIZE)
# Add constraints
m.addConstr(x1 * 11 + x2 * 53 + x3 * 5 + x4 * 5 + x5 * 29 <= 40, 'Funds on time 0')
m.addConstr(x1 * 3 + x2 * 6 + x3 * 5 + x4 * 1 + x5 * 34 <= 20, 'Funds on time 1')
m.addConstr(x1 <= 1, 'max share of investment 1')
m.addConstr(x2 <= 1, 'max share of investment 2')
m.addConstr(x3 <= 1, 'max share of investment 3')
m.addConstr(x4 <= 1, 'max share of investment 4')
m.addConstr(x5 <= 1, 'max share of investment 5')
m.optimize()
for v in m.getVars():
print(f'{v.varName}, {v.x}')
print(f'Obj: {m.objVal}')
###Output
investment 1 (share), 1.0
investment 2 (share), 0.20085995085995084
investment 3 (share), 1.0
investment 4 (share), 1.0
investment 5 (share), 0.2880835380835381
Obj: 51.449017199017206
|
_site/notes/notes_ipynb/docs/lecture3-linear-regression.ipynb | ###Markdown
Lecture 3: Optimization and Linear Regression Applied Machine Learning__Volodymyr Kuleshov__Cornell Tech Part 1: Optimization and Calculus BackgroundIn the previous lecture, we learned what a supervised machine learning problem is.Before we turn our attention to Linear Regression, we will first dive deeper into the question of optimization. Review: Components of A Supervised Machine Learning ProblemAt a high level, a supervised machine learning problem has the following structure:$$ \text{Dataset} + \underbrace{\text{Learning Algorithm}}_\text{Model Class + Objective + Optimizer } \to \text{Predictive Model} $$The predictive model is chosen to model the relationship between inputs and targets. For instance, it can predict future targets. Optimizer: NotationAt a high-level an optimizer takes * an objective $J$ (also called a loss function) and * a model class $\mathcal{M}$ and finds a model $f \in \mathcal{M}$ with the smallest value of the objective $J$.\begin{align*}\min_{f \in \mathcal{M}} J(f)\end{align*}Intuitively, this is the function that best "fits" the data on the training dataset $\mathcal{D} = \{(x^{(i)}, y^{(i)}) \mid i = 1,2,...,n\}$. We will use a quadratic function as our running example for an objective $J$.
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [8, 4]
def quadratic_function(theta):
"""The cost function, J(theta)."""
return 0.5*(2*theta-1)**2
###Output
_____no_output_____
###Markdown
We can visualize it.
###Code
# First construct a grid of theta1 parameter pairs and their corresponding
# cost function values.
thetas = np.linspace(-0.2,1,10)
f_vals = quadratic_function(thetas[:,np.newaxis])
plt.plot(thetas, f_vals)
plt.xlabel('Theta')
plt.ylabel('Objective value')
plt.title('Simple quadratic function')
###Output
_____no_output_____
###Markdown
Calculus Review: DerivativesRecall that the derivative $$\frac{d f(\theta_0)}{d \theta}$$ of a univariate function $f : \mathbb{R} \to \mathbb{R}$ is the instantaneous rate of change of the function $f(\theta)$ with respect to its parameter $\theta$ at the point $\theta_0$.
###Code
def quadratic_derivative(theta):
return (2*theta-1)*2
df0 = quadratic_derivative(np.array([[0]])) # derivative at zero
f0 = quadratic_function(np.array([[0]]))
line_length = 0.2
plt.plot(thetas, f_vals)
plt.annotate('', xytext=(0-line_length, f0-line_length*df0), xy=(0+line_length, f0+line_length*df0),
arrowprops={'arrowstyle': '-', 'lw': 1.5}, va='center', ha='center')
plt.xlabel('Theta')
plt.ylabel('Objective value')
plt.title('Simple quadratic function')
pts = np.array([[0, 0.5, 0.8]]).reshape((3,1))
df0s = quadratic_derivative(pts)
f0s = quadratic_function(pts)
plt.plot(thetas, f_vals)
for pt, f0, df0 in zip(pts.flatten(), f0s.flatten(), df0s.flatten()):
plt.annotate('', xytext=(pt-line_length, f0-line_length*df0), xy=(pt+line_length, f0+line_length*df0),
arrowprops={'arrowstyle': '-', 'lw': 1}, va='center', ha='center')
plt.xlabel('Theta')
plt.ylabel('Objective value')
plt.title('Simple quadratic function')
###Output
_____no_output_____
###Markdown
Calculus Review: Partial DerivativesThe partial derivative $$\frac{\partial f(\theta_0)}{\partial \theta_j}$$ of a multivariate function $f : \mathbb{R}^d \to \mathbb{R}$ is the derivative of $f$ with respect to $\theta_j$ while all the other inputs $\theta_k$ for $k\neq j$ are fixed. Calculus Review: The GradientThe gradient $\nabla_\theta f$ further extends the derivative to multivariate functions $f : \mathbb{R}^d \to \mathbb{R}$, and is defined at a point $\theta_0$ as$$ \nabla_\theta f (\theta_0) = \begin{bmatrix}\frac{\partial f(\theta_0)}{\partial \theta_1} \\\frac{\partial f(\theta_0)}{\partial \theta_2} \\\vdots \\\frac{\partial f(\theta_0)}{\partial \theta_d}\end{bmatrix}.$$The $j$-th entry of the vector $\nabla_\theta f (\theta_0)$ is the partial derivative $\frac{\partial f(\theta_0)}{\partial \theta_j}$ of $f$ with respect to the $j$-th component of $\theta$. We will use a quadratic function as a running example.
###Code
def quadratic_function2d(theta0, theta1):
"""Quadratic objective function, J(theta0, theta1).
The inputs theta0, theta1 are 2d arrays and we evaluate
the objective at each value theta0[i,j], theta1[i,j].
We implement it this way so it's easier to plot the
level curves of the function in 2d.
Parameters:
theta0 (np.array): 2d array of first parameter theta0
theta1 (np.array): 2d array of second parameter theta1
Returns:
fvals (np.array): 2d array of objective function values
fvals is the same dimension as theta0 and theta1.
fvals[i,j] is the value at theta0[i,j] and theta1[i,j].
"""
theta0 = np.atleast_2d(np.asarray(theta0))
theta1 = np.atleast_2d(np.asarray(theta1))
return 0.5*((2*theta1-2)**2 + (theta0-3)**2)
###Output
_____no_output_____
###Markdown
Let's visualize this function.
###Code
theta0_grid = np.linspace(-4,7,101)
theta1_grid = np.linspace(-1,4,101)
theta_grid = theta0_grid[np.newaxis,:], theta1_grid[:,np.newaxis]
J_grid = quadratic_function2d(theta0_grid[np.newaxis,:], theta1_grid[:,np.newaxis])
X, Y = np.meshgrid(theta0_grid, theta1_grid)
contours = plt.contour(X, Y, J_grid, 10)
plt.clabel(contours)
plt.axis('equal')
###Output
_____no_output_____
###Markdown
Let's write down the derivative of the quadratic function.
###Code
def quadratic_derivative2d(theta0, theta1):
"""Derivative of quadratic objective function.
The inputs theta0, theta1 are 1d arrays and we evaluate
the derivative at each value theta0[i], theta1[i].
Parameters:
theta0 (np.array): 1d array of first parameter theta0
theta1 (np.array): 1d array of second parameter theta1
Returns:
grads (np.array): 2d array of partial derivatives
grads is of the same size as theta0 and theta1
along first dimension and of size
two along the second dimension.
grads[i,j] is the j-th partial derivative
at input theta0[i], theta1[i].
"""
# this is the gradient of 0.5*((2*theta1-2)**2 + (theta0-3)**2)
grads = np.stack([theta0-3, (2*theta1-2)*2], axis=1)
grads = grads.reshape([len(theta0), 2])
return grads
###Output
_____no_output_____
###Markdown
We can visualize the derivative.
###Code
theta0_pts, theta1_pts = np.array([2.3, -1.35, -2.3]), np.array([2.4, -0.15, 2.75])
dfs = quadratic_derivative2d(theta0_pts, theta1_pts)
line_length = 0.2
contours = plt.contour(X, Y, J_grid, 10)
for theta0_pt, theta1_pt, df0 in zip(theta0_pts, theta1_pts, dfs):
plt.annotate('', xytext=(theta0_pt, theta1_pt),
xy=(theta0_pt-line_length*df0[0], theta1_pt-line_length*df0[1]),
arrowprops={'arrowstyle': '->', 'lw': 2}, va='center', ha='center')
plt.scatter(theta0_pts, theta1_pts)
plt.clabel(contours)
plt.xlabel('Theta0')
plt.ylabel('Theta1')
plt.title('Gradients of the quadratic function')
plt.axis('equal')
###Output
_____no_output_____
###Markdown
Part 1b: Gradient DescentNext, we will use gradients to define an important algorithm called *gradient descent*. Calculus Review: The GradientThe gradient $\nabla_\theta f$ further extends the derivative to multivariate functions $f : \mathbb{R}^d \to \mathbb{R}$, and is defined at a point $\theta_0$ as$$ \nabla_\theta f (\theta_0) = \begin{bmatrix}\frac{\partial f(\theta_0)}{\partial \theta_1} \\\frac{\partial f(\theta_0)}{\partial \theta_2} \\\vdots \\\frac{\partial f(\theta_0)}{\partial \theta_d}\end{bmatrix}.$$The $j$-th entry of the vector $\nabla_\theta f (\theta_0)$ is the partial derivative $\frac{\partial f(\theta_0)}{\partial \theta_j}$ of $f$ with respect to the $j$-th component of $\theta$.
###Code
theta0_pts, theta1_pts = np.array([2.3, -1.35, -2.3]), np.array([2.4, -0.15, 2.75])
dfs = quadratic_derivative2d(theta0_pts, theta1_pts)
line_length = 0.2
contours = plt.contour(X, Y, J_grid, 10)
for theta0_pt, theta1_pt, df0 in zip(theta0_pts, theta1_pts, dfs):
plt.annotate('', xytext=(theta0_pt, theta1_pt),
xy=(theta0_pt-line_length*df0[0], theta1_pt-line_length*df0[1]),
arrowprops={'arrowstyle': '->', 'lw': 2}, va='center', ha='center')
plt.scatter(theta0_pts, theta1_pts)
plt.clabel(contours)
plt.xlabel('Theta0')
plt.ylabel('Theta1')
plt.title('Gradients of the quadratic function')
plt.axis('equal')
###Output
_____no_output_____
###Markdown
Gradient Descent: IntuitionGradient descent is a very common optimization algorithm used in machine learning.The intuition behind gradient descent is to repeatedly obtain the gradient to determine the direction in which the function decreases most steeply and take a step in that direction. Gradient Descent: NotationMore formally, if we want to optimize $J(\theta)$, we start with an initial guess $\theta_0$ for the parameters and repeat the following update until $\theta$ is no longer changing:$$ \theta_i := \theta_{i-1} - \alpha \cdot \nabla_\theta J(\theta_{i-1}). $$As code, this method may look as follows:```pythontheta, theta_prev = random_initialization()while norm(theta - theta_prev) > convergence_threshold: theta_prev = theta theta = theta_prev - step_size * gradient(theta_prev)```In the above algorithm, we stop when $||\theta_i - \theta_{i-1}||$ is small. It's easy to implement this function in numpy.
###Code
convergence_threshold = 2e-1
step_size = 2e-1
theta, theta_prev = np.array([[-2], [3]]), np.array([[0], [0]])
opt_pts = [theta.flatten()]
opt_grads = []
while np.linalg.norm(theta - theta_prev) > convergence_threshold:
# we repeat this while the value of the function is decreasing
theta_prev = theta
gradient = quadratic_derivative2d(*theta).reshape([2,1])
theta = theta_prev - step_size * gradient
opt_pts += [theta.flatten()]
opt_grads += [gradient.flatten()]
###Output
_____no_output_____
###Markdown
We can now visualize gradient descent.
###Code
opt_pts = np.array(opt_pts)
opt_grads = np.array(opt_grads)
contours = plt.contour(X, Y, J_grid, 10)
plt.clabel(contours)
plt.scatter(opt_pts[:,0], opt_pts[:,1])
for opt_pt, opt_grad in zip(opt_pts, opt_grads):
plt.annotate('', xytext=(opt_pt[0], opt_pt[1]),
xy=(opt_pt[0]-0.8*step_size*opt_grad[0], opt_pt[1]-0.8*step_size*opt_grad[1]),
arrowprops={'arrowstyle': '->', 'lw': 2}, va='center', ha='center')
plt.axis('equal')
###Output
_____no_output_____
###Markdown
Part 2: Gradient Descent in Linear ModelsLet's now use gradient descent to derive a supervised learning algorithm for linear models. Review: Gradient DescentIf we want to optimize $J(\theta)$, we start with an initial guess $\theta_0$ for the parameters and repeat the following update:$$ \theta_i := \theta_{i-1} - \alpha \cdot \nabla_\theta J(\theta_{i-1}). $$As code, this method may look as follows:```pythontheta, theta_prev = random_initialization()while norm(theta - theta_prev) > convergence_threshold: theta_prev = theta theta = theta_prev - step_size * gradient(theta_prev)``` Review: Linear Model FamilyRecall that a linear model has the form\begin{align*}y & = \theta_0 + \theta_1 \cdot x_1 + \theta_2 \cdot x_2 + ... + \theta_d \cdot x_d\end{align*}where $x \in \mathbb{R}^d$ is a vector of features and $y$ is the target. The $\theta_j$ are the *parameters* of the model.By using the notation $x_0 = 1$, we can represent the model in a vectorized form$$ f_\theta(x) = \sum_{j=0}^d \theta_j \cdot x_j = \theta^\top x. $$ Let's define our model in Python.
###Code
def f(X, theta):
"""The linear model we are trying to fit.
Parameters:
theta (np.array): d-dimensional vector of parameters
X (np.array): (n,d)-dimensional data matrix
Returns:
y_pred (np.array): n-dimensional vector of predicted targets
"""
return X.dot(theta)
###Output
_____no_output_____
###Markdown
An Objective: Mean Squared ErrorWe pick $\theta$ to minimize the mean squared error (MSE). Slight variants of this objective are also known as the residual sum of squares (RSS) or the sum of squared residuals (SSR).$$J(\theta)= \frac{1}{2n} \sum_{i=1}^n(y^{(i)}-\theta^\top x^{(i)})^2$$In other words, we are looking for the best compromise in $\theta$ over all the data points. Let's implement mean squared error.
###Code
def mean_squared_error(theta, X, y):
"""The cost function, J, describing the goodness of fit.
Parameters:
theta (np.array): d-dimensional vector of parameters
X (np.array): (n,d)-dimensional design matrix
y (np.array): n-dimensional vector of targets
"""
return 0.5*np.mean((y-f(X, theta))**2)
###Output
_____no_output_____
###Markdown
Mean Squared Error: Partial DerivativesLet's work out what a partial derivative is for the MSE error loss for a linear model.\begin{align*}\frac{\partial J(\theta)}{\partial \theta_j} & = \frac{\partial}{\partial \theta_j} \frac{1}{2} \left( f_\theta(x) - y \right)^2 \\& = \left( f_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( f_\theta(x) - y \right) \\& = \left( f_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( \sum_{k=0}^d \theta_k \cdot x_k - y \right) \\& = \left( f_\theta(x) - y \right) \cdot x_j\end{align*} Mean Squared Error: The GradientWe can use this derivation to obtain an expression for the gradient of the MSE for a linear model\begin{align*}\nabla_\theta J (\theta) = \begin{bmatrix}\frac{\partial f(\theta)}{\partial \theta_1} \\\frac{\partial f(\theta)}{\partial \theta_2} \\\vdots \\\frac{\partial f(\theta)}{\partial \theta_d}\end{bmatrix}=\begin{bmatrix}\left( f_\theta(x) - y \right) \cdot x_1 \\\left( f_\theta(x) - y \right) \cdot x_2 \\\vdots \\\left( f_\theta(x) - y \right) \cdot x_d\end{bmatrix}=\left( f_\theta(x) - y \right) \cdot \bf{x}.\end{align*} Let's implement the gradient.
###Code
def mse_gradient(theta, X, y):
"""The gradient of the cost function.
Parameters:
theta (np.array): d-dimensional vector of parameters
X (np.array): (n,d)-dimensional design matrix
y (np.array): n-dimensional vector of targets
Returns:
grad (np.array): d-dimensional gradient of the MSE
"""
return np.mean((f(X, theta) - y) * X.T, axis=1)
###Output
_____no_output_____
###Markdown
The UCI Diabetes DatasetIn this section, we are going to again use the UCI Diabetes Dataset.* For each patient we have access to a measurement of their body mass index (BMI) and a quantitative diabetes risk score (from 0-300). * We are interested in understanding how BMI affects an individual's diabetes risk.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [8, 4]
import numpy as np
import pandas as pd
from sklearn import datasets
# Load the diabetes dataset
X, y = datasets.load_diabetes(return_X_y=True, as_frame=True)
# add an extra column of ones
X['one'] = 1
# Collect 20 data points and only use bmi dimension
X_train = X.iloc[-20:].loc[:, ['bmi', 'one']]
y_train = y.iloc[-20:] / 300
plt.scatter(X_train.loc[:,['bmi']], y_train, color='black')
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
###Output
_____no_output_____
###Markdown
Gradient Descent for Linear RegressionPutting this together with the gradient descent algorithm, we obtain a learning method for training linear models.```pythontheta, theta_prev = random_initialization()while abs(J(theta) - J(theta_prev)) > conv_threshold: theta_prev = theta theta = theta_prev - step_size * (f(x, theta)-y) * x```This update rule is also known as the Least Mean Squares (LMS) or Widrow-Hoff learning rule.
###Code
threshold = 1e-3
step_size = 4e-1
theta, theta_prev = np.array([2,1]), np.ones(2,)
opt_pts = [theta]
opt_grads = []
iter = 0
while np.linalg.norm(theta - theta_prev) > threshold:
if iter % 100 == 0:
print('Iteration %d. MSE: %.6f' % (iter, mean_squared_error(theta, X_train, y_train)))
theta_prev = theta
gradient = mse_gradient(theta, X_train, y_train)
theta = theta_prev - step_size * gradient
opt_pts += [theta]
opt_grads += [gradient]
iter += 1
x_line = np.stack([np.linspace(-0.1, 0.1, 10), np.ones(10,)])
y_line = opt_pts[-1].dot(x_line)
plt.scatter(X_train.loc[:,['bmi']], y_train, color='black')
plt.plot(x_line[0], y_line)
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
###Output
_____no_output_____
###Markdown
Part 3: Ordinary Least SquaresIn practice, there is a more effective way than gradient descent to find linear model parameters.We will see this method here, which will lead to our first non-toy algorithm: Ordinary Least Squares. Review: The GradientThe gradient $\nabla_\theta f$ further extends the derivative to multivariate functions $f : \mathbb{R}^d \to \mathbb{R}$, and is defined at a point $\theta_0$ as$$ \nabla_\theta f (\theta_0) = \begin{bmatrix}\frac{\partial f(\theta_0)}{\partial \theta_1} \\\frac{\partial f(\theta_0)}{\partial \theta_2} \\\vdots \\\frac{\partial f(\theta_0)}{\partial \theta_d}\end{bmatrix}.$$In other words, the $j$-th entry of the vector $\nabla_\theta f (\theta_0)$ is the partial derivative $\frac{\partial f(\theta_0)}{\partial \theta_j}$ of $f$ with respect to the $j$-th component of $\theta$. The UCI Diabetes DatasetIn this section, we are going to again use the UCI Diabetes Dataset.* For each patient we have access to a measurement of their body mass index (BMI) and a quantitative diabetes risk score (from 0-300). * We are interested in understanding how BMI affects an individual's diabetes risk.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [8, 4]
import numpy as np
import pandas as pd
from sklearn import datasets
# Load the diabetes dataset
X, y = datasets.load_diabetes(return_X_y=True, as_frame=True)
# add an extra column of ones
X['one'] = 1
# Collect 20 data points
X_train = X.iloc[-20:]
y_train = y.iloc[-20:]
plt.scatter(X_train.loc[:,['bmi']], y_train, color='black')
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
###Output
_____no_output_____
###Markdown
Notation: Design MatrixMachine learning algorithms are most easily defined in the language of linear algebra. Therefore, it will be useful to represent the entire dataset as one matrix $X \in \mathbb{R}^{n \times d}$, of the form:$$ X = \begin{bmatrix}x^{(1)}_1 & x^{(1)}_2 & \ldots & x^{(1)}_d \\x^{(2)}_1 & x^{(2)}_2 & \ldots & x^{(2)}_d \\\vdots \\x^{(n)}_1 & x^{(n)}_2 & \ldots & x^{(n)}_d\end{bmatrix}=\begin{bmatrix}- & (x^{(1)})^\top & - \\- & (x^{(2)})^\top & - \\& \vdots & \\- & (x^{(n)})^\top & - \\\end{bmatrix}.$$ We can view the design matrix for the diabetes dataset.
###Code
X_train.head()
###Output
_____no_output_____
###Markdown
Notation: Design MatrixSimilarly, we can vectorize the target variables into a vector $y \in \mathbb{R}^n$ of the form$$ y = \begin{bmatrix}y^{(1)} \\y^{(2)} \\\vdots \\y^{(n)}\end{bmatrix}.$$ Squared Error in Matrix FormRecall that we may fit a linear model by choosing $\theta$ that minimizes the squared error:$$J(\theta)=\frac{1}{2}\sum_{i=1}^n(y^{(i)}-\theta^\top x^{(i)})^2$$In other words, we are looking for the best compromise in $\theta$ over all the data points. We can write this sum in matrix-vector form as:$$J(\theta) = \frac{1}{2} (y-X\theta)^\top(y-X\theta) = \frac{1}{2} \|y-X\theta\|^2,$$where $X$ is the design matrix and $\|\cdot\|$ denotes the Euclidean norm. The Gradient of the Squared ErrorWe can compute a gradient for the mean squared error as follows.\begin{align*}\nabla_\theta J(\theta) & = \nabla_\theta \frac{1}{2} (X \theta - y)^\top (X \theta - y) \\& = \frac{1}{2} \nabla_\theta \left( (X \theta)^\top (X \theta) - (X \theta)^\top y - y^\top (X \theta) + y^\top y \right) \\& = \frac{1}{2} \nabla_\theta \left( \theta^\top (X^\top X) \theta - 2(X \theta)^\top y \right) \\& = \frac{1}{2} \left( 2(X^\top X) \theta - 2X^\top y \right) \\& = (X^\top X) \theta - X^\top y\end{align*}We used the facts that $a^\top b = b^\top a$ (line 3), that $\nabla_x b^\top x = b$ (line 4), and that $\nabla_x x^\top A x = 2 A x$ for a symmetric matrix $A$ (line 4). Normal Equations<!-- We know from calculus that a function is minimized when its derivative is set to zero. In our case, our objective function is a (multivariate) quadratic; hence it only has one minimum, which is the global minimum. -->Setting the above derivative to zero, we obtain the *normal equations*:$$ (X^\top X) \theta = X^\top y.$$Hence, the value $\theta^*$ that minimizes this objective is given by:$$ \theta^* = (X^\top X)^{-1} X^\top y.$$ Note that we assumed that the matrix $(X^\top X)$ is invertible; if this is not the case, there are easy ways of addressing this issue. Let's apply the normal equations.
###Code
import numpy as np
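# normal equations: theta* = (X^T X)^{-1} X^T y, computed directly below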
theta_best = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(y_train)
theta_best_df = pd.DataFrame(data=theta_best[np.newaxis, :], columns=X.columns)
theta_best_df
###Output
_____no_output_____
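The note above about $(X^\top X)$ possibly not being invertible deserves a quick illustration. Below is a minimal sketch of two standard NumPy alternatives to forming the explicit inverse, reusing the `X_train` and `y_train` defined earlier (an added illustration, not part of the original lecture code):

```python
import numpy as np

# Moore-Penrose pseudo-inverse: returns the minimum-norm least-squares
# solution even when X^T X is singular or ill-conditioned.
theta_pinv = np.linalg.pinv(X_train).dot(y_train)

# np.linalg.lstsq minimizes ||X theta - y||^2 with an SVD-based solver,
# avoiding the explicit formation and inversion of X^T X altogether.
theta_lstsq, residuals, rank, sing_vals = np.linalg.lstsq(X_train, y_train, rcond=None)
```

On a well-conditioned design matrix, both give the same $\theta^*$ as the normal equations above.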
###Markdown
We can now use our estimate of theta to compute predictions for 3 new data points.
###Code
# Collect 3 data points for testing
X_test = X.iloc[:3]
y_test = y.iloc[:3]
# generate predictions on the new patients
y_test_pred = X_test.dot(theta_best)
###Output
_____no_output_____
###Markdown
Let's visualize these predictions.
###Code
# visualize the results
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
plt.scatter(X_train.loc[:, ['bmi']], y_train)
plt.scatter(X_test.loc[:, ['bmi']], y_test, color='red', marker='o')
plt.plot(X_test.loc[:, ['bmi']], y_test_pred, 'x', color='red', mew=3, markersize=8)
plt.legend(['Model', 'Prediction', 'Initial patients', 'New patients'])
###Output
_____no_output_____
###Markdown
Algorithm: Ordinary Least Squares* __Type__: Supervised learning (regression)* __Model family__: Linear models* __Objective function__: Mean squared error* __Optimizer__: Normal equations Part 4: Non-Linear Least SquaresSo far, we have learned about a very simple linear model. These can capture only simple linear relationships in the data. How can we use what we learned so far to model more complex relationships?We will now see a simple approach to model complex non-linear relationships called *least squares*. Review: Polynomial FunctionsRecall that a polynomial of degree $p$ is a function of the form$$a_p x^p + a_{p-1} x^{p-1} + ... + a_{1} x + a_0.$$Below are some examples of polynomial functions.
###Code
import warnings
warnings.filterwarnings("ignore")
plt.figure(figsize=(16,4))
x_vars = np.linspace(-2, 2)
plt.subplot(131)
plt.title('Quadratic Function')
plt.plot(x_vars, x_vars**2)
plt.legend(["$x^2$"])
plt.subplot(132)
plt.title('Cubic Function')
plt.plot(x_vars, x_vars**3)
plt.legend(["$x^3$"])
plt.subplot(133)
plt.title('Third Degree Polynomial')
plt.plot(x_vars, x_vars**3 + 2*x_vars**2 + x_vars + 1)
plt.legend(["$x^3 + 2 x^2 + x + 1$"])
###Output
_____no_output_____
###Markdown
Modeling Non-Linear Relationships With Polynomial Regression<!-- Note that the set of $p$-th degree polynomials forms a linear model with parameters $a_p, a_{p-1}, ..., a_0$.This means we can use our algorithms for linear models to learn non-linear features! -->Specifically, given a one-dimensional continuous variable $x$, we can define a feature function $\phi : \mathbb{R} \to \mathbb{R}^{p+1}$ as$$ \phi(x) = \begin{bmatrix}1 \\x \\x^2 \\\vdots \\x^p\end{bmatrix}.$$ The class of models of the form$$ f_\theta(x) := \sum_{j=0}^p \theta_j x^j = \theta^\top \phi(x) $$with parameters $\theta$ and polynomial features $\phi$ is the set of $p$-degree polynomials. * This model is non-linear in the input variable $x$, meaning that we can model complex data relationships. * It is a linear model as a function of the parameters $\theta$, meaning that we can use our familiar ordinary least squares algorithm to learn these features. The UCI Diabetes DatasetIn this section, we are going to again use the UCI Diabetes Dataset.* For each patient we have access to a measurement of their body mass index (BMI) and a quantitative diabetes risk score (from 0-300). * We are interested in understanding how BMI affects an individual's diabetes risk.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [8, 4]
import numpy as np
import pandas as pd
from sklearn import datasets
# Load the diabetes dataset
X, y = datasets.load_diabetes(return_X_y=True, as_frame=True)
# add an extra column of ones
X['one'] = 1
# Collect 20 data points
X_train = X.iloc[-20:]
y_train = y.iloc[-20:]
plt.scatter(X_train.loc[:,['bmi']], y_train, color='black')
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
###Output
_____no_output_____
###Markdown
Diabetes Dataset: A Non-Linear FeaturizationLet's now obtain non-linear polynomial features for this dataset.
###Code
X_bmi = X_train.loc[:, ['bmi']]
X_bmi_p3 = pd.concat([X_bmi, X_bmi**2, X_bmi**3], axis=1)
X_bmi_p3.columns = ['bmi', 'bmi2', 'bmi3']
X_bmi_p3['one'] = 1
X_bmi_p3.head()
###Output
_____no_output_____
###Markdown
Diabetes Dataset: A Polynomial ModelBy training a linear model on this featurization of the diabetes set, we can obtain a polynomial model of diabetes risk as a function of BMI.
###Code
# Fit a linear regression
theta = np.linalg.inv(X_bmi_p3.T.dot(X_bmi_p3)).dot(X_bmi_p3.T).dot(y_train)
# Show the learned polynomial curve
x_line = np.linspace(-0.1, 0.1, 10)
x_line_p3 = np.stack([x_line, x_line**2, x_line**3, np.ones(10,)], axis=1)
y_train_pred = x_line_p3.dot(theta)
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
plt.scatter(X_bmi, y_train)
plt.plot(x_line, y_train_pred)
###Output
_____no_output_____
###Markdown
Multivariate Polynomial RegressionWe can also take this approach to construct non-linear functions of multiple variables by using multivariate polynomials.For example, a polynomial of degree $2$ over two variables $x_1, x_2$ is a function of the form<!-- $$a_{20} x_1^2 + a_{10} x_1 + a_{02} x_2^2 + a_{01} x_2 + a_{22} x_1^2 x_2^2 + a_{21} x_1^2 x_2 + a_{12} x_1 x_2^2 + a_11 x_1 x_2 + a_{00}.$$ -->$$a_{20} x_1^2 + a_{10} x_1 + a_{02} x_2^2 + a_{01} x_2 + a_{11} x_1 x_2 + a_{00}.$$ In general, a polynomial of degree $p$ over two variables $x_1, x_2$ is a function of the form$$f(x_1, x_2) = \sum_{i,j \geq 0 : i+j \leq p} a_{ij} x_1^i x_2^j.$$ In our two-dimensional example, this corresponds to a feature function $\phi : \mathbb{R}^2 \to \mathbb{R}^6$ of the form$$ \phi(x) = \begin{bmatrix}1 \\x_1 \\x_1^2 \\x_2 \\x_2^2 \\x_1 x_2\end{bmatrix}.$$The same approach holds for polynomials of any degree and any number of variables. Towards General Non-Linear FeaturesAny non-linear feature map $\phi(x) : \mathbb{R}^d \to \mathbb{R}^p$ can be used in this way to obtain general models of the form$$ f_\theta(x) := \theta^\top \phi(x) $$that are highly non-linear in $x$ but linear in $\theta$. For example, here is a way of modeling complex periodic functions via a sum of sines and cosines.
###Code
import warnings
warnings.filterwarnings("ignore")
plt.figure(figsize=(16,4))
x_vars = np.linspace(-5, 5)
plt.subplot(131)
plt.title('Cosine Function')
plt.plot(x_vars, np.cos(x_vars))
plt.legend(["$cos(x)$"])
plt.subplot(132)
plt.title('Sine Function')
plt.plot(x_vars, np.sin(2*x_vars))
plt.legend(["$x^3$"])
plt.subplot(133)
plt.title('Combination of Sines and Cosines')
plt.plot(x_vars, np.cos(x_vars) + np.sin(2*x_vars) + np.cos(4*x_vars))
plt.legend(["$cos(x) + sin(2x) + cos(4x)$"])
###Output
_____no_output_____ |
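As a companion to the multivariate polynomial features described above, here is a minimal sketch (an added illustration, not part of the lecture code) of building the degree-2 feature map $\phi : \mathbb{R}^2 \to \mathbb{R}^6$ with scikit-learn; ordinary least squares can then be applied to the expanded features exactly as before:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# A few example points with two input variables (x1, x2).
X_raw = np.array([[0.5, 1.0],
                  [1.5, -0.5],
                  [2.0, 0.25]])

# degree=2 with a bias column generates [1, x1, x2, x1^2, x1*x2, x2^2],
# i.e. the six features of the map phi described in the text.
poly = PolynomialFeatures(degree=2, include_bias=True)
X_features = poly.fit_transform(X_raw)
print(X_features.shape)  # (3, 6)
```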
churn_project.ipynb | ###Markdown
Data Manipulation: a. Extract the 5th column & store it in ‘customer_5’b. Extract the 15th column & store it in ‘customer_15’c. Extract all the male senior citizens whose Payment Method is Electronic check & store the result in ‘senior_male_electronic’d. Extract all those customers whose tenure is greater than 70 months or their Monthly charges is more than 100$ & store the result in ‘customer_total_tenure’e. Extract all the customers whose Contract is of two years, payment method is Mailed check & the value of Churn is ‘Yes’ & store the result in ‘two_mail_yes’f. Extract 333 random records from the customer_churndataframe& store the result in ‘customer_333’g. Get the count of different levels from the ‘Churn’ column
###Code
customer_5 = df.iloc[:,4]
print(customer_5)
customer_15 = df.iloc[:,14]
customer_15
senior_male_electronic = df[(df['gender']=='Male') & (df['PaymentMethod']=='Electronic check') & (df['SeniorCitizen']==1)]
senior_male_electronic
customer_total_tenure = df[(df['tenure']>70) | (df['MonthlyCharges']>100)]
customer_total_tenure
two_mail_yes = df[(df['Contract']=='Two year') & (df['PaymentMethod']=='Mailed check') & (df['Churn']=='Yes')]
two_mail_yes
customer_333 = df.sample(n=333)
customer_333
df['Churn'].value_counts()
###Output
_____no_output_____
###Markdown
Data Visualization: a. Build a bar-plot for the ’InternetService’ column:i. Set x-axis label to ‘Categories of Internet Service’ii. Set y-axis label to ‘Count of Categories’iii. Set the title of plot to be ‘Distribution of Internet Service’iv. Set the color of the bars to be ‘orange’b. Build a histogram for the ‘tenure’ column:i. Set the number of bins to be 30 ii. Set the color of the bins to be ‘green’iii. Assign the title ‘Distribution of tenure’c. Build a scatter-plot between ‘MonthlyCharges’ & ‘tenure’. Map ‘MonthlyCharges’ to the y-axis & ‘tenure’ to the ‘x-axis’:i. Assign the points a color of ‘brown’ii. Set the x-axis label to ‘Tenure of customer’iii. Set the y-axis label to ‘Monthly Charges of customer’iv. Set the title to ‘Tenure vs Monthly Charges’d. Build a box-plot between ‘tenure’ & ‘Contract’. Map ‘tenure’ on the y-axis & ‘Contract’ on the x-axis
###Code
df['InternetService'].value_counts().tolist() # for understanding
df['InternetService'].value_counts().keys().tolist() #for understanding
plt.figure(figsize=(10,7))
plt.bar(df['InternetService'].value_counts().keys().tolist(),df['InternetService'].value_counts().tolist(),color='orange')
plt.xlabel("Categories of Internet Service")
plt.ylabel("Count of Categories")
plt.title("Distribution of Internet Service")
plt.show()
plt.figure(figsize=(10,7))
plt.hist(df['tenure'],bins=30,color='g')
plt.title('Distribution of tenure')
plt.show()
plt.figure(figsize=(10,7))
plt.scatter(x=df['tenure'],y=df['MonthlyCharges'],color='brown')
plt.xlabel("Tenure of customer")
plt.ylabel("Monthly Charges of customer")
plt.title("Tenure vs Monthly Charges")
plt.show()
df.boxplot(column=['tenure'],by=['Contract'])
plt.show()
###Output
_____no_output_____
###Markdown
Linear Regression: a. Build a simple linear model where dependent variable is ‘MonthlyCharges’ and independent variable is ‘tenure’i. Divide the dataset into train and test sets in 70:30 ratio. ii. Build the model on train set and predict the values on test set iii. After predicting the values, find the root mean square error iv. Find out the error in prediction & store the result in ‘error’v. Find the root mean square error
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
x=df[['tenure']]
y=df['MonthlyCharges']
x.head()
y.head()
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.30,random_state=0)
print("Independent Variable for Training the model", x_train.shape, '\n', x_train.head(2))
print("Target Variable for Training the model", y_train.shape, '\n', y_train.head(2))
print("Independent Variable for Testing the model", x_test.shape, '\n', x_test.head(2))
print("Target Variable for Comparing the Prediction the model's prediction", y_test.shape, '\n', y_test.head(2))
model=LinearRegression()
model.fit(x_train,y_train)
# Intercept
inter = model.intercept_
inter
# Slope
slope = model.coef_
slope
y_pred = model.predict(x_test)
y_pred[:5]
aftermodel = pd.DataFrame({"Actual Charges": y_test, "Predicted Charges": np.around(y_pred, 2)})
aftermodel[:10]
#Error in predicted value
aftermodel['Error'] = aftermodel['Actual Charges'] - aftermodel['Predicted Charges']
aftermodel
from sklearn.metrics import mean_squared_error
# Calculating Mean Squared Error and Root Mean Squared Error
mse = mean_squared_error(y_test, y_pred)
rmse = round(np.sqrt(mse), 2)
print("Root mean square error : ",rmse,"\n")
print(y_pred.mean())
###Output
Root mean square error : 29.39
64.85362533013019
###Markdown
Logistic Regression: a. Build a simple logistic regression modelwhere dependent variable is ‘Churn’ & independent variable is ‘MonthlyCharges’i. Divide the dataset in 65:35 ratio ii. Build the model on train set and predict the values on test set iii. Build the confusion matrix and get the accuracy score
###Code
from sklearn.linear_model import LogisticRegression
x = df[['MonthlyCharges']]
y = df['Churn']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.35,random_state=150)
log_model = LogisticRegression()
log_model.fit(x_train,y_train)
predict = log_model.predict(x_test)
predict[:5]
# Comparing Actual and Predicted Data
comp = pd.DataFrame({"Actual" : y_test, "Predicted": predict})
comp.head()
from sklearn.metrics import confusion_matrix, accuracy_score
print("confusion_matrix\n",confusion_matrix(y_test,predict))
print("accuracy_score\n",accuracy_score(y_test,predict))
###Output
accuracy_score
0.7493917274939172
###Markdown
Multiple Logistic Regression: b. Build a multiple logistic regression model where dependent variable is ‘Churn’ & independent variables are ‘tenure’ & ‘MonthlyCharges’i. Divide the dataset in 80:20 ratio ii. Build the model on train set and predict the values on test set iii. Build the confusion matrix and get the accuracy score
###Code
x = df[['tenure','MonthlyCharges']]
y = df['Churn']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.20,random_state=150)
multi_log_model = LogisticRegression()
multi_log_model.fit(x_train,y_train)
pred = multi_log_model.predict(x_test)
pred[:10]
# Comparing Actual and Predicted Data
comp = pd.DataFrame({"Actual" : y_test, "Predicted": pred})
comp.head()
print("confusion_matrix\n",confusion_matrix(y_test,pred))
print("accuracy_score\n",accuracy_score(y_test,pred))
###Output
accuracy_score
0.8076650106458482
###Markdown
Decision Tree: a. Build a decision tree model where dependent variable is ‘Churn’ & independent variable is ‘tenure’i. Divide the dataset in 80:20 ratio ii. Build the model on train set and predict the values on test setiii. Build the confusion matrix and calculate the accuracy
###Code
x = df[['tenure']]
y = df['Churn']
from sklearn.tree import DecisionTreeClassifier
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.20,random_state=150)
tree = DecisionTreeClassifier()
tree.fit(x_train,y_train)
pred = tree.predict(x_test)
print("confusion_matrix \n",confusion_matrix(y_test,pred))
print("accuracy_score",accuracy_score(y_test,pred))
###Output
accuracy_score 0.7629524485450674
###Markdown
Random Forest: a. Build a Random Forest model where dependent variable is ‘Churn’ & independent variables are ‘tenure’ and ‘MonthlyCharges’i. Divide the dataset in 70:30 ratio ii. Build the model on train set and predict the values on test set iii. Build the confusion matrix and calculate the accuracy
###Code
x = df[['tenure','MonthlyCharges']]
y = df['Churn']
from sklearn.ensemble import RandomForestClassifier
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.30,random_state=0)
forest = RandomForestClassifier()
forest.fit(x_train,y_train)
pred = forest.predict(x_test)
pred[:5]
print("confusion_matrix \n",confusion_matrix(y_test,pred))
print("accuracy_score",accuracy_score(y_test,pred))
from sklearn import metrics
print('Error Metrics')
em = metrics.classification_report(y_test, pred)
print(em)
###Output
Error Metrics
precision recall f1-score support
No 0.80 0.86 0.83 1560
Yes 0.51 0.41 0.45 553
accuracy 0.74 2113
macro avg 0.66 0.63 0.64 2113
weighted avg 0.73 0.74 0.73 2113
###Markdown
Customer bank churn project The dataset consists of a randomly sampled population of a bank's customers, detailing demographics (independent variables) and whether a customer left (or stayed with) the bank within the last 6 months (dependent variable). The project goal is to predict whether a customer will leave or not. Data exploring
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from matplotlib import cm
df=pd.read_csv("Churn-Modelling.csv")
df.head()
###Output
_____no_output_____
###Markdown
Column information:- RowNumber - number of rows- CustomerId - customer id number- Surname - customer's surname- CreditScore - number between 300–850 that depicts a consumer's creditworthiness. The higher the score, the better a borrower looks to potential lenders- Geography - customer's state- Gender - female/male- Age - customer's age- Tenure - number of years the customer has been with the bank- Balance - the amount of money held in a bank account at a given moment- NumOfProducts - number of any facilities or services related to cash management, including treasury, depository, overdraft, credit or debit card, purchase card, electronic funds transfer and other cash management arrangements- HasCrCard - 1 - has credit card , 0 - hasn't credit card- IsActiveMember - 1 - is active member , 0 - is not active member- EstimatedSalary - estimated salary- Exited - 1 - customer left , 0 - customer stayed
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 RowNumber 10000 non-null int64
1 CustomerId 10000 non-null int64
2 Surname 10000 non-null object
3 CreditScore 10000 non-null int64
4 Geography 10000 non-null object
5 Gender 10000 non-null object
6 Age 10000 non-null int64
7 Tenure 10000 non-null int64
8 Balance 10000 non-null float64
9 NumOfProducts 10000 non-null int64
10 HasCrCard 10000 non-null int64
11 IsActiveMember 10000 non-null int64
12 EstimatedSalary 10000 non-null float64
13 Exited 10000 non-null int64
dtypes: float64(2), int64(9), object(3)
memory usage: 976.6+ KB
###Markdown
We don't have any missing values. But we do have columns that we don't need, like "RowNumber", "CustomerId", and "Surname", so we will drop them; we will also rename some of the columns for better readability.
###Code
df2=df.drop(["RowNumber","CustomerId","Surname"],axis=1)
#Renaming columns
df2.columns=['credit_score', 'state', 'gender', 'age', 'tenure', 'balance',
'number_of_products', 'credit_card', 'active_member', 'estimated_salary',
'churn']
df2.head()
###Output
_____no_output_____
###Markdown
Data visualization Balance column We want to see how many customers leave depending on the amount of money in their bank account.
###Code
#balance of customers which are not leaving
balance_churn_no=df2[df2.churn==0].balance
#balance of customers which are leaving
balance_churn_yes=df2[df2.churn==1].balance
#Plot histogram
plt.hist([balance_churn_yes, balance_churn_no], color=['red', 'blue'], label=['Churn=Yes', 'Churn=No'])
plt.legend()
plt.xlabel('balance')
plt.ylabel('Number of customers')
###Output
_____no_output_____
###Markdown
As we can see, over 3000 customers haven't left even though they have no money in their bank account. We also see that most of the customers who left had either no money or around 100 000$ in their account. Tenure column We want to see if customers with longer tenure would stay and vice versa.
###Code
#tenure of customers which are not leaving
tenure_churn_no=df2[df2.churn==0].tenure
#tenure of customers which are leaving
tenure_churn_yes=df2[df2.churn==1].tenure
#Plot histogram
plt.hist([tenure_churn_yes, tenure_churn_no], color=['red', 'blue'], label=['Churn=Yes', 'Churn=No'])
plt.legend()
plt.xlabel('tenure')
plt.ylabel('Number of customers')
###Output
_____no_output_____
###Markdown
As we might expect, the customers with longer tenure have stayed. Gender column Let's see how many females and males are leaving and how many are staying.
###Code
# Count how many females and males stayed
count_churn=df2.groupby('gender')['churn'].apply(lambda x: (x==0).sum()).reset_index(name='Number of customers')
color = cm.viridis(np.linspace(.4, .8, 30))
count_churn= count_churn.sort_values("Number of customers" , ascending=[False])
count_churn.plot.bar(x='gender', y='Number of customers', color=color , figsize=(12,7))
###Output
_____no_output_____
###Markdown
We can see that more males have stayed than females. Age column Let's see how age affects staying or leaving.
###Code
#age of customers which are not leaving
age_churn_no=df2[df2.churn==0].age
#age of customers which are leaving
age_churn_yes=df2[df2.churn==1].age
#plot histogram
plt.hist([age_churn_yes, age_churn_no], color=['red', 'blue'], label=['Churn=Yes', 'Churn=No'])
plt.legend()
plt.xlabel('age')
plt.ylabel('Number of customers')
###Output
_____no_output_____
###Markdown
We can see that the customers between 30 and 40 years old tend to stay. State column Let's see how many of the churned customers come from each state.
###Code
# Count percentage of churns by state
count_churn=df2.groupby('state')['churn'].apply(lambda x: (x==1).sum()).reset_index(name='Percentage of churn')
count_churn= count_churn.sort_values("Percentage of churn" , ascending=[False])
count_churn.plot.pie(x="state",y='Percentage of churn', autopct='%1.1f%%',labels=count_churn["state"],figsize=(10,7))
###Output
_____no_output_____
###Markdown
We can see that Germany and France have the most customers that left. Preparing dataset for model
###Code
df2.head()
# define a function to discover columns with categorical variables
def discover_categorical_columns(df):
"""
This function takes dataframe as an input, goes through columns in a dataframe to check if column is of an object type.
If the colunm is of an object type it means column contains categorical variables. Function than prints all unique
categorical values of a column
Args:
df (pd.DataFrame) - only requried argument for the function
Returns:
Prints unique categorical values for object type columns.
"""
for column in df:
if df[column].dtype=='object':
print('{} : {}'.format(column, df[column].unique()))
discover_categorical_columns(df2)
###Output
state : ['France' 'Spain' 'Germany']
gender : ['Female' 'Male']
###Markdown
We only have 2 object-type columns, each with at most 3 unique values. We can replace the 'Female' and 'Male' categorical values of the 'gender' column with 1's and 0's, and for the "state" column we will use pd.get_dummies().
###Code
df2.info()
# replace 'Female' with 1; replace 'Male' with 0
df2.replace({'Female':1, 'Male':0}, value=None, inplace=True)
#function for creating dummy column
def create_dummies(df,column_name):
dummies=pd.get_dummies(df[column_name],prefix=column_name)
df=pd.concat([df,dummies],axis=1)
return df
#dummy columns
df3=create_dummies(df2,"state")
df4=df3.drop(["state"],axis=1)
df4.head()
# visualizing correlations
plt.figure(figsize=(10,10))
sns.heatmap(df4.corr(), annot=True, cmap='coolwarm')
###Output
_____no_output_____
###Markdown
Everything looks fine, and we can now scale the data for better results. Scaling data
###Code
# min-max scaling
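# each feature is rescaled to [0, 1] via x' = (x - min) / (max - min)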
from sklearn.preprocessing import MinMaxScaler, RobustScaler
scaler=MinMaxScaler()
data_scaled_array=scaler.fit_transform(df4)
df5=pd.DataFrame(data_scaled_array, columns=df4.columns)
df5.head()
###Output
_____no_output_____
###Markdown
Building Models
###Code
from sklearn.model_selection import train_test_split
# Defining features and target column
X = df5.drop(columns='churn', axis ='columns')
y = df5.churn
#Splitting our data on train and test
train_X,test_X,train_y,test_y=train_test_split(X,y,train_size=0.8,random_state=1)
###Output
_____no_output_____
###Markdown
Balance data
###Code
train_y.value_counts()
###Output
_____no_output_____
###Markdown
As we can see, we have a lot more customers that are staying than leaving. Because of this, we need to balance the classes to get a better result. We will use the SMOTE technique. SMOTE (Synthetic Minority Oversampling TEchnique) consists of synthesizing elements for the minority class, based on those that already exist. It works by randomly picking a point from the minority class and computing the k-nearest neighbors for this point. The synthetic points are added between the chosen point and its neighbors.
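As a toy illustration of that interpolation step (just the idea; the imblearn call below handles neighbor search and sampling internally):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy minority-class samples (one row per observation).
minority = np.array([[1.0, 2.0],
                     [1.2, 1.8],
                     [0.9, 2.4]])

# Take a minority point and one of its nearest neighbours, then place a
# synthetic point on the segment between them:
#   x_new = x_i + lam * (x_neighbour - x_i),  lam drawn uniformly from [0, 1]
x_i, x_neighbour = minority[0], minority[1]
lam = rng.random()
x_new = x_i + lam * (x_neighbour - x_i)
print(x_new)
```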
###Code
#Importing SMOTE
from imblearn.over_sampling import SMOTE
smote = SMOTE(sampling_strategy='minority')
X_s, y_s = smote.fit_resample(X, y)
# check value counts
y_s.value_counts()
###Output
_____no_output_____
###Markdown
As we can see our target column is now balanced. So we can now build our models.
###Code
#Splitting our data on train and test
train_X, test_X, train_y,test_y = train_test_split(X_s, y_s, test_size=0.2, random_state=15, stratify=y_s)
train_y.value_counts()
###Output
_____no_output_____
###Markdown
Random Forest classifier model
###Code
# Importing RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier
#Defining model
rf=RandomForestClassifier(n_estimators=200, random_state=1, min_samples_leaf=2)
# Fitting the model
rf.fit(train_X,train_y)
# predicting values on test set
predictions_rf=rf.predict(test_X)
###Output
_____no_output_____
###Markdown
Random Forest classifier accuracy
###Code
#Importing cross_val_score and accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
# calculating accuracy with accuracy_score()
accuracy_rf=accuracy_score(test_y, predictions_rf)
accuracy_rf
# calculating accuracy result with cross_val_score()
accuracy_cross_val_rf=cross_val_score(rf, X_s, y_s, cv=10)
accuracy_cross_val_rf
#calculating cross_val_score mean
accuracy_cross_val_rf=np.mean(accuracy_cross_val_rf)
accuracy_cross_val_rf
###Output
_____no_output_____
###Markdown
Calculating f-1 score in RandomForest model
###Code
#Importing classification_report
from sklearn.metrics import classification_report
print(classification_report(test_y,predictions_rf))
###Output
precision recall f1-score support
0.0 0.90 0.89 0.89 1593
1.0 0.89 0.90 0.90 1593
accuracy 0.89 3186
macro avg 0.89 0.89 0.89 3186
weighted avg 0.89 0.89 0.89 3186
###Markdown
We can see that our model can accurately predict 90% of churns. Artificial neural networks model
###Code
# import MLPClassifier and make an instance
from sklearn.neural_network import MLPClassifier
# Defining our model
mlp=MLPClassifier(hidden_layer_sizes=(10,10), activation="relu",max_iter=1000)
# fitting the model
mlp.fit(train_X, train_y)
# predicting values on test set
predictions_mlp=mlp.predict(test_X)
###Output
_____no_output_____
###Markdown
Artificial neural networks accuracy
###Code
# calculating accuracy with accuracy_score()
accuracy_mlp=accuracy_score(test_y, predictions_mlp)
accuracy_mlp
# calculating accuracy result with cross_val_score()
accuracy_cross_val_mlp=cross_val_score(mlp, X_s, y_s, cv=10)
accuracy_cross_val_mlp
#calculating cross_val_score mean
accuracy_cross_val_mlp=np.mean(accuracy_cross_val_mlp)
accuracy_cross_val_mlp
###Output
_____no_output_____
###Markdown
Calculating f-1 score in Artifical Neural networks
###Code
print(classification_report(test_y,predictions_mlp))
###Output
precision recall f1-score support
0.0 0.78 0.83 0.80 1593
1.0 0.82 0.77 0.79 1593
accuracy 0.80 3186
macro avg 0.80 0.80 0.80 3186
weighted avg 0.80 0.80 0.80 3186
|
gray2color.ipynb | ###Markdown
Mounting google drive to save trained model
###Code
from google.colab import drive
drive.mount('/gdrive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /gdrive
###Markdown
Importing all Dependencies
###Code
import tensorflow as tf
import tensorflow.keras as keras
import matplotlib.pyplot as plt
import numpy as np
import os
from PIL import Image
from skimage.color import rgb2lab, lab2rgb, rgb2gray
###Output
_____no_output_____
###Markdown
Downloading training dataset using curl I found an alternative [kaggle dataset](https://www.kaggle.com/greatgamedota/ffhq-face-data-set); paste the download link of the dataset here. These links become obsolete very quickly, so paste a fresh download link. The link can be obtained by starting the download locally and copying the direct URL from the browser's download section. Size: approx. 2GB
###Code
!curl -o "/content/faces.zip" "https://storage.googleapis.com/kaggle-data-sets/379454/735991/bundle/archive.zip?GoogleAccessId=web-data@kaggle-161607.iam.gserviceaccount.com&Expires=1590056491&Signature=iZbEXd9tLS6urEio8l4sGXEPbUdBOvwoyvqweIq%2BSEprSrQYGQ6AwS3Us93g%2FsV7OHMXI2dtXl0ZILOnA95nAjZv2u9DbCEjjsmqdZU3zuGXQpthruhAJ2ybVyFIBeTCuQdx1%2FoPHp3K%2FUSz03SODoeJG6zTg1QOEcp2vfIytplNIYIEVbHYuzokWN2ahDfW4JOQsytO%2F8TTJxB7fn2Yu16STvEf%2FKSpM7zHqvWqNyhh7d3hPgBIc1M6hZUHUXXWXT5PkJWeSa3gXSysvCOhemVWJyfhqX0tDs%2FM4MUtg1ZMe0V80KdzXrueBnIr5aVzuRn1PDFQvzANmGAkj1hrrQ%3D%3D&response-content-disposition=attachment%3B+filename%3Dffhq-face-data-set.zip"
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2015M 100 2015M 0 0 68.3M 0 0:00:29 0:00:29 --:--:-- 65.5M
###Markdown
Unzipping the dataset
###Code
!mkdir "/content/faces"
!unzip "/content/faces.zip" -d "/content/faces"
!rm faces.zip
###Output
Streaming output truncated to the last 5000 lines.
inflating: /content/faces/thumbnails128x128/65001.png
inflating: /content/faces/thumbnails128x128/65002.png
inflating: /content/faces/thumbnails128x128/65003.png
inflating: /content/faces/thumbnails128x128/65004.png
inflating: /content/faces/thumbnails128x128/65005.png
inflating: /content/faces/thumbnails128x128/65006.png
inflating: /content/faces/thumbnails128x128/65007.png
inflating: /content/faces/thumbnails128x128/65008.png
inflating: /content/faces/thumbnails128x128/65009.png
inflating: /content/faces/thumbnails128x128/65010.png
inflating: /content/faces/thumbnails128x128/65011.png
inflating: /content/faces/thumbnails128x128/65012.png
inflating: /content/faces/thumbnails128x128/65013.png
inflating: /content/faces/thumbnails128x128/65014.png
inflating: /content/faces/thumbnails128x128/65015.png
inflating: /content/faces/thumbnails128x128/65016.png
inflating: /content/faces/thumbnails128x128/65017.png
inflating: /content/faces/thumbnails128x128/65018.png
inflating: /content/faces/thumbnails128x128/65019.png
inflating: /content/faces/thumbnails128x128/65020.png
inflating: /content/faces/thumbnails128x128/65021.png
inflating: /content/faces/thumbnails128x128/65022.png
inflating: /content/faces/thumbnails128x128/65023.png
inflating: /content/faces/thumbnails128x128/65024.png
inflating: /content/faces/thumbnails128x128/65025.png
inflating: /content/faces/thumbnails128x128/65026.png
inflating: /content/faces/thumbnails128x128/65027.png
inflating: /content/faces/thumbnails128x128/65028.png
inflating: /content/faces/thumbnails128x128/65029.png
inflating: /content/faces/thumbnails128x128/65030.png
inflating: /content/faces/thumbnails128x128/65031.png
inflating: /content/faces/thumbnails128x128/65032.png
inflating: /content/faces/thumbnails128x128/65033.png
inflating: /content/faces/thumbnails128x128/65034.png
inflating: /content/faces/thumbnails128x128/65035.png
inflating: /content/faces/thumbnails128x128/65036.png
inflating: /content/faces/thumbnails128x128/65037.png
inflating: /content/faces/thumbnails128x128/65038.png
inflating: /content/faces/thumbnails128x128/65039.png
inflating: /content/faces/thumbnails128x128/65040.png
inflating: /content/faces/thumbnails128x128/65041.png
inflating: /content/faces/thumbnails128x128/65042.png
inflating: /content/faces/thumbnails128x128/65043.png
inflating: /content/faces/thumbnails128x128/65044.png
inflating: /content/faces/thumbnails128x128/65045.png
inflating: /content/faces/thumbnails128x128/65046.png
inflating: /content/faces/thumbnails128x128/65047.png
inflating: /content/faces/thumbnails128x128/65048.png
inflating: /content/faces/thumbnails128x128/65049.png
inflating: /content/faces/thumbnails128x128/65050.png
inflating: /content/faces/thumbnails128x128/65051.png
inflating: /content/faces/thumbnails128x128/65052.png
inflating: /content/faces/thumbnails128x128/65053.png
inflating: /content/faces/thumbnails128x128/65054.png
inflating: /content/faces/thumbnails128x128/65055.png
inflating: /content/faces/thumbnails128x128/65056.png
inflating: /content/faces/thumbnails128x128/65057.png
inflating: /content/faces/thumbnails128x128/65058.png
inflating: /content/faces/thumbnails128x128/65059.png
inflating: /content/faces/thumbnails128x128/65060.png
inflating: /content/faces/thumbnails128x128/65061.png
inflating: /content/faces/thumbnails128x128/65062.png
inflating: /content/faces/thumbnails128x128/65063.png
inflating: /content/faces/thumbnails128x128/65064.png
inflating: /content/faces/thumbnails128x128/65065.png
inflating: /content/faces/thumbnails128x128/65066.png
inflating: /content/faces/thumbnails128x128/65067.png
inflating: /content/faces/thumbnails128x128/65068.png
inflating: /content/faces/thumbnails128x128/65069.png
inflating: /content/faces/thumbnails128x128/65070.png
inflating: /content/faces/thumbnails128x128/65071.png
inflating: /content/faces/thumbnails128x128/65072.png
inflating: /content/faces/thumbnails128x128/65073.png
inflating: /content/faces/thumbnails128x128/65074.png
inflating: /content/faces/thumbnails128x128/66261.png
inflating: /content/faces/thumbnails128x128/66262.png
inflating: /content/faces/thumbnails128x128/66263.png
inflating: /content/faces/thumbnails128x128/66264.png
inflating: /content/faces/thumbnails128x128/66265.png
inflating: /content/faces/thumbnails128x128/66266.png
inflating: /content/faces/thumbnails128x128/66267.png
inflating: /content/faces/thumbnails128x128/66268.png
inflating: /content/faces/thumbnails128x128/66269.png
inflating: /content/faces/thumbnails128x128/66270.png
inflating: /content/faces/thumbnails128x128/66271.png
inflating: /content/faces/thumbnails128x128/66272.png
inflating: /content/faces/thumbnails128x128/66273.png
inflating: /content/faces/thumbnails128x128/66274.png
inflating: /content/faces/thumbnails128x128/66275.png
inflating: /content/faces/thumbnails128x128/66276.png
inflating: /content/faces/thumbnails128x128/66277.png
inflating: /content/faces/thumbnails128x128/66278.png
inflating: /content/faces/thumbnails128x128/66279.png
inflating: /content/faces/thumbnails128x128/66280.png
inflating: /content/faces/thumbnails128x128/66281.png
inflating: /content/faces/thumbnails128x128/66282.png
inflating: /content/faces/thumbnails128x128/66283.png
inflating: /content/faces/thumbnails128x128/66284.png
inflating: /content/faces/thumbnails128x128/66285.png
inflating: /content/faces/thumbnails128x128/66286.png
inflating: /content/faces/thumbnails128x128/66287.png
inflating: /content/faces/thumbnails128x128/66288.png
inflating: /content/faces/thumbnails128x128/66289.png
inflating: /content/faces/thumbnails128x128/66290.png
inflating: /content/faces/thumbnails128x128/66291.png
inflating: /content/faces/thumbnails128x128/66292.png
inflating: /content/faces/thumbnails128x128/66293.png
inflating: /content/faces/thumbnails128x128/66294.png
inflating: /content/faces/thumbnails128x128/66295.png
inflating: /content/faces/thumbnails128x128/66296.png
inflating: /content/faces/thumbnails128x128/66297.png
inflating: /content/faces/thumbnails128x128/66298.png
inflating: /content/faces/thumbnails128x128/66299.png
inflating: /content/faces/thumbnails128x128/66300.png
inflating: /content/faces/thumbnails128x128/66301.png
inflating: /content/faces/thumbnails128x128/66302.png
inflating: /content/faces/thumbnails128x128/66303.png
inflating: /content/faces/thumbnails128x128/66304.png
inflating: /content/faces/thumbnails128x128/66305.png
inflating: /content/faces/thumbnails128x128/66306.png
inflating: /content/faces/thumbnails128x128/66307.png
inflating: /content/faces/thumbnails128x128/66308.png
inflating: /content/faces/thumbnails128x128/66309.png
inflating: /content/faces/thumbnails128x128/66310.png
inflating: /content/faces/thumbnails128x128/66311.png
inflating: /content/faces/thumbnails128x128/66312.png
inflating: /content/faces/thumbnails128x128/66313.png
inflating: /content/faces/thumbnails128x128/66314.png
inflating: /content/faces/thumbnails128x128/66315.png
inflating: /content/faces/thumbnails128x128/66316.png
inflating: /content/faces/thumbnails128x128/66317.png
inflating: /content/faces/thumbnails128x128/66318.png
inflating: /content/faces/thumbnails128x128/66319.png
inflating: /content/faces/thumbnails128x128/66320.png
inflating: /content/faces/thumbnails128x128/66321.png
inflating: /content/faces/thumbnails128x128/66322.png
inflating: /content/faces/thumbnails128x128/66323.png
inflating: /content/faces/thumbnails128x128/66324.png
inflating: /content/faces/thumbnails128x128/66325.png
inflating: /content/faces/thumbnails128x128/66326.png
inflating: /content/faces/thumbnails128x128/66327.png
inflating: /content/faces/thumbnails128x128/66328.png
inflating: /content/faces/thumbnails128x128/66329.png
inflating: /content/faces/thumbnails128x128/66330.png
inflating: /content/faces/thumbnails128x128/66331.png
inflating: /content/faces/thumbnails128x128/66332.png
inflating: /content/faces/thumbnails128x128/66333.png
inflating: /content/faces/thumbnails128x128/66334.png
inflating: /content/faces/thumbnails128x128/66335.png
inflating: /content/faces/thumbnails128x128/66336.png
inflating: /content/faces/thumbnails128x128/66337.png
inflating: /content/faces/thumbnails128x128/66338.png
inflating: /content/faces/thumbnails128x128/66339.png
inflating: /content/faces/thumbnails128x128/66340.png
inflating: /content/faces/thumbnails128x128/66341.png
inflating: /content/faces/thumbnails128x128/66342.png
inflating: /content/faces/thumbnails128x128/66343.png
inflating: /content/faces/thumbnails128x128/66344.png
inflating: /content/faces/thumbnails128x128/66345.png
inflating: /content/faces/thumbnails128x128/66346.png
inflating: /content/faces/thumbnails128x128/66347.png
inflating: /content/faces/thumbnails128x128/66348.png
inflating: /content/faces/thumbnails128x128/66349.png
inflating: /content/faces/thumbnails128x128/66350.png
inflating: /content/faces/thumbnails128x128/66351.png
inflating: /content/faces/thumbnails128x128/66352.png
inflating: /content/faces/thumbnails128x128/66353.png
inflating: /content/faces/thumbnails128x128/66354.png
inflating: /content/faces/thumbnails128x128/66355.png
inflating: /content/faces/thumbnails128x128/66356.png
inflating: /content/faces/thumbnails128x128/66357.png
inflating: /content/faces/thumbnails128x128/66358.png
inflating: /content/faces/thumbnails128x128/66359.png
inflating: /content/faces/thumbnails128x128/66360.png
inflating: /content/faces/thumbnails128x128/66361.png
inflating: /content/faces/thumbnails128x128/66362.png
inflating: /content/faces/thumbnails128x128/66363.png
inflating: /content/faces/thumbnails128x128/66364.png
inflating: /content/faces/thumbnails128x128/66365.png
inflating: /content/faces/thumbnails128x128/66366.png
inflating: /content/faces/thumbnails128x128/66367.png
inflating: /content/faces/thumbnails128x128/66368.png
inflating: /content/faces/thumbnails128x128/66369.png
inflating: /content/faces/thumbnails128x128/66370.png
inflating: /content/faces/thumbnails128x128/66371.png
inflating: /content/faces/thumbnails128x128/66372.png
inflating: /content/faces/thumbnails128x128/66373.png
inflating: /content/faces/thumbnails128x128/66374.png
inflating: /content/faces/thumbnails128x128/66375.png
inflating: /content/faces/thumbnails128x128/66376.png
inflating: /content/faces/thumbnails128x128/66377.png
inflating: /content/faces/thumbnails128x128/66378.png
inflating: /content/faces/thumbnails128x128/66379.png
inflating: /content/faces/thumbnails128x128/66380.png
inflating: /content/faces/thumbnails128x128/66381.png
inflating: /content/faces/thumbnails128x128/66382.png
inflating: /content/faces/thumbnails128x128/66383.png
inflating: /content/faces/thumbnails128x128/66384.png
inflating: /content/faces/thumbnails128x128/66385.png
inflating: /content/faces/thumbnails128x128/66386.png
inflating: /content/faces/thumbnails128x128/66387.png
inflating: /content/faces/thumbnails128x128/66388.png
inflating: /content/faces/thumbnails128x128/66389.png
inflating: /content/faces/thumbnails128x128/66390.png
inflating: /content/faces/thumbnails128x128/66391.png
inflating: /content/faces/thumbnails128x128/66392.png
inflating: /content/faces/thumbnails128x128/66393.png
inflating: /content/faces/thumbnails128x128/66394.png
inflating: /content/faces/thumbnails128x128/66395.png
inflating: /content/faces/thumbnails128x128/66396.png
inflating: /content/faces/thumbnails128x128/66397.png
inflating: /content/faces/thumbnails128x128/66398.png
inflating: /content/faces/thumbnails128x128/66399.png
inflating: /content/faces/thumbnails128x128/66400.png
inflating: /content/faces/thumbnails128x128/66401.png
inflating: /content/faces/thumbnails128x128/66402.png
inflating: /content/faces/thumbnails128x128/66403.png
inflating: /content/faces/thumbnails128x128/66404.png
inflating: /content/faces/thumbnails128x128/66405.png
inflating: /content/faces/thumbnails128x128/66406.png
inflating: /content/faces/thumbnails128x128/66407.png
inflating: /content/faces/thumbnails128x128/66408.png
inflating: /content/faces/thumbnails128x128/66409.png
inflating: /content/faces/thumbnails128x128/66410.png
inflating: /content/faces/thumbnails128x128/66411.png
inflating: /content/faces/thumbnails128x128/66412.png
inflating: /content/faces/thumbnails128x128/66413.png
inflating: /content/faces/thumbnails128x128/66414.png
inflating: /content/faces/thumbnails128x128/66415.png
inflating: /content/faces/thumbnails128x128/66416.png
inflating: /content/faces/thumbnails128x128/66417.png
inflating: /content/faces/thumbnails128x128/66418.png
inflating: /content/faces/thumbnails128x128/66419.png
inflating: /content/faces/thumbnails128x128/66420.png
inflating: /content/faces/thumbnails128x128/66421.png
inflating: /content/faces/thumbnails128x128/66422.png
inflating: /content/faces/thumbnails128x128/66423.png
inflating: /content/faces/thumbnails128x128/66424.png
inflating: /content/faces/thumbnails128x128/66425.png
inflating: /content/faces/thumbnails128x128/66426.png
inflating: /content/faces/thumbnails128x128/66427.png
inflating: /content/faces/thumbnails128x128/66428.png
inflating: /content/faces/thumbnails128x128/66429.png
inflating: /content/faces/thumbnails128x128/66430.png
inflating: /content/faces/thumbnails128x128/66431.png
inflating: /content/faces/thumbnails128x128/66432.png
inflating: /content/faces/thumbnails128x128/66433.png
inflating: /content/faces/thumbnails128x128/66434.png
inflating: /content/faces/thumbnails128x128/66435.png
inflating: /content/faces/thumbnails128x128/66436.png
inflating: /content/faces/thumbnails128x128/66437.png
inflating: /content/faces/thumbnails128x128/66438.png
inflating: /content/faces/thumbnails128x128/66439.png
inflating: /content/faces/thumbnails128x128/66440.png
inflating: /content/faces/thumbnails128x128/66441.png
inflating: /content/faces/thumbnails128x128/66442.png
inflating: /content/faces/thumbnails128x128/66443.png
inflating: /content/faces/thumbnails128x128/66444.png
inflating: /content/faces/thumbnails128x128/66445.png
inflating: /content/faces/thumbnails128x128/66446.png
inflating: /content/faces/thumbnails128x128/66447.png
inflating: /content/faces/thumbnails128x128/66448.png
inflating: /content/faces/thumbnails128x128/66449.png
inflating: /content/faces/thumbnails128x128/66450.png
inflating: /content/faces/thumbnails128x128/66451.png
inflating: /content/faces/thumbnails128x128/66452.png
inflating: /content/faces/thumbnails128x128/66453.png
inflating: /content/faces/thumbnails128x128/66454.png
inflating: /content/faces/thumbnails128x128/66455.png
inflating: /content/faces/thumbnails128x128/66456.png
inflating: /content/faces/thumbnails128x128/66457.png
inflating: /content/faces/thumbnails128x128/66458.png
inflating: /content/faces/thumbnails128x128/66459.png
inflating: /content/faces/thumbnails128x128/66460.png
inflating: /content/faces/thumbnails128x128/66461.png
inflating: /content/faces/thumbnails128x128/66462.png
inflating: /content/faces/thumbnails128x128/66463.png
inflating: /content/faces/thumbnails128x128/66464.png
inflating: /content/faces/thumbnails128x128/66465.png
inflating: /content/faces/thumbnails128x128/66466.png
inflating: /content/faces/thumbnails128x128/66467.png
inflating: /content/faces/thumbnails128x128/66468.png
inflating: /content/faces/thumbnails128x128/66469.png
inflating: /content/faces/thumbnails128x128/66470.png
inflating: /content/faces/thumbnails128x128/66471.png
inflating: /content/faces/thumbnails128x128/66472.png
inflating: /content/faces/thumbnails128x128/66473.png
inflating: /content/faces/thumbnails128x128/66474.png
inflating: /content/faces/thumbnails128x128/66475.png
inflating: /content/faces/thumbnails128x128/66476.png
inflating: /content/faces/thumbnails128x128/66477.png
inflating: /content/faces/thumbnails128x128/66478.png
inflating: /content/faces/thumbnails128x128/66479.png
inflating: /content/faces/thumbnails128x128/66480.png
inflating: /content/faces/thumbnails128x128/66481.png
inflating: /content/faces/thumbnails128x128/66482.png
inflating: /content/faces/thumbnails128x128/66483.png
inflating: /content/faces/thumbnails128x128/66484.png
inflating: /content/faces/thumbnails128x128/66485.png
inflating: /content/faces/thumbnails128x128/66486.png
inflating: /content/faces/thumbnails128x128/66487.png
inflating: /content/faces/thumbnails128x128/66488.png
inflating: /content/faces/thumbnails128x128/66489.png
inflating: /content/faces/thumbnails128x128/66490.png
inflating: /content/faces/thumbnails128x128/66491.png
inflating: /content/faces/thumbnails128x128/66492.png
inflating: /content/faces/thumbnails128x128/66493.png
inflating: /content/faces/thumbnails128x128/66494.png
inflating: /content/faces/thumbnails128x128/66495.png
inflating: /content/faces/thumbnails128x128/66496.png
inflating: /content/faces/thumbnails128x128/66497.png
inflating: /content/faces/thumbnails128x128/66498.png
inflating: /content/faces/thumbnails128x128/66499.png
inflating: /content/faces/thumbnails128x128/66500.png
inflating: /content/faces/thumbnails128x128/66501.png
inflating: /content/faces/thumbnails128x128/66502.png
inflating: /content/faces/thumbnails128x128/66503.png
inflating: /content/faces/thumbnails128x128/66504.png
inflating: /content/faces/thumbnails128x128/66505.png
inflating: /content/faces/thumbnails128x128/66506.png
inflating: /content/faces/thumbnails128x128/66507.png
inflating: /content/faces/thumbnails128x128/66508.png
inflating: /content/faces/thumbnails128x128/66509.png
inflating: /content/faces/thumbnails128x128/66510.png
inflating: /content/faces/thumbnails128x128/66511.png
inflating: /content/faces/thumbnails128x128/66512.png
inflating: /content/faces/thumbnails128x128/66513.png
inflating: /content/faces/thumbnails128x128/66514.png
inflating: /content/faces/thumbnails128x128/66515.png
inflating: /content/faces/thumbnails128x128/66516.png
inflating: /content/faces/thumbnails128x128/66517.png
inflating: /content/faces/thumbnails128x128/66518.png
inflating: /content/faces/thumbnails128x128/66519.png
inflating: /content/faces/thumbnails128x128/66520.png
inflating: /content/faces/thumbnails128x128/66521.png
inflating: /content/faces/thumbnails128x128/66522.png
inflating: /content/faces/thumbnails128x128/66523.png
inflating: /content/faces/thumbnails128x128/66524.png
inflating: /content/faces/thumbnails128x128/66525.png
inflating: /content/faces/thumbnails128x128/66526.png
inflating: /content/faces/thumbnails128x128/66527.png
inflating: /content/faces/thumbnails128x128/66528.png
inflating: /content/faces/thumbnails128x128/66529.png
inflating: /content/faces/thumbnails128x128/66530.png
inflating: /content/faces/thumbnails128x128/66531.png
inflating: /content/faces/thumbnails128x128/66532.png
inflating: /content/faces/thumbnails128x128/66533.png
inflating: /content/faces/thumbnails128x128/66534.png
inflating: /content/faces/thumbnails128x128/66535.png
inflating: /content/faces/thumbnails128x128/66536.png
inflating: /content/faces/thumbnails128x128/66537.png
inflating: /content/faces/thumbnails128x128/66538.png
inflating: /content/faces/thumbnails128x128/66539.png
inflating: /content/faces/thumbnails128x128/66540.png
inflating: /content/faces/thumbnails128x128/66541.png
inflating: /content/faces/thumbnails128x128/66542.png
inflating: /content/faces/thumbnails128x128/66543.png
inflating: /content/faces/thumbnails128x128/66544.png
inflating: /content/faces/thumbnails128x128/66545.png
inflating: /content/faces/thumbnails128x128/66546.png
inflating: /content/faces/thumbnails128x128/66547.png
inflating: /content/faces/thumbnails128x128/66548.png
inflating: /content/faces/thumbnails128x128/66549.png
inflating: /content/faces/thumbnails128x128/66550.png
inflating: /content/faces/thumbnails128x128/66551.png
inflating: /content/faces/thumbnails128x128/66552.png
inflating: /content/faces/thumbnails128x128/66553.png
inflating: /content/faces/thumbnails128x128/66554.png
inflating: /content/faces/thumbnails128x128/66555.png
inflating: /content/faces/thumbnails128x128/66556.png
inflating: /content/faces/thumbnails128x128/66557.png
inflating: /content/faces/thumbnails128x128/66558.png
inflating: /content/faces/thumbnails128x128/66559.png
inflating: /content/faces/thumbnails128x128/66560.png
inflating: /content/faces/thumbnails128x128/66561.png
inflating: /content/faces/thumbnails128x128/66562.png
inflating: /content/faces/thumbnails128x128/66563.png
inflating: /content/faces/thumbnails128x128/66564.png
inflating: /content/faces/thumbnails128x128/66565.png
inflating: /content/faces/thumbnails128x128/66566.png
inflating: /content/faces/thumbnails128x128/66567.png
inflating: /content/faces/thumbnails128x128/66568.png
inflating: /content/faces/thumbnails128x128/66569.png
inflating: /content/faces/thumbnails128x128/66570.png
inflating: /content/faces/thumbnails128x128/66571.png
inflating: /content/faces/thumbnails128x128/66572.png
inflating: /content/faces/thumbnails128x128/66573.png
inflating: /content/faces/thumbnails128x128/66574.png
inflating: /content/faces/thumbnails128x128/66575.png
inflating: /content/faces/thumbnails128x128/66576.png
inflating: /content/faces/thumbnails128x128/66577.png
inflating: /content/faces/thumbnails128x128/66578.png
inflating: /content/faces/thumbnails128x128/66579.png
inflating: /content/faces/thumbnails128x128/66580.png
inflating: /content/faces/thumbnails128x128/66581.png
inflating: /content/faces/thumbnails128x128/66582.png
inflating: /content/faces/thumbnails128x128/66583.png
inflating: /content/faces/thumbnails128x128/66584.png
inflating: /content/faces/thumbnails128x128/66585.png
inflating: /content/faces/thumbnails128x128/66586.png
inflating: /content/faces/thumbnails128x128/66587.png
inflating: /content/faces/thumbnails128x128/66588.png
inflating: /content/faces/thumbnails128x128/66589.png
inflating: /content/faces/thumbnails128x128/66590.png
inflating: /content/faces/thumbnails128x128/66591.png
inflating: /content/faces/thumbnails128x128/66592.png
inflating: /content/faces/thumbnails128x128/66593.png
inflating: /content/faces/thumbnails128x128/66594.png
inflating: /content/faces/thumbnails128x128/66595.png
inflating: /content/faces/thumbnails128x128/66596.png
inflating: /content/faces/thumbnails128x128/66597.png
inflating: /content/faces/thumbnails128x128/66598.png
inflating: /content/faces/thumbnails128x128/66599.png
inflating: /content/faces/thumbnails128x128/66600.png
inflating: /content/faces/thumbnails128x128/66601.png
inflating: /content/faces/thumbnails128x128/66602.png
inflating: /content/faces/thumbnails128x128/66603.png
inflating: /content/faces/thumbnails128x128/66604.png
inflating: /content/faces/thumbnails128x128/66605.png
inflating: /content/faces/thumbnails128x128/66606.png
inflating: /content/faces/thumbnails128x128/66607.png
inflating: /content/faces/thumbnails128x128/66608.png
inflating: /content/faces/thumbnails128x128/66609.png
inflating: /content/faces/thumbnails128x128/66610.png
inflating: /content/faces/thumbnails128x128/66611.png
inflating: /content/faces/thumbnails128x128/66612.png
inflating: /content/faces/thumbnails128x128/66613.png
inflating: /content/faces/thumbnails128x128/66614.png
inflating: /content/faces/thumbnails128x128/66615.png
inflating: /content/faces/thumbnails128x128/66616.png
inflating: /content/faces/thumbnails128x128/66617.png
inflating: /content/faces/thumbnails128x128/66618.png
inflating: /content/faces/thumbnails128x128/66619.png
inflating: /content/faces/thumbnails128x128/66620.png
inflating: /content/faces/thumbnails128x128/66621.png
inflating: /content/faces/thumbnails128x128/66622.png
inflating: /content/faces/thumbnails128x128/66623.png
inflating: /content/faces/thumbnails128x128/66624.png
inflating: /content/faces/thumbnails128x128/66625.png
inflating: /content/faces/thumbnails128x128/66626.png
inflating: /content/faces/thumbnails128x128/66627.png
inflating: /content/faces/thumbnails128x128/66628.png
inflating: /content/faces/thumbnails128x128/66629.png
inflating: /content/faces/thumbnails128x128/66630.png
inflating: /content/faces/thumbnails128x128/66631.png
inflating: /content/faces/thumbnails128x128/66632.png
inflating: /content/faces/thumbnails128x128/66633.png
inflating: /content/faces/thumbnails128x128/66634.png
inflating: /content/faces/thumbnails128x128/66635.png
inflating: /content/faces/thumbnails128x128/66636.png
inflating: /content/faces/thumbnails128x128/66637.png
inflating: /content/faces/thumbnails128x128/66638.png
inflating: /content/faces/thumbnails128x128/66639.png
inflating: /content/faces/thumbnails128x128/66640.png
inflating: /content/faces/thumbnails128x128/66641.png
inflating: /content/faces/thumbnails128x128/66642.png
inflating: /content/faces/thumbnails128x128/66643.png
inflating: /content/faces/thumbnails128x128/66644.png
inflating: /content/faces/thumbnails128x128/66645.png
inflating: /content/faces/thumbnails128x128/66646.png
inflating: /content/faces/thumbnails128x128/66647.png
inflating: /content/faces/thumbnails128x128/66648.png
inflating: /content/faces/thumbnails128x128/66649.png
inflating: /content/faces/thumbnails128x128/66650.png
inflating: /content/faces/thumbnails128x128/66651.png
inflating: /content/faces/thumbnails128x128/66652.png
inflating: /content/faces/thumbnails128x128/66653.png
inflating: /content/faces/thumbnails128x128/66654.png
inflating: /content/faces/thumbnails128x128/66655.png
inflating: /content/faces/thumbnails128x128/66656.png
inflating: /content/faces/thumbnails128x128/66657.png
inflating: /content/faces/thumbnails128x128/66658.png
inflating: /content/faces/thumbnails128x128/66659.png
inflating: /content/faces/thumbnails128x128/66660.png
inflating: /content/faces/thumbnails128x128/66661.png
inflating: /content/faces/thumbnails128x128/66662.png
inflating: /content/faces/thumbnails128x128/66663.png
inflating: /content/faces/thumbnails128x128/66664.png
inflating: /content/faces/thumbnails128x128/66665.png
inflating: /content/faces/thumbnails128x128/66666.png
inflating: /content/faces/thumbnails128x128/66667.png
inflating: /content/faces/thumbnails128x128/66668.png
inflating: /content/faces/thumbnails128x128/66669.png
inflating: /content/faces/thumbnails128x128/66670.png
inflating: /content/faces/thumbnails128x128/66671.png
inflating: /content/faces/thumbnails128x128/66672.png
inflating: /content/faces/thumbnails128x128/66673.png
inflating: /content/faces/thumbnails128x128/66674.png
inflating: /content/faces/thumbnails128x128/66675.png
inflating: /content/faces/thumbnails128x128/66676.png
inflating: /content/faces/thumbnails128x128/66677.png
inflating: /content/faces/thumbnails128x128/66678.png
inflating: /content/faces/thumbnails128x128/66679.png
inflating: /content/faces/thumbnails128x128/66680.png
inflating: /content/faces/thumbnails128x128/66681.png
inflating: /content/faces/thumbnails128x128/66682.png
inflating: /content/faces/thumbnails128x128/66683.png
inflating: /content/faces/thumbnails128x128/66684.png
inflating: /content/faces/thumbnails128x128/66685.png
inflating: /content/faces/thumbnails128x128/66686.png
inflating: /content/faces/thumbnails128x128/66687.png
inflating: /content/faces/thumbnails128x128/66688.png
inflating: /content/faces/thumbnails128x128/66689.png
inflating: /content/faces/thumbnails128x128/66690.png
inflating: /content/faces/thumbnails128x128/66691.png
inflating: /content/faces/thumbnails128x128/66692.png
inflating: /content/faces/thumbnails128x128/66693.png
inflating: /content/faces/thumbnails128x128/66694.png
inflating: /content/faces/thumbnails128x128/66695.png
inflating: /content/faces/thumbnails128x128/66696.png
inflating: /content/faces/thumbnails128x128/66697.png
inflating: /content/faces/thumbnails128x128/66698.png
inflating: /content/faces/thumbnails128x128/66699.png
inflating: /content/faces/thumbnails128x128/66700.png
inflating: /content/faces/thumbnails128x128/66701.png
inflating: /content/faces/thumbnails128x128/66702.png
inflating: /content/faces/thumbnails128x128/66703.png
inflating: /content/faces/thumbnails128x128/66704.png
inflating: /content/faces/thumbnails128x128/66705.png
inflating: /content/faces/thumbnails128x128/66706.png
inflating: /content/faces/thumbnails128x128/66707.png
inflating: /content/faces/thumbnails128x128/66708.png
inflating: /content/faces/thumbnails128x128/66709.png
inflating: /content/faces/thumbnails128x128/66710.png
inflating: /content/faces/thumbnails128x128/66711.png
inflating: /content/faces/thumbnails128x128/66712.png
inflating: /content/faces/thumbnails128x128/66713.png
inflating: /content/faces/thumbnails128x128/66714.png
inflating: /content/faces/thumbnails128x128/66715.png
inflating: /content/faces/thumbnails128x128/66716.png
inflating: /content/faces/thumbnails128x128/66717.png
inflating: /content/faces/thumbnails128x128/66718.png
inflating: /content/faces/thumbnails128x128/66719.png
inflating: /content/faces/thumbnails128x128/66720.png
inflating: /content/faces/thumbnails128x128/66721.png
inflating: /content/faces/thumbnails128x128/66722.png
inflating: /content/faces/thumbnails128x128/66723.png
inflating: /content/faces/thumbnails128x128/66724.png
inflating: /content/faces/thumbnails128x128/66725.png
inflating: /content/faces/thumbnails128x128/66726.png
inflating: /content/faces/thumbnails128x128/66727.png
inflating: /content/faces/thumbnails128x128/66728.png
inflating: /content/faces/thumbnails128x128/66729.png
inflating: /content/faces/thumbnails128x128/66730.png
inflating: /content/faces/thumbnails128x128/66731.png
inflating: /content/faces/thumbnails128x128/66732.png
inflating: /content/faces/thumbnails128x128/66733.png
inflating: /content/faces/thumbnails128x128/66734.png
inflating: /content/faces/thumbnails128x128/66735.png
inflating: /content/faces/thumbnails128x128/66736.png
inflating: /content/faces/thumbnails128x128/66737.png
inflating: /content/faces/thumbnails128x128/66738.png
inflating: /content/faces/thumbnails128x128/66739.png
inflating: /content/faces/thumbnails128x128/66740.png
inflating: /content/faces/thumbnails128x128/66741.png
inflating: /content/faces/thumbnails128x128/66742.png
inflating: /content/faces/thumbnails128x128/66743.png
inflating: /content/faces/thumbnails128x128/66744.png
inflating: /content/faces/thumbnails128x128/66745.png
inflating: /content/faces/thumbnails128x128/66746.png
inflating: /content/faces/thumbnails128x128/66747.png
inflating: /content/faces/thumbnails128x128/66748.png
inflating: /content/faces/thumbnails128x128/66749.png
inflating: /content/faces/thumbnails128x128/66750.png
inflating: /content/faces/thumbnails128x128/66751.png
inflating: /content/faces/thumbnails128x128/66752.png
inflating: /content/faces/thumbnails128x128/66753.png
inflating: /content/faces/thumbnails128x128/66754.png
inflating: /content/faces/thumbnails128x128/66755.png
inflating: /content/faces/thumbnails128x128/66756.png
inflating: /content/faces/thumbnails128x128/66757.png
inflating: /content/faces/thumbnails128x128/66758.png
inflating: /content/faces/thumbnails128x128/66759.png
inflating: /content/faces/thumbnails128x128/66760.png
inflating: /content/faces/thumbnails128x128/66761.png
inflating: /content/faces/thumbnails128x128/66762.png
inflating: /content/faces/thumbnails128x128/66763.png
inflating: /content/faces/thumbnails128x128/66764.png
inflating: /content/faces/thumbnails128x128/66765.png
inflating: /content/faces/thumbnails128x128/66766.png
inflating: /content/faces/thumbnails128x128/66767.png
inflating: /content/faces/thumbnails128x128/66768.png
inflating: /content/faces/thumbnails128x128/66769.png
inflating: /content/faces/thumbnails128x128/66770.png
inflating: /content/faces/thumbnails128x128/66771.png
inflating: /content/faces/thumbnails128x128/66772.png
inflating: /content/faces/thumbnails128x128/66773.png
inflating: /content/faces/thumbnails128x128/66774.png
inflating: /content/faces/thumbnails128x128/66775.png
inflating: /content/faces/thumbnails128x128/66776.png
inflating: /content/faces/thumbnails128x128/66777.png
inflating: /content/faces/thumbnails128x128/66778.png
inflating: /content/faces/thumbnails128x128/66779.png
inflating: /content/faces/thumbnails128x128/66780.png
inflating: /content/faces/thumbnails128x128/66781.png
inflating: /content/faces/thumbnails128x128/66782.png
inflating: /content/faces/thumbnails128x128/66783.png
inflating: /content/faces/thumbnails128x128/66784.png
inflating: /content/faces/thumbnails128x128/66785.png
inflating: /content/faces/thumbnails128x128/66786.png
inflating: /content/faces/thumbnails128x128/66787.png
inflating: /content/faces/thumbnails128x128/66788.png
inflating: /content/faces/thumbnails128x128/66789.png
inflating: /content/faces/thumbnails128x128/66790.png
inflating: /content/faces/thumbnails128x128/66791.png
inflating: /content/faces/thumbnails128x128/66792.png
inflating: /content/faces/thumbnails128x128/66793.png
inflating: /content/faces/thumbnails128x128/66794.png
inflating: /content/faces/thumbnails128x128/66795.png
inflating: /content/faces/thumbnails128x128/66796.png
inflating: /content/faces/thumbnails128x128/66797.png
inflating: /content/faces/thumbnails128x128/66798.png
inflating: /content/faces/thumbnails128x128/66799.png
inflating: /content/faces/thumbnails128x128/66800.png
inflating: /content/faces/thumbnails128x128/66801.png
inflating: /content/faces/thumbnails128x128/66802.png
inflating: /content/faces/thumbnails128x128/66803.png
inflating: /content/faces/thumbnails128x128/66804.png
inflating: /content/faces/thumbnails128x128/66805.png
inflating: /content/faces/thumbnails128x128/66806.png
inflating: /content/faces/thumbnails128x128/66807.png
inflating: /content/faces/thumbnails128x128/66808.png
inflating: /content/faces/thumbnails128x128/66809.png
inflating: /content/faces/thumbnails128x128/66810.png
inflating: /content/faces/thumbnails128x128/66811.png
inflating: /content/faces/thumbnails128x128/66812.png
inflating: /content/faces/thumbnails128x128/66813.png
inflating: /content/faces/thumbnails128x128/66814.png
inflating: /content/faces/thumbnails128x128/66815.png
inflating: /content/faces/thumbnails128x128/66816.png
inflating: /content/faces/thumbnails128x128/66817.png
inflating: /content/faces/thumbnails128x128/66818.png
inflating: /content/faces/thumbnails128x128/66819.png
inflating: /content/faces/thumbnails128x128/66820.png
inflating: /content/faces/thumbnails128x128/66821.png
inflating: /content/faces/thumbnails128x128/66822.png
inflating: /content/faces/thumbnails128x128/66823.png
inflating: /content/faces/thumbnails128x128/66824.png
inflating: /content/faces/thumbnails128x128/66825.png
inflating: /content/faces/thumbnails128x128/66826.png
inflating: /content/faces/thumbnails128x128/66827.png
inflating: /content/faces/thumbnails128x128/66828.png
inflating: /content/faces/thumbnails128x128/66829.png
inflating: /content/faces/thumbnails128x128/66830.png
inflating: /content/faces/thumbnails128x128/66831.png
inflating: /content/faces/thumbnails128x128/66832.png
inflating: /content/faces/thumbnails128x128/66833.png
inflating: /content/faces/thumbnails128x128/66834.png
inflating: /content/faces/thumbnails128x128/66835.png
inflating: /content/faces/thumbnails128x128/66836.png
inflating: /content/faces/thumbnails128x128/66837.png
inflating: /content/faces/thumbnails128x128/66838.png
inflating: /content/faces/thumbnails128x128/66839.png
inflating: /content/faces/thumbnails128x128/66840.png
inflating: /content/faces/thumbnails128x128/66841.png
inflating: /content/faces/thumbnails128x128/66842.png
inflating: /content/faces/thumbnails128x128/66843.png
inflating: /content/faces/thumbnails128x128/66844.png
inflating: /content/faces/thumbnails128x128/66845.png
inflating: /content/faces/thumbnails128x128/66846.png
inflating: /content/faces/thumbnails128x128/66847.png
inflating: /content/faces/thumbnails128x128/66848.png
inflating: /content/faces/thumbnails128x128/66849.png
inflating: /content/faces/thumbnails128x128/66850.png
inflating: /content/faces/thumbnails128x128/66851.png
inflating: /content/faces/thumbnails128x128/66852.png
inflating: /content/faces/thumbnails128x128/66853.png
inflating: /content/faces/thumbnails128x128/66854.png
inflating: /content/faces/thumbnails128x128/66855.png
inflating: /content/faces/thumbnails128x128/66856.png
inflating: /content/faces/thumbnails128x128/66857.png
inflating: /content/faces/thumbnails128x128/66858.png
inflating: /content/faces/thumbnails128x128/66859.png
inflating: /content/faces/thumbnails128x128/66860.png
inflating: /content/faces/thumbnails128x128/66861.png
inflating: /content/faces/thumbnails128x128/66862.png
inflating: /content/faces/thumbnails128x128/66863.png
inflating: /content/faces/thumbnails128x128/66864.png
inflating: /content/faces/thumbnails128x128/66865.png
inflating: /content/faces/thumbnails128x128/66866.png
inflating: /content/faces/thumbnails128x128/66867.png
inflating: /content/faces/thumbnails128x128/66868.png
inflating: /content/faces/thumbnails128x128/66869.png
inflating: /content/faces/thumbnails128x128/66870.png
inflating: /content/faces/thumbnails128x128/66871.png
inflating: /content/faces/thumbnails128x128/66872.png
inflating: /content/faces/thumbnails128x128/66873.png
inflating: /content/faces/thumbnails128x128/66874.png
inflating: /content/faces/thumbnails128x128/66875.png
inflating: /content/faces/thumbnails128x128/66876.png
inflating: /content/faces/thumbnails128x128/66877.png
inflating: /content/faces/thumbnails128x128/66878.png
inflating: /content/faces/thumbnails128x128/66879.png
inflating: /content/faces/thumbnails128x128/66880.png
inflating: /content/faces/thumbnails128x128/66881.png
inflating: /content/faces/thumbnails128x128/66882.png
inflating: /content/faces/thumbnails128x128/66883.png
inflating: /content/faces/thumbnails128x128/66884.png
inflating: /content/faces/thumbnails128x128/66885.png
inflating: /content/faces/thumbnails128x128/66886.png
inflating: /content/faces/thumbnails128x128/66887.png
inflating: /content/faces/thumbnails128x128/66888.png
inflating: /content/faces/thumbnails128x128/66889.png
inflating: /content/faces/thumbnails128x128/66890.png
inflating: /content/faces/thumbnails128x128/66891.png
inflating: /content/faces/thumbnails128x128/66892.png
inflating: /content/faces/thumbnails128x128/66893.png
inflating: /content/faces/thumbnails128x128/66894.png
inflating: /content/faces/thumbnails128x128/66895.png
inflating: /content/faces/thumbnails128x128/66896.png
inflating: /content/faces/thumbnails128x128/66897.png
inflating: /content/faces/thumbnails128x128/66898.png
inflating: /content/faces/thumbnails128x128/66899.png
inflating: /content/faces/thumbnails128x128/66900.png
inflating: /content/faces/thumbnails128x128/66901.png
inflating: /content/faces/thumbnails128x128/66902.png
inflating: /content/faces/thumbnails128x128/66903.png
inflating: /content/faces/thumbnails128x128/66904.png
inflating: /content/faces/thumbnails128x128/66905.png
inflating: /content/faces/thumbnails128x128/66906.png
inflating: /content/faces/thumbnails128x128/66907.png
inflating: /content/faces/thumbnails128x128/66908.png
inflating: /content/faces/thumbnails128x128/66909.png
inflating: /content/faces/thumbnails128x128/66910.png
inflating: /content/faces/thumbnails128x128/66911.png
inflating: /content/faces/thumbnails128x128/66912.png
inflating: /content/faces/thumbnails128x128/66913.png
inflating: /content/faces/thumbnails128x128/66914.png
inflating: /content/faces/thumbnails128x128/66915.png
inflating: /content/faces/thumbnails128x128/66916.png
inflating: /content/faces/thumbnails128x128/66917.png
inflating: /content/faces/thumbnails128x128/66918.png
inflating: /content/faces/thumbnails128x128/66919.png
inflating: /content/faces/thumbnails128x128/66920.png
inflating: /content/faces/thumbnails128x128/66921.png
inflating: /content/faces/thumbnails128x128/66922.png
inflating: /content/faces/thumbnails128x128/66923.png
inflating: /content/faces/thumbnails128x128/66924.png
inflating: /content/faces/thumbnails128x128/66925.png
inflating: /content/faces/thumbnails128x128/66926.png
inflating: /content/faces/thumbnails128x128/66927.png
inflating: /content/faces/thumbnails128x128/66928.png
inflating: /content/faces/thumbnails128x128/66929.png
inflating: /content/faces/thumbnails128x128/66930.png
inflating: /content/faces/thumbnails128x128/66931.png
inflating: /content/faces/thumbnails128x128/66932.png
inflating: /content/faces/thumbnails128x128/66933.png
inflating: /content/faces/thumbnails128x128/66934.png
inflating: /content/faces/thumbnails128x128/66935.png
inflating: /content/faces/thumbnails128x128/66936.png
inflating: /content/faces/thumbnails128x128/66937.png
inflating: /content/faces/thumbnails128x128/66938.png
inflating: /content/faces/thumbnails128x128/66939.png
inflating: /content/faces/thumbnails128x128/66940.png
inflating: /content/faces/thumbnails128x128/66941.png
inflating: /content/faces/thumbnails128x128/66942.png
inflating: /content/faces/thumbnails128x128/66943.png
inflating: /content/faces/thumbnails128x128/66944.png
inflating: /content/faces/thumbnails128x128/66945.png
inflating: /content/faces/thumbnails128x128/66946.png
inflating: /content/faces/thumbnails128x128/66947.png
inflating: /content/faces/thumbnails128x128/66948.png
inflating: /content/faces/thumbnails128x128/66949.png
inflating: /content/faces/thumbnails128x128/66950.png
inflating: /content/faces/thumbnails128x128/66951.png
inflating: /content/faces/thumbnails128x128/66952.png
inflating: /content/faces/thumbnails128x128/66953.png
inflating: /content/faces/thumbnails128x128/66954.png
inflating: /content/faces/thumbnails128x128/66955.png
inflating: /content/faces/thumbnails128x128/66956.png
inflating: /content/faces/thumbnails128x128/66957.png
inflating: /content/faces/thumbnails128x128/66958.png
inflating: /content/faces/thumbnails128x128/66959.png
inflating: /content/faces/thumbnails128x128/66960.png
inflating: /content/faces/thumbnails128x128/66961.png
inflating: /content/faces/thumbnails128x128/66962.png
inflating: /content/faces/thumbnails128x128/66963.png
inflating: /content/faces/thumbnails128x128/66964.png
inflating: /content/faces/thumbnails128x128/66965.png
inflating: /content/faces/thumbnails128x128/66966.png
inflating: /content/faces/thumbnails128x128/66967.png
inflating: /content/faces/thumbnails128x128/66968.png
inflating: /content/faces/thumbnails128x128/66969.png
inflating: /content/faces/thumbnails128x128/66970.png
inflating: /content/faces/thumbnails128x128/66971.png
inflating: /content/faces/thumbnails128x128/66972.png
inflating: /content/faces/thumbnails128x128/66973.png
inflating: /content/faces/thumbnails128x128/66974.png
inflating: /content/faces/thumbnails128x128/66975.png
inflating: /content/faces/thumbnails128x128/66976.png
inflating: /content/faces/thumbnails128x128/66977.png
inflating: /content/faces/thumbnails128x128/66978.png
inflating: /content/faces/thumbnails128x128/66979.png
inflating: /content/faces/thumbnails128x128/66980.png
inflating: /content/faces/thumbnails128x128/66981.png
inflating: /content/faces/thumbnails128x128/66982.png
inflating: /content/faces/thumbnails128x128/66983.png
inflating: /content/faces/thumbnails128x128/66984.png
inflating: /content/faces/thumbnails128x128/66985.png
inflating: /content/faces/thumbnails128x128/66986.png
inflating: /content/faces/thumbnails128x128/66987.png
inflating: /content/faces/thumbnails128x128/66988.png
inflating: /content/faces/thumbnails128x128/66989.png
inflating: /content/faces/thumbnails128x128/66990.png
inflating: /content/faces/thumbnails128x128/66991.png
inflating: /content/faces/thumbnails128x128/66992.png
inflating: /content/faces/thumbnails128x128/66993.png
inflating: /content/faces/thumbnails128x128/66994.png
inflating: /content/faces/thumbnails128x128/66995.png
inflating: /content/faces/thumbnails128x128/66996.png
inflating: /content/faces/thumbnails128x128/66997.png
inflating: /content/faces/thumbnails128x128/66998.png
inflating: /content/faces/thumbnails128x128/66999.png
inflating: /content/faces/thumbnails128x128/67000.png
inflating: /content/faces/thumbnails128x128/67001.png
inflating: /content/faces/thumbnails128x128/67002.png
inflating: /content/faces/thumbnails128x128/67003.png
inflating: /content/faces/thumbnails128x128/67004.png
inflating: /content/faces/thumbnails128x128/67005.png
inflating: /content/faces/thumbnails128x128/67006.png
inflating: /content/faces/thumbnails128x128/67007.png
inflating: /content/faces/thumbnails128x128/67008.png
inflating: /content/faces/thumbnails128x128/67009.png
inflating: /content/faces/thumbnails128x128/67010.png
inflating: /content/faces/thumbnails128x128/67011.png
inflating: /content/faces/thumbnails128x128/67012.png
inflating: /content/faces/thumbnails128x128/67013.png
inflating: /content/faces/thumbnails128x128/67014.png
inflating: /content/faces/thumbnails128x128/67015.png
inflating: /content/faces/thumbnails128x128/67016.png
inflating: /content/faces/thumbnails128x128/67017.png
inflating: /content/faces/thumbnails128x128/67018.png
inflating: /content/faces/thumbnails128x128/67019.png
inflating: /content/faces/thumbnails128x128/67020.png
inflating: /content/faces/thumbnails128x128/67021.png
inflating: /content/faces/thumbnails128x128/67022.png
inflating: /content/faces/thumbnails128x128/67023.png
inflating: /content/faces/thumbnails128x128/67024.png
inflating: /content/faces/thumbnails128x128/67025.png
inflating: /content/faces/thumbnails128x128/67026.png
inflating: /content/faces/thumbnails128x128/67027.png
inflating: /content/faces/thumbnails128x128/67028.png
inflating: /content/faces/thumbnails128x128/67029.png
inflating: /content/faces/thumbnails128x128/67030.png
inflating: /content/faces/thumbnails128x128/67031.png
inflating: /content/faces/thumbnails128x128/67032.png
inflating: /content/faces/thumbnails128x128/67033.png
inflating: /content/faces/thumbnails128x128/67034.png
inflating: /content/faces/thumbnails128x128/67035.png
inflating: /content/faces/thumbnails128x128/67036.png
inflating: /content/faces/thumbnails128x128/67037.png
inflating: /content/faces/thumbnails128x128/67038.png
inflating: /content/faces/thumbnails128x128/67039.png
inflating: /content/faces/thumbnails128x128/67040.png
inflating: /content/faces/thumbnails128x128/67041.png
inflating: /content/faces/thumbnails128x128/67042.png
inflating: /content/faces/thumbnails128x128/67043.png
inflating: /content/faces/thumbnails128x128/67044.png
inflating: /content/faces/thumbnails128x128/67045.png
inflating: /content/faces/thumbnails128x128/67046.png
inflating: /content/faces/thumbnails128x128/67047.png
inflating: /content/faces/thumbnails128x128/67048.png
inflating: /content/faces/thumbnails128x128/67049.png
inflating: /content/faces/thumbnails128x128/67050.png
inflating: /content/faces/thumbnails128x128/67051.png
inflating: /content/faces/thumbnails128x128/67052.png
inflating: /content/faces/thumbnails128x128/67053.png
inflating: /content/faces/thumbnails128x128/67054.png
inflating: /content/faces/thumbnails128x128/67055.png
inflating: /content/faces/thumbnails128x128/67056.png
inflating: /content/faces/thumbnails128x128/67057.png
inflating: /content/faces/thumbnails128x128/67058.png
inflating: /content/faces/thumbnails128x128/67059.png
inflating: /content/faces/thumbnails128x128/67060.png
inflating: /content/faces/thumbnails128x128/67061.png
inflating: /content/faces/thumbnails128x128/67062.png
inflating: /content/faces/thumbnails128x128/67063.png
inflating: /content/faces/thumbnails128x128/67064.png
inflating: /content/faces/thumbnails128x128/67065.png
inflating: /content/faces/thumbnails128x128/67066.png
inflating: /content/faces/thumbnails128x128/67067.png
inflating: /content/faces/thumbnails128x128/67068.png
inflating: /content/faces/thumbnails128x128/67069.png
inflating: /content/faces/thumbnails128x128/67070.png
inflating: /content/faces/thumbnails128x128/67071.png
inflating: /content/faces/thumbnails128x128/67072.png
inflating: /content/faces/thumbnails128x128/67073.png
inflating: /content/faces/thumbnails128x128/67074.png
inflating: /content/faces/thumbnails128x128/67075.png
inflating: /content/faces/thumbnails128x128/67076.png
inflating: /content/faces/thumbnails128x128/67077.png
inflating: /content/faces/thumbnails128x128/67078.png
inflating: /content/faces/thumbnails128x128/67079.png
inflating: /content/faces/thumbnails128x128/67080.png
inflating: /content/faces/thumbnails128x128/67081.png
inflating: /content/faces/thumbnails128x128/67082.png
inflating: /content/faces/thumbnails128x128/67083.png
inflating: /content/faces/thumbnails128x128/67084.png
inflating: /content/faces/thumbnails128x128/67085.png
inflating: /content/faces/thumbnails128x128/67086.png
inflating: /content/faces/thumbnails128x128/67087.png
inflating: /content/faces/thumbnails128x128/67088.png
inflating: /content/faces/thumbnails128x128/67089.png
inflating: /content/faces/thumbnails128x128/67090.png
inflating: /content/faces/thumbnails128x128/67091.png
inflating: /content/faces/thumbnails128x128/67092.png
inflating: /content/faces/thumbnails128x128/67093.png
inflating: /content/faces/thumbnails128x128/67094.png
inflating: /content/faces/thumbnails128x128/67095.png
inflating: /content/faces/thumbnails128x128/67096.png
inflating: /content/faces/thumbnails128x128/67097.png
inflating: /content/faces/thumbnails128x128/67098.png
inflating: /content/faces/thumbnails128x128/67099.png
inflating: /content/faces/thumbnails128x128/67100.png
inflating: /content/faces/thumbnails128x128/67101.png
inflating: /content/faces/thumbnails128x128/67102.png
inflating: /content/faces/thumbnails128x128/67103.png
inflating: /content/faces/thumbnails128x128/67104.png
inflating: /content/faces/thumbnails128x128/67105.png
inflating: /content/faces/thumbnails128x128/67106.png
inflating: /content/faces/thumbnails128x128/67107.png
inflating: /content/faces/thumbnails128x128/67108.png
inflating: /content/faces/thumbnails128x128/67109.png
inflating: /content/faces/thumbnails128x128/67110.png
inflating: /content/faces/thumbnails128x128/67111.png
inflating: /content/faces/thumbnails128x128/67112.png
inflating: /content/faces/thumbnails128x128/67113.png
inflating: /content/faces/thumbnails128x128/67114.png
inflating: /content/faces/thumbnails128x128/67115.png
inflating: /content/faces/thumbnails128x128/67116.png
inflating: /content/faces/thumbnails128x128/67117.png
inflating: /content/faces/thumbnails128x128/67118.png
inflating: /content/faces/thumbnails128x128/67119.png
inflating: /content/faces/thumbnails128x128/67120.png
inflating: /content/faces/thumbnails128x128/67121.png
inflating: /content/faces/thumbnails128x128/67122.png
inflating: /content/faces/thumbnails128x128/67123.png
inflating: /content/faces/thumbnails128x128/67124.png
inflating: /content/faces/thumbnails128x128/67125.png
inflating: /content/faces/thumbnails128x128/67126.png
inflating: /content/faces/thumbnails128x128/67127.png
inflating: /content/faces/thumbnails128x128/67128.png
inflating: /content/faces/thumbnails128x128/67129.png
inflating: /content/faces/thumbnails128x128/67130.png
inflating: /content/faces/thumbnails128x128/67131.png
inflating: /content/faces/thumbnails128x128/67132.png
inflating: /content/faces/thumbnails128x128/67133.png
inflating: /content/faces/thumbnails128x128/67134.png
inflating: /content/faces/thumbnails128x128/67135.png
inflating: /content/faces/thumbnails128x128/67136.png
inflating: /content/faces/thumbnails128x128/67137.png
inflating: /content/faces/thumbnails128x128/67138.png
inflating: /content/faces/thumbnails128x128/67139.png
inflating: /content/faces/thumbnails128x128/67140.png
inflating: /content/faces/thumbnails128x128/67141.png
inflating: /content/faces/thumbnails128x128/67142.png
inflating: /content/faces/thumbnails128x128/67143.png
inflating: /content/faces/thumbnails128x128/67144.png
inflating: /content/faces/thumbnails128x128/67145.png
inflating: /content/faces/thumbnails128x128/67146.png
inflating: /content/faces/thumbnails128x128/67147.png
inflating: /content/faces/thumbnails128x128/67148.png
inflating: /content/faces/thumbnails128x128/67149.png
inflating: /content/faces/thumbnails128x128/67150.png
inflating: /content/faces/thumbnails128x128/67151.png
inflating: /content/faces/thumbnails128x128/67152.png
inflating: /content/faces/thumbnails128x128/67153.png
inflating: /content/faces/thumbnails128x128/67154.png
inflating: /content/faces/thumbnails128x128/67155.png
inflating: /content/faces/thumbnails128x128/67156.png
inflating: /content/faces/thumbnails128x128/67157.png
inflating: /content/faces/thumbnails128x128/67158.png
inflating: /content/faces/thumbnails128x128/67159.png
inflating: /content/faces/thumbnails128x128/67160.png
inflating: /content/faces/thumbnails128x128/67161.png
inflating: /content/faces/thumbnails128x128/67162.png
inflating: /content/faces/thumbnails128x128/67163.png
inflating: /content/faces/thumbnails128x128/67164.png
inflating: /content/faces/thumbnails128x128/67165.png
inflating: /content/faces/thumbnails128x128/67166.png
inflating: /content/faces/thumbnails128x128/67167.png
inflating: /content/faces/thumbnails128x128/67168.png
inflating: /content/faces/thumbnails128x128/67169.png
inflating: /content/faces/thumbnails128x128/67170.png
inflating: /content/faces/thumbnails128x128/67171.png
inflating: /content/faces/thumbnails128x128/67172.png
inflating: /content/faces/thumbnails128x128/67173.png
inflating: /content/faces/thumbnails128x128/67174.png
inflating: /content/faces/thumbnails128x128/67175.png
inflating: /content/faces/thumbnails128x128/67176.png
inflating: /content/faces/thumbnails128x128/67177.png
inflating: /content/faces/thumbnails128x128/67178.png
inflating: /content/faces/thumbnails128x128/67179.png
inflating: /content/faces/thumbnails128x128/67180.png
inflating: /content/faces/thumbnails128x128/67181.png
inflating: /content/faces/thumbnails128x128/67182.png
inflating: /content/faces/thumbnails128x128/68369.png
inflating: /content/faces/thumbnails128x128/68370.png
inflating: /content/faces/thumbnails128x128/68371.png
inflating: /content/faces/thumbnails128x128/68372.png
inflating: /content/faces/thumbnails128x128/68373.png
inflating: /content/faces/thumbnails128x128/68374.png
inflating: /content/faces/thumbnails128x128/68375.png
inflating: /content/faces/thumbnails128x128/68376.png
inflating: /content/faces/thumbnails128x128/68377.png
inflating: /content/faces/thumbnails128x128/68378.png
inflating: /content/faces/thumbnails128x128/68379.png
inflating: /content/faces/thumbnails128x128/68380.png
inflating: /content/faces/thumbnails128x128/68381.png
inflating: /content/faces/thumbnails128x128/68382.png
inflating: /content/faces/thumbnails128x128/68383.png
inflating: /content/faces/thumbnails128x128/68384.png
inflating: /content/faces/thumbnails128x128/68385.png
inflating: /content/faces/thumbnails128x128/68386.png
inflating: /content/faces/thumbnails128x128/68387.png
inflating: /content/faces/thumbnails128x128/68388.png
inflating: /content/faces/thumbnails128x128/68389.png
inflating: /content/faces/thumbnails128x128/68390.png
inflating: /content/faces/thumbnails128x128/68391.png
inflating: /content/faces/thumbnails128x128/68392.png
inflating: /content/faces/thumbnails128x128/68393.png
inflating: /content/faces/thumbnails128x128/68394.png
inflating: /content/faces/thumbnails128x128/68395.png
inflating: /content/faces/thumbnails128x128/68396.png
inflating: /content/faces/thumbnails128x128/68397.png
inflating: /content/faces/thumbnails128x128/68398.png
inflating: /content/faces/thumbnails128x128/68399.png
inflating: /content/faces/thumbnails128x128/68400.png
inflating: /content/faces/thumbnails128x128/68401.png
inflating: /content/faces/thumbnails128x128/68402.png
inflating: /content/faces/thumbnails128x128/68403.png
inflating: /content/faces/thumbnails128x128/68404.png
inflating: /content/faces/thumbnails128x128/68405.png
inflating: /content/faces/thumbnails128x128/68406.png
inflating: /content/faces/thumbnails128x128/68407.png
inflating: /content/faces/thumbnails128x128/68408.png
inflating: /content/faces/thumbnails128x128/68409.png
inflating: /content/faces/thumbnails128x128/68410.png
inflating: /content/faces/thumbnails128x128/68411.png
inflating: /content/faces/thumbnails128x128/68412.png
inflating: /content/faces/thumbnails128x128/68413.png
inflating: /content/faces/thumbnails128x128/68414.png
inflating: /content/faces/thumbnails128x128/68415.png
inflating: /content/faces/thumbnails128x128/68416.png
inflating: /content/faces/thumbnails128x128/68417.png
inflating: /content/faces/thumbnails128x128/68418.png
inflating: /content/faces/thumbnails128x128/68419.png
inflating: /content/faces/thumbnails128x128/68420.png
inflating: /content/faces/thumbnails128x128/68421.png
inflating: /content/faces/thumbnails128x128/68422.png
inflating: /content/faces/thumbnails128x128/68423.png
inflating: /content/faces/thumbnails128x128/68424.png
inflating: /content/faces/thumbnails128x128/68425.png
inflating: /content/faces/thumbnails128x128/68426.png
inflating: /content/faces/thumbnails128x128/68427.png
inflating: /content/faces/thumbnails128x128/68428.png
inflating: /content/faces/thumbnails128x128/68429.png
inflating: /content/faces/thumbnails128x128/68430.png
inflating: /content/faces/thumbnails128x128/68431.png
inflating: /content/faces/thumbnails128x128/68432.png
inflating: /content/faces/thumbnails128x128/68433.png
inflating: /content/faces/thumbnails128x128/68434.png
inflating: /content/faces/thumbnails128x128/68435.png
inflating: /content/faces/thumbnails128x128/68436.png
inflating: /content/faces/thumbnails128x128/68437.png
inflating: /content/faces/thumbnails128x128/68438.png
inflating: /content/faces/thumbnails128x128/68439.png
inflating: /content/faces/thumbnails128x128/68440.png
inflating: /content/faces/thumbnails128x128/68441.png
inflating: /content/faces/thumbnails128x128/68442.png
inflating: /content/faces/thumbnails128x128/68443.png
inflating: /content/faces/thumbnails128x128/68444.png
inflating: /content/faces/thumbnails128x128/68445.png
inflating: /content/faces/thumbnails128x128/68446.png
inflating: /content/faces/thumbnails128x128/68447.png
inflating: /content/faces/thumbnails128x128/68448.png
inflating: /content/faces/thumbnails128x128/68449.png
inflating: /content/faces/thumbnails128x128/68450.png
inflating: /content/faces/thumbnails128x128/68451.png
inflating: /content/faces/thumbnails128x128/68452.png
inflating: /content/faces/thumbnails128x128/68453.png
inflating: /content/faces/thumbnails128x128/68454.png
inflating: /content/faces/thumbnails128x128/68455.png
inflating: /content/faces/thumbnails128x128/68456.png
inflating: /content/faces/thumbnails128x128/68457.png
inflating: /content/faces/thumbnails128x128/68458.png
inflating: /content/faces/thumbnails128x128/68459.png
inflating: /content/faces/thumbnails128x128/68460.png
inflating: /content/faces/thumbnails128x128/68461.png
inflating: /content/faces/thumbnails128x128/68462.png
inflating: /content/faces/thumbnails128x128/68463.png
inflating: /content/faces/thumbnails128x128/68464.png
inflating: /content/faces/thumbnails128x128/68465.png
inflating: /content/faces/thumbnails128x128/68466.png
inflating: /content/faces/thumbnails128x128/68467.png
inflating: /content/faces/thumbnails128x128/68468.png
inflating: /content/faces/thumbnails128x128/68469.png
inflating: /content/faces/thumbnails128x128/68470.png
inflating: /content/faces/thumbnails128x128/68471.png
inflating: /content/faces/thumbnails128x128/68472.png
inflating: /content/faces/thumbnails128x128/68473.png
inflating: /content/faces/thumbnails128x128/68474.png
inflating: /content/faces/thumbnails128x128/68475.png
inflating: /content/faces/thumbnails128x128/68476.png
inflating: /content/faces/thumbnails128x128/68477.png
inflating: /content/faces/thumbnails128x128/68478.png
inflating: /content/faces/thumbnails128x128/68479.png
inflating: /content/faces/thumbnails128x128/68480.png
inflating: /content/faces/thumbnails128x128/68481.png
inflating: /content/faces/thumbnails128x128/68482.png
inflating: /content/faces/thumbnails128x128/68483.png
inflating: /content/faces/thumbnails128x128/68484.png
inflating: /content/faces/thumbnails128x128/68485.png
inflating: /content/faces/thumbnails128x128/68486.png
inflating: /content/faces/thumbnails128x128/68487.png
inflating: /content/faces/thumbnails128x128/68488.png
inflating: /content/faces/thumbnails128x128/68489.png
inflating: /content/faces/thumbnails128x128/68490.png
inflating: /content/faces/thumbnails128x128/68491.png
inflating: /content/faces/thumbnails128x128/68492.png
inflating: /content/faces/thumbnails128x128/68493.png
inflating: /content/faces/thumbnails128x128/68494.png
inflating: /content/faces/thumbnails128x128/68495.png
inflating: /content/faces/thumbnails128x128/68496.png
inflating: /content/faces/thumbnails128x128/68497.png
inflating: /content/faces/thumbnails128x128/68498.png
inflating: /content/faces/thumbnails128x128/68499.png
inflating: /content/faces/thumbnails128x128/68500.png
inflating: /content/faces/thumbnails128x128/68501.png
inflating: /content/faces/thumbnails128x128/68502.png
inflating: /content/faces/thumbnails128x128/68503.png
inflating: /content/faces/thumbnails128x128/68504.png
inflating: /content/faces/thumbnails128x128/68505.png
inflating: /content/faces/thumbnails128x128/68506.png
inflating: /content/faces/thumbnails128x128/68507.png
inflating: /content/faces/thumbnails128x128/68508.png
inflating: /content/faces/thumbnails128x128/68509.png
inflating: /content/faces/thumbnails128x128/68510.png
inflating: /content/faces/thumbnails128x128/68511.png
inflating: /content/faces/thumbnails128x128/68512.png
inflating: /content/faces/thumbnails128x128/68513.png
inflating: /content/faces/thumbnails128x128/68514.png
inflating: /content/faces/thumbnails128x128/68515.png
inflating: /content/faces/thumbnails128x128/68516.png
inflating: /content/faces/thumbnails128x128/68517.png
inflating: /content/faces/thumbnails128x128/68518.png
inflating: /content/faces/thumbnails128x128/68519.png
inflating: /content/faces/thumbnails128x128/68520.png
inflating: /content/faces/thumbnails128x128/68521.png
inflating: /content/faces/thumbnails128x128/68522.png
inflating: /content/faces/thumbnails128x128/68523.png
inflating: /content/faces/thumbnails128x128/68524.png
inflating: /content/faces/thumbnails128x128/68525.png
inflating: /content/faces/thumbnails128x128/68526.png
inflating: /content/faces/thumbnails128x128/68527.png
inflating: /content/faces/thumbnails128x128/68528.png
inflating: /content/faces/thumbnails128x128/68529.png
inflating: /content/faces/thumbnails128x128/68530.png
inflating: /content/faces/thumbnails128x128/68531.png
inflating: /content/faces/thumbnails128x128/68532.png
inflating: /content/faces/thumbnails128x128/68533.png
inflating: /content/faces/thumbnails128x128/68534.png
inflating: /content/faces/thumbnails128x128/68535.png
inflating: /content/faces/thumbnails128x128/68536.png
inflating: /content/faces/thumbnails128x128/68537.png
inflating: /content/faces/thumbnails128x128/68538.png
inflating: /content/faces/thumbnails128x128/68539.png
inflating: /content/faces/thumbnails128x128/68540.png
inflating: /content/faces/thumbnails128x128/68541.png
inflating: /content/faces/thumbnails128x128/68542.png
inflating: /content/faces/thumbnails128x128/68543.png
inflating: /content/faces/thumbnails128x128/68544.png
inflating: /content/faces/thumbnails128x128/68545.png
inflating: /content/faces/thumbnails128x128/68546.png
inflating: /content/faces/thumbnails128x128/68547.png
inflating: /content/faces/thumbnails128x128/68548.png
inflating: /content/faces/thumbnails128x128/68549.png
inflating: /content/faces/thumbnails128x128/68550.png
inflating: /content/faces/thumbnails128x128/68551.png
inflating: /content/faces/thumbnails128x128/68552.png
inflating: /content/faces/thumbnails128x128/68553.png
inflating: /content/faces/thumbnails128x128/68554.png
inflating: /content/faces/thumbnails128x128/68555.png
inflating: /content/faces/thumbnails128x128/68556.png
inflating: /content/faces/thumbnails128x128/68557.png
inflating: /content/faces/thumbnails128x128/68558.png
inflating: /content/faces/thumbnails128x128/68559.png
inflating: /content/faces/thumbnails128x128/68560.png
inflating: /content/faces/thumbnails128x128/68561.png
inflating: /content/faces/thumbnails128x128/68562.png
inflating: /content/faces/thumbnails128x128/68563.png
inflating: /content/faces/thumbnails128x128/68564.png
inflating: /content/faces/thumbnails128x128/68565.png
inflating: /content/faces/thumbnails128x128/68566.png
inflating: /content/faces/thumbnails128x128/68567.png
inflating: /content/faces/thumbnails128x128/68568.png
inflating: /content/faces/thumbnails128x128/68569.png
inflating: /content/faces/thumbnails128x128/68570.png
inflating: /content/faces/thumbnails128x128/68571.png
inflating: /content/faces/thumbnails128x128/68572.png
inflating: /content/faces/thumbnails128x128/68573.png
inflating: /content/faces/thumbnails128x128/68574.png
inflating: /content/faces/thumbnails128x128/68575.png
inflating: /content/faces/thumbnails128x128/68576.png
inflating: /content/faces/thumbnails128x128/68577.png
inflating: /content/faces/thumbnails128x128/68578.png
inflating: /content/faces/thumbnails128x128/68579.png
inflating: /content/faces/thumbnails128x128/68580.png
inflating: /content/faces/thumbnails128x128/68581.png
inflating: /content/faces/thumbnails128x128/68582.png
inflating: /content/faces/thumbnails128x128/68583.png
inflating: /content/faces/thumbnails128x128/68584.png
inflating: /content/faces/thumbnails128x128/68585.png
inflating: /content/faces/thumbnails128x128/68586.png
inflating: /content/faces/thumbnails128x128/68587.png
inflating: /content/faces/thumbnails128x128/68588.png
inflating: /content/faces/thumbnails128x128/68589.png
inflating: /content/faces/thumbnails128x128/68590.png
inflating: /content/faces/thumbnails128x128/68591.png
inflating: /content/faces/thumbnails128x128/68592.png
inflating: /content/faces/thumbnails128x128/68593.png
inflating: /content/faces/thumbnails128x128/68594.png
inflating: /content/faces/thumbnails128x128/68595.png
inflating: /content/faces/thumbnails128x128/68596.png
inflating: /content/faces/thumbnails128x128/68597.png
inflating: /content/faces/thumbnails128x128/68598.png
inflating: /content/faces/thumbnails128x128/68599.png
inflating: /content/faces/thumbnails128x128/68600.png
inflating: /content/faces/thumbnails128x128/68601.png
inflating: /content/faces/thumbnails128x128/68602.png
inflating: /content/faces/thumbnails128x128/68603.png
inflating: /content/faces/thumbnails128x128/68604.png
inflating: /content/faces/thumbnails128x128/68605.png
inflating: /content/faces/thumbnails128x128/68606.png
inflating: /content/faces/thumbnails128x128/68607.png
inflating: /content/faces/thumbnails128x128/68608.png
inflating: /content/faces/thumbnails128x128/68609.png
inflating: /content/faces/thumbnails128x128/68610.png
inflating: /content/faces/thumbnails128x128/68611.png
inflating: /content/faces/thumbnails128x128/68612.png
inflating: /content/faces/thumbnails128x128/68613.png
inflating: /content/faces/thumbnails128x128/68614.png
inflating: /content/faces/thumbnails128x128/68615.png
inflating: /content/faces/thumbnails128x128/68616.png
inflating: /content/faces/thumbnails128x128/68617.png
inflating: /content/faces/thumbnails128x128/68618.png
inflating: /content/faces/thumbnails128x128/68619.png
inflating: /content/faces/thumbnails128x128/68620.png
inflating: /content/faces/thumbnails128x128/68621.png
inflating: /content/faces/thumbnails128x128/68622.png
inflating: /content/faces/thumbnails128x128/68623.png
inflating: /content/faces/thumbnails128x128/68624.png
inflating: /content/faces/thumbnails128x128/68625.png
inflating: /content/faces/thumbnails128x128/68626.png
inflating: /content/faces/thumbnails128x128/68627.png
inflating: /content/faces/thumbnails128x128/68628.png
inflating: /content/faces/thumbnails128x128/68629.png
inflating: /content/faces/thumbnails128x128/68630.png
inflating: /content/faces/thumbnails128x128/68631.png
inflating: /content/faces/thumbnails128x128/68632.png
inflating: /content/faces/thumbnails128x128/68633.png
inflating: /content/faces/thumbnails128x128/68634.png
inflating: /content/faces/thumbnails128x128/68635.png
inflating: /content/faces/thumbnails128x128/68636.png
inflating: /content/faces/thumbnails128x128/68637.png
inflating: /content/faces/thumbnails128x128/68638.png
inflating: /content/faces/thumbnails128x128/68639.png
inflating: /content/faces/thumbnails128x128/68640.png
inflating: /content/faces/thumbnails128x128/68641.png
inflating: /content/faces/thumbnails128x128/68642.png
inflating: /content/faces/thumbnails128x128/68643.png
inflating: /content/faces/thumbnails128x128/68644.png
inflating: /content/faces/thumbnails128x128/68645.png
inflating: /content/faces/thumbnails128x128/68646.png
inflating: /content/faces/thumbnails128x128/68647.png
inflating: /content/faces/thumbnails128x128/68648.png
inflating: /content/faces/thumbnails128x128/68649.png
inflating: /content/faces/thumbnails128x128/68650.png
inflating: /content/faces/thumbnails128x128/68651.png
inflating: /content/faces/thumbnails128x128/68652.png
inflating: /content/faces/thumbnails128x128/68653.png
inflating: /content/faces/thumbnails128x128/68654.png
inflating: /content/faces/thumbnails128x128/68655.png
inflating: /content/faces/thumbnails128x128/68656.png
inflating: /content/faces/thumbnails128x128/68657.png
inflating: /content/faces/thumbnails128x128/68658.png
inflating: /content/faces/thumbnails128x128/68659.png
inflating: /content/faces/thumbnails128x128/68660.png
inflating: /content/faces/thumbnails128x128/68661.png
inflating: /content/faces/thumbnails128x128/68662.png
inflating: /content/faces/thumbnails128x128/68663.png
inflating: /content/faces/thumbnails128x128/68664.png
inflating: /content/faces/thumbnails128x128/68665.png
inflating: /content/faces/thumbnails128x128/68666.png
inflating: /content/faces/thumbnails128x128/68667.png
inflating: /content/faces/thumbnails128x128/68668.png
inflating: /content/faces/thumbnails128x128/68669.png
inflating: /content/faces/thumbnails128x128/68670.png
inflating: /content/faces/thumbnails128x128/68671.png
inflating: /content/faces/thumbnails128x128/68672.png
inflating: /content/faces/thumbnails128x128/68673.png
inflating: /content/faces/thumbnails128x128/68674.png
inflating: /content/faces/thumbnails128x128/68675.png
inflating: /content/faces/thumbnails128x128/68676.png
inflating: /content/faces/thumbnails128x128/68677.png
inflating: /content/faces/thumbnails128x128/68678.png
inflating: /content/faces/thumbnails128x128/68679.png
inflating: /content/faces/thumbnails128x128/68680.png
inflating: /content/faces/thumbnails128x128/68681.png
inflating: /content/faces/thumbnails128x128/68682.png
inflating: /content/faces/thumbnails128x128/68683.png
inflating: /content/faces/thumbnails128x128/68684.png
inflating: /content/faces/thumbnails128x128/68685.png
inflating: /content/faces/thumbnails128x128/68686.png
inflating: /content/faces/thumbnails128x128/68687.png
inflating: /content/faces/thumbnails128x128/68688.png
inflating: /content/faces/thumbnails128x128/68689.png
inflating: /content/faces/thumbnails128x128/68690.png
inflating: /content/faces/thumbnails128x128/68691.png
inflating: /content/faces/thumbnails128x128/68692.png
inflating: /content/faces/thumbnails128x128/68693.png
inflating: /content/faces/thumbnails128x128/68694.png
inflating: /content/faces/thumbnails128x128/68695.png
inflating: /content/faces/thumbnails128x128/68696.png
inflating: /content/faces/thumbnails128x128/68697.png
inflating: /content/faces/thumbnails128x128/68698.png
inflating: /content/faces/thumbnails128x128/68699.png
inflating: /content/faces/thumbnails128x128/68700.png
inflating: /content/faces/thumbnails128x128/68701.png
inflating: /content/faces/thumbnails128x128/68702.png
inflating: /content/faces/thumbnails128x128/68703.png
inflating: /content/faces/thumbnails128x128/68704.png
inflating: /content/faces/thumbnails128x128/68705.png
inflating: /content/faces/thumbnails128x128/68706.png
inflating: /content/faces/thumbnails128x128/68707.png
inflating: /content/faces/thumbnails128x128/68708.png
inflating: /content/faces/thumbnails128x128/68709.png
inflating: /content/faces/thumbnails128x128/68710.png
inflating: /content/faces/thumbnails128x128/68711.png
inflating: /content/faces/thumbnails128x128/68712.png
inflating: /content/faces/thumbnails128x128/68713.png
inflating: /content/faces/thumbnails128x128/68714.png
inflating: /content/faces/thumbnails128x128/68715.png
inflating: /content/faces/thumbnails128x128/68716.png
inflating: /content/faces/thumbnails128x128/68717.png
inflating: /content/faces/thumbnails128x128/68718.png
inflating: /content/faces/thumbnails128x128/68719.png
inflating: /content/faces/thumbnails128x128/68720.png
inflating: /content/faces/thumbnails128x128/68721.png
inflating: /content/faces/thumbnails128x128/68722.png
inflating: /content/faces/thumbnails128x128/68723.png
inflating: /content/faces/thumbnails128x128/68724.png
inflating: /content/faces/thumbnails128x128/68725.png
inflating: /content/faces/thumbnails128x128/68726.png
inflating: /content/faces/thumbnails128x128/68727.png
inflating: /content/faces/thumbnails128x128/68728.png
inflating: /content/faces/thumbnails128x128/68729.png
inflating: /content/faces/thumbnails128x128/68730.png
inflating: /content/faces/thumbnails128x128/68731.png
inflating: /content/faces/thumbnails128x128/68732.png
inflating: /content/faces/thumbnails128x128/68733.png
inflating: /content/faces/thumbnails128x128/68734.png
inflating: /content/faces/thumbnails128x128/68735.png
inflating: /content/faces/thumbnails128x128/68736.png
inflating: /content/faces/thumbnails128x128/68737.png
inflating: /content/faces/thumbnails128x128/68738.png
inflating: /content/faces/thumbnails128x128/68739.png
inflating: /content/faces/thumbnails128x128/68740.png
inflating: /content/faces/thumbnails128x128/68741.png
inflating: /content/faces/thumbnails128x128/68742.png
inflating: /content/faces/thumbnails128x128/68743.png
inflating: /content/faces/thumbnails128x128/68744.png
inflating: /content/faces/thumbnails128x128/68745.png
inflating: /content/faces/thumbnails128x128/68746.png
inflating: /content/faces/thumbnails128x128/68747.png
inflating: /content/faces/thumbnails128x128/68748.png
inflating: /content/faces/thumbnails128x128/68749.png
inflating: /content/faces/thumbnails128x128/68750.png
inflating: /content/faces/thumbnails128x128/68751.png
inflating: /content/faces/thumbnails128x128/68752.png
inflating: /content/faces/thumbnails128x128/68753.png
inflating: /content/faces/thumbnails128x128/68754.png
inflating: /content/faces/thumbnails128x128/68755.png
inflating: /content/faces/thumbnails128x128/68756.png
inflating: /content/faces/thumbnails128x128/68757.png
inflating: /content/faces/thumbnails128x128/68758.png
inflating: /content/faces/thumbnails128x128/68759.png
inflating: /content/faces/thumbnails128x128/68760.png
inflating: /content/faces/thumbnails128x128/68761.png
inflating: /content/faces/thumbnails128x128/68762.png
inflating: /content/faces/thumbnails128x128/68763.png
inflating: /content/faces/thumbnails128x128/68764.png
inflating: /content/faces/thumbnails128x128/68765.png
inflating: /content/faces/thumbnails128x128/68766.png
inflating: /content/faces/thumbnails128x128/68767.png
inflating: /content/faces/thumbnails128x128/68768.png
inflating: /content/faces/thumbnails128x128/68769.png
inflating: /content/faces/thumbnails128x128/68770.png
inflating: /content/faces/thumbnails128x128/68771.png
inflating: /content/faces/thumbnails128x128/68772.png
inflating: /content/faces/thumbnails128x128/68773.png
inflating: /content/faces/thumbnails128x128/68774.png
inflating: /content/faces/thumbnails128x128/68775.png
inflating: /content/faces/thumbnails128x128/68776.png
inflating: /content/faces/thumbnails128x128/68777.png
inflating: /content/faces/thumbnails128x128/68778.png
inflating: /content/faces/thumbnails128x128/68779.png
inflating: /content/faces/thumbnails128x128/68780.png
inflating: /content/faces/thumbnails128x128/68781.png
inflating: /content/faces/thumbnails128x128/68782.png
inflating: /content/faces/thumbnails128x128/68783.png
inflating: /content/faces/thumbnails128x128/68784.png
inflating: /content/faces/thumbnails128x128/68785.png
inflating: /content/faces/thumbnails128x128/68786.png
inflating: /content/faces/thumbnails128x128/68787.png
inflating: /content/faces/thumbnails128x128/68788.png
inflating: /content/faces/thumbnails128x128/68789.png
inflating: /content/faces/thumbnails128x128/68790.png
inflating: /content/faces/thumbnails128x128/68791.png
inflating: /content/faces/thumbnails128x128/68792.png
inflating: /content/faces/thumbnails128x128/68793.png
inflating: /content/faces/thumbnails128x128/68794.png
inflating: /content/faces/thumbnails128x128/68795.png
inflating: /content/faces/thumbnails128x128/68796.png
inflating: /content/faces/thumbnails128x128/68797.png
inflating: /content/faces/thumbnails128x128/68798.png
inflating: /content/faces/thumbnails128x128/68799.png
inflating: /content/faces/thumbnails128x128/68800.png
inflating: /content/faces/thumbnails128x128/68801.png
inflating: /content/faces/thumbnails128x128/68802.png
inflating: /content/faces/thumbnails128x128/68803.png
inflating: /content/faces/thumbnails128x128/68804.png
inflating: /content/faces/thumbnails128x128/68805.png
inflating: /content/faces/thumbnails128x128/68806.png
inflating: /content/faces/thumbnails128x128/68807.png
inflating: /content/faces/thumbnails128x128/68808.png
inflating: /content/faces/thumbnails128x128/68809.png
inflating: /content/faces/thumbnails128x128/68810.png
inflating: /content/faces/thumbnails128x128/68811.png
inflating: /content/faces/thumbnails128x128/68812.png
inflating: /content/faces/thumbnails128x128/68813.png
inflating: /content/faces/thumbnails128x128/68814.png
inflating: /content/faces/thumbnails128x128/68815.png
inflating: /content/faces/thumbnails128x128/68816.png
inflating: /content/faces/thumbnails128x128/68817.png
inflating: /content/faces/thumbnails128x128/68818.png
inflating: /content/faces/thumbnails128x128/68819.png
inflating: /content/faces/thumbnails128x128/68820.png
inflating: /content/faces/thumbnails128x128/68821.png
inflating: /content/faces/thumbnails128x128/68822.png
inflating: /content/faces/thumbnails128x128/68823.png
inflating: /content/faces/thumbnails128x128/68824.png
inflating: /content/faces/thumbnails128x128/68825.png
inflating: /content/faces/thumbnails128x128/68826.png
inflating: /content/faces/thumbnails128x128/68827.png
inflating: /content/faces/thumbnails128x128/68828.png
inflating: /content/faces/thumbnails128x128/68829.png
inflating: /content/faces/thumbnails128x128/68830.png
inflating: /content/faces/thumbnails128x128/68831.png
inflating: /content/faces/thumbnails128x128/68832.png
inflating: /content/faces/thumbnails128x128/68833.png
inflating: /content/faces/thumbnails128x128/68834.png
inflating: /content/faces/thumbnails128x128/68835.png
inflating: /content/faces/thumbnails128x128/68836.png
inflating: /content/faces/thumbnails128x128/68837.png
inflating: /content/faces/thumbnails128x128/68838.png
inflating: /content/faces/thumbnails128x128/68839.png
inflating: /content/faces/thumbnails128x128/68840.png
inflating: /content/faces/thumbnails128x128/68841.png
inflating: /content/faces/thumbnails128x128/68842.png
inflating: /content/faces/thumbnails128x128/68843.png
inflating: /content/faces/thumbnails128x128/68844.png
inflating: /content/faces/thumbnails128x128/68845.png
inflating: /content/faces/thumbnails128x128/68846.png
inflating: /content/faces/thumbnails128x128/68847.png
inflating: /content/faces/thumbnails128x128/68848.png
inflating: /content/faces/thumbnails128x128/68849.png
inflating: /content/faces/thumbnails128x128/68850.png
inflating: /content/faces/thumbnails128x128/68851.png
inflating: /content/faces/thumbnails128x128/68852.png
inflating: /content/faces/thumbnails128x128/68853.png
inflating: /content/faces/thumbnails128x128/68854.png
inflating: /content/faces/thumbnails128x128/68855.png
inflating: /content/faces/thumbnails128x128/68856.png
inflating: /content/faces/thumbnails128x128/68857.png
inflating: /content/faces/thumbnails128x128/68858.png
inflating: /content/faces/thumbnails128x128/68859.png
inflating: /content/faces/thumbnails128x128/68860.png
inflating: /content/faces/thumbnails128x128/68861.png
inflating: /content/faces/thumbnails128x128/68862.png
inflating: /content/faces/thumbnails128x128/68863.png
inflating: /content/faces/thumbnails128x128/68864.png
inflating: /content/faces/thumbnails128x128/68865.png
inflating: /content/faces/thumbnails128x128/68866.png
inflating: /content/faces/thumbnails128x128/68867.png
inflating: /content/faces/thumbnails128x128/68868.png
inflating: /content/faces/thumbnails128x128/68869.png
inflating: /content/faces/thumbnails128x128/68870.png
inflating: /content/faces/thumbnails128x128/68871.png
inflating: /content/faces/thumbnails128x128/68872.png
inflating: /content/faces/thumbnails128x128/68873.png
inflating: /content/faces/thumbnails128x128/68874.png
inflating: /content/faces/thumbnails128x128/68875.png
inflating: /content/faces/thumbnails128x128/68876.png
inflating: /content/faces/thumbnails128x128/68877.png
inflating: /content/faces/thumbnails128x128/68878.png
inflating: /content/faces/thumbnails128x128/68879.png
inflating: /content/faces/thumbnails128x128/68880.png
inflating: /content/faces/thumbnails128x128/68881.png
inflating: /content/faces/thumbnails128x128/68882.png
inflating: /content/faces/thumbnails128x128/68883.png
inflating: /content/faces/thumbnails128x128/68884.png
inflating: /content/faces/thumbnails128x128/68885.png
inflating: /content/faces/thumbnails128x128/68886.png
inflating: /content/faces/thumbnails128x128/68887.png
inflating: /content/faces/thumbnails128x128/68888.png
inflating: /content/faces/thumbnails128x128/68889.png
inflating: /content/faces/thumbnails128x128/68890.png
inflating: /content/faces/thumbnails128x128/68891.png
inflating: /content/faces/thumbnails128x128/68892.png
inflating: /content/faces/thumbnails128x128/68893.png
inflating: /content/faces/thumbnails128x128/68894.png
inflating: /content/faces/thumbnails128x128/68895.png
inflating: /content/faces/thumbnails128x128/68896.png
inflating: /content/faces/thumbnails128x128/68897.png
inflating: /content/faces/thumbnails128x128/68898.png
inflating: /content/faces/thumbnails128x128/68899.png
inflating: /content/faces/thumbnails128x128/68900.png
inflating: /content/faces/thumbnails128x128/68901.png
inflating: /content/faces/thumbnails128x128/68902.png
inflating: /content/faces/thumbnails128x128/68903.png
inflating: /content/faces/thumbnails128x128/68904.png
inflating: /content/faces/thumbnails128x128/68905.png
inflating: /content/faces/thumbnails128x128/68906.png
inflating: /content/faces/thumbnails128x128/68907.png
inflating: /content/faces/thumbnails128x128/68908.png
inflating: /content/faces/thumbnails128x128/68909.png
inflating: /content/faces/thumbnails128x128/68910.png
inflating: /content/faces/thumbnails128x128/68911.png
inflating: /content/faces/thumbnails128x128/68912.png
inflating: /content/faces/thumbnails128x128/68913.png
inflating: /content/faces/thumbnails128x128/68914.png
inflating: /content/faces/thumbnails128x128/68915.png
inflating: /content/faces/thumbnails128x128/68916.png
inflating: /content/faces/thumbnails128x128/68917.png
inflating: /content/faces/thumbnails128x128/68918.png
inflating: /content/faces/thumbnails128x128/68919.png
inflating: /content/faces/thumbnails128x128/68920.png
inflating: /content/faces/thumbnails128x128/68921.png
inflating: /content/faces/thumbnails128x128/68922.png
inflating: /content/faces/thumbnails128x128/68923.png
inflating: /content/faces/thumbnails128x128/68924.png
inflating: /content/faces/thumbnails128x128/68925.png
inflating: /content/faces/thumbnails128x128/68926.png
inflating: /content/faces/thumbnails128x128/68927.png
inflating: /content/faces/thumbnails128x128/68928.png
inflating: /content/faces/thumbnails128x128/68929.png
inflating: /content/faces/thumbnails128x128/68930.png
inflating: /content/faces/thumbnails128x128/68931.png
inflating: /content/faces/thumbnails128x128/68932.png
inflating: /content/faces/thumbnails128x128/68933.png
inflating: /content/faces/thumbnails128x128/68934.png
inflating: /content/faces/thumbnails128x128/68935.png
inflating: /content/faces/thumbnails128x128/68936.png
inflating: /content/faces/thumbnails128x128/68937.png
inflating: /content/faces/thumbnails128x128/68938.png
inflating: /content/faces/thumbnails128x128/68939.png
inflating: /content/faces/thumbnails128x128/68940.png
inflating: /content/faces/thumbnails128x128/68941.png
inflating: /content/faces/thumbnails128x128/68942.png
inflating: /content/faces/thumbnails128x128/68943.png
inflating: /content/faces/thumbnails128x128/68944.png
inflating: /content/faces/thumbnails128x128/68945.png
inflating: /content/faces/thumbnails128x128/68946.png
inflating: /content/faces/thumbnails128x128/68947.png
inflating: /content/faces/thumbnails128x128/68948.png
inflating: /content/faces/thumbnails128x128/68949.png
inflating: /content/faces/thumbnails128x128/68950.png
inflating: /content/faces/thumbnails128x128/68951.png
inflating: /content/faces/thumbnails128x128/68952.png
inflating: /content/faces/thumbnails128x128/68953.png
inflating: /content/faces/thumbnails128x128/68954.png
inflating: /content/faces/thumbnails128x128/68955.png
inflating: /content/faces/thumbnails128x128/68956.png
inflating: /content/faces/thumbnails128x128/68957.png
inflating: /content/faces/thumbnails128x128/68958.png
inflating: /content/faces/thumbnails128x128/68959.png
inflating: /content/faces/thumbnails128x128/68960.png
inflating: /content/faces/thumbnails128x128/68961.png
inflating: /content/faces/thumbnails128x128/68962.png
inflating: /content/faces/thumbnails128x128/68963.png
inflating: /content/faces/thumbnails128x128/68964.png
inflating: /content/faces/thumbnails128x128/68965.png
inflating: /content/faces/thumbnails128x128/68966.png
inflating: /content/faces/thumbnails128x128/68967.png
inflating: /content/faces/thumbnails128x128/68968.png
inflating: /content/faces/thumbnails128x128/68969.png
inflating: /content/faces/thumbnails128x128/68970.png
inflating: /content/faces/thumbnails128x128/68971.png
inflating: /content/faces/thumbnails128x128/68972.png
inflating: /content/faces/thumbnails128x128/68973.png
inflating: /content/faces/thumbnails128x128/68974.png
inflating: /content/faces/thumbnails128x128/68975.png
inflating: /content/faces/thumbnails128x128/68976.png
inflating: /content/faces/thumbnails128x128/68977.png
inflating: /content/faces/thumbnails128x128/68978.png
inflating: /content/faces/thumbnails128x128/68979.png
inflating: /content/faces/thumbnails128x128/68980.png
inflating: /content/faces/thumbnails128x128/68981.png
inflating: /content/faces/thumbnails128x128/68982.png
inflating: /content/faces/thumbnails128x128/68983.png
inflating: /content/faces/thumbnails128x128/68984.png
inflating: /content/faces/thumbnails128x128/68985.png
inflating: /content/faces/thumbnails128x128/68986.png
inflating: /content/faces/thumbnails128x128/68987.png
inflating: /content/faces/thumbnails128x128/68988.png
inflating: /content/faces/thumbnails128x128/68989.png
inflating: /content/faces/thumbnails128x128/68990.png
inflating: /content/faces/thumbnails128x128/68991.png
inflating: /content/faces/thumbnails128x128/68992.png
inflating: /content/faces/thumbnails128x128/68993.png
inflating: /content/faces/thumbnails128x128/68994.png
inflating: /content/faces/thumbnails128x128/68995.png
inflating: /content/faces/thumbnails128x128/68996.png
inflating: /content/faces/thumbnails128x128/68997.png
inflating: /content/faces/thumbnails128x128/68998.png
inflating: /content/faces/thumbnails128x128/68999.png
inflating: /content/faces/thumbnails128x128/69000.png
inflating: /content/faces/thumbnails128x128/69001.png
inflating: /content/faces/thumbnails128x128/69002.png
inflating: /content/faces/thumbnails128x128/69003.png
inflating: /content/faces/thumbnails128x128/69004.png
inflating: /content/faces/thumbnails128x128/69005.png
inflating: /content/faces/thumbnails128x128/69006.png
inflating: /content/faces/thumbnails128x128/69007.png
inflating: /content/faces/thumbnails128x128/69008.png
inflating: /content/faces/thumbnails128x128/69009.png
inflating: /content/faces/thumbnails128x128/69010.png
inflating: /content/faces/thumbnails128x128/69011.png
inflating: /content/faces/thumbnails128x128/69012.png
inflating: /content/faces/thumbnails128x128/69013.png
inflating: /content/faces/thumbnails128x128/69014.png
inflating: /content/faces/thumbnails128x128/69015.png
inflating: /content/faces/thumbnails128x128/69016.png
inflating: /content/faces/thumbnails128x128/69017.png
inflating: /content/faces/thumbnails128x128/69018.png
inflating: /content/faces/thumbnails128x128/69019.png
inflating: /content/faces/thumbnails128x128/69020.png
inflating: /content/faces/thumbnails128x128/69021.png
inflating: /content/faces/thumbnails128x128/69022.png
inflating: /content/faces/thumbnails128x128/69023.png
inflating: /content/faces/thumbnails128x128/69024.png
inflating: /content/faces/thumbnails128x128/69025.png
inflating: /content/faces/thumbnails128x128/69026.png
inflating: /content/faces/thumbnails128x128/69027.png
inflating: /content/faces/thumbnails128x128/69028.png
inflating: /content/faces/thumbnails128x128/69029.png
inflating: /content/faces/thumbnails128x128/69030.png
inflating: /content/faces/thumbnails128x128/69031.png
inflating: /content/faces/thumbnails128x128/69032.png
inflating: /content/faces/thumbnails128x128/69033.png
inflating: /content/faces/thumbnails128x128/69034.png
inflating: /content/faces/thumbnails128x128/69035.png
inflating: /content/faces/thumbnails128x128/69036.png
inflating: /content/faces/thumbnails128x128/69037.png
inflating: /content/faces/thumbnails128x128/69038.png
inflating: /content/faces/thumbnails128x128/69039.png
inflating: /content/faces/thumbnails128x128/69040.png
inflating: /content/faces/thumbnails128x128/69041.png
inflating: /content/faces/thumbnails128x128/69042.png
inflating: /content/faces/thumbnails128x128/69043.png
inflating: /content/faces/thumbnails128x128/69044.png
inflating: /content/faces/thumbnails128x128/69045.png
inflating: /content/faces/thumbnails128x128/69046.png
inflating: /content/faces/thumbnails128x128/69047.png
inflating: /content/faces/thumbnails128x128/69048.png
inflating: /content/faces/thumbnails128x128/69049.png
inflating: /content/faces/thumbnails128x128/69050.png
inflating: /content/faces/thumbnails128x128/69051.png
inflating: /content/faces/thumbnails128x128/69052.png
inflating: /content/faces/thumbnails128x128/69053.png
inflating: /content/faces/thumbnails128x128/69054.png
inflating: /content/faces/thumbnails128x128/69055.png
inflating: /content/faces/thumbnails128x128/69056.png
inflating: /content/faces/thumbnails128x128/69057.png
inflating: /content/faces/thumbnails128x128/69058.png
inflating: /content/faces/thumbnails128x128/69059.png
inflating: /content/faces/thumbnails128x128/69060.png
inflating: /content/faces/thumbnails128x128/69061.png
inflating: /content/faces/thumbnails128x128/69062.png
inflating: /content/faces/thumbnails128x128/69063.png
inflating: /content/faces/thumbnails128x128/69064.png
inflating: /content/faces/thumbnails128x128/69065.png
inflating: /content/faces/thumbnails128x128/69066.png
inflating: /content/faces/thumbnails128x128/69067.png
inflating: /content/faces/thumbnails128x128/69068.png
inflating: /content/faces/thumbnails128x128/69069.png
inflating: /content/faces/thumbnails128x128/69070.png
inflating: /content/faces/thumbnails128x128/69071.png
inflating: /content/faces/thumbnails128x128/69072.png
inflating: /content/faces/thumbnails128x128/69073.png
inflating: /content/faces/thumbnails128x128/69074.png
inflating: /content/faces/thumbnails128x128/69075.png
inflating: /content/faces/thumbnails128x128/69076.png
inflating: /content/faces/thumbnails128x128/69077.png
inflating: /content/faces/thumbnails128x128/69078.png
inflating: /content/faces/thumbnails128x128/69079.png
inflating: /content/faces/thumbnails128x128/69080.png
inflating: /content/faces/thumbnails128x128/69081.png
inflating: /content/faces/thumbnails128x128/69082.png
inflating: /content/faces/thumbnails128x128/69083.png
inflating: /content/faces/thumbnails128x128/69084.png
inflating: /content/faces/thumbnails128x128/69085.png
inflating: /content/faces/thumbnails128x128/69086.png
inflating: /content/faces/thumbnails128x128/69087.png
inflating: /content/faces/thumbnails128x128/69088.png
inflating: /content/faces/thumbnails128x128/69089.png
inflating: /content/faces/thumbnails128x128/69090.png
inflating: /content/faces/thumbnails128x128/69091.png
inflating: /content/faces/thumbnails128x128/69092.png
inflating: /content/faces/thumbnails128x128/69093.png
inflating: /content/faces/thumbnails128x128/69094.png
inflating: /content/faces/thumbnails128x128/69095.png
inflating: /content/faces/thumbnails128x128/69096.png
inflating: /content/faces/thumbnails128x128/69097.png
inflating: /content/faces/thumbnails128x128/69098.png
inflating: /content/faces/thumbnails128x128/69099.png
inflating: /content/faces/thumbnails128x128/69100.png
inflating: /content/faces/thumbnails128x128/69101.png
inflating: /content/faces/thumbnails128x128/69102.png
inflating: /content/faces/thumbnails128x128/69103.png
inflating: /content/faces/thumbnails128x128/69104.png
inflating: /content/faces/thumbnails128x128/69105.png
inflating: /content/faces/thumbnails128x128/69106.png
inflating: /content/faces/thumbnails128x128/69107.png
inflating: /content/faces/thumbnails128x128/69108.png
inflating: /content/faces/thumbnails128x128/69109.png
inflating: /content/faces/thumbnails128x128/69110.png
inflating: /content/faces/thumbnails128x128/69111.png
inflating: /content/faces/thumbnails128x128/69112.png
inflating: /content/faces/thumbnails128x128/69113.png
inflating: /content/faces/thumbnails128x128/69114.png
inflating: /content/faces/thumbnails128x128/69115.png
inflating: /content/faces/thumbnails128x128/69116.png
inflating: /content/faces/thumbnails128x128/69117.png
inflating: /content/faces/thumbnails128x128/69118.png
inflating: /content/faces/thumbnails128x128/69119.png
inflating: /content/faces/thumbnails128x128/69120.png
inflating: /content/faces/thumbnails128x128/69121.png
inflating: /content/faces/thumbnails128x128/69122.png
inflating: /content/faces/thumbnails128x128/69123.png
inflating: /content/faces/thumbnails128x128/69124.png
inflating: /content/faces/thumbnails128x128/69125.png
inflating: /content/faces/thumbnails128x128/69126.png
inflating: /content/faces/thumbnails128x128/69127.png
inflating: /content/faces/thumbnails128x128/69128.png
inflating: /content/faces/thumbnails128x128/69129.png
inflating: /content/faces/thumbnails128x128/69130.png
inflating: /content/faces/thumbnails128x128/69131.png
inflating: /content/faces/thumbnails128x128/69132.png
inflating: /content/faces/thumbnails128x128/69133.png
inflating: /content/faces/thumbnails128x128/69134.png
inflating: /content/faces/thumbnails128x128/69135.png
inflating: /content/faces/thumbnails128x128/69136.png
inflating: /content/faces/thumbnails128x128/69137.png
inflating: /content/faces/thumbnails128x128/69138.png
inflating: /content/faces/thumbnails128x128/69139.png
inflating: /content/faces/thumbnails128x128/69140.png
inflating: /content/faces/thumbnails128x128/69141.png
inflating: /content/faces/thumbnails128x128/69142.png
inflating: /content/faces/thumbnails128x128/69143.png
inflating: /content/faces/thumbnails128x128/69144.png
inflating: /content/faces/thumbnails128x128/69145.png
inflating: /content/faces/thumbnails128x128/69146.png
inflating: /content/faces/thumbnails128x128/69147.png
inflating: /content/faces/thumbnails128x128/69148.png
inflating: /content/faces/thumbnails128x128/69149.png
inflating: /content/faces/thumbnails128x128/69150.png
inflating: /content/faces/thumbnails128x128/69151.png
inflating: /content/faces/thumbnails128x128/69152.png
inflating: /content/faces/thumbnails128x128/69153.png
inflating: /content/faces/thumbnails128x128/69154.png
inflating: /content/faces/thumbnails128x128/69155.png
inflating: /content/faces/thumbnails128x128/69156.png
inflating: /content/faces/thumbnails128x128/69157.png
inflating: /content/faces/thumbnails128x128/69158.png
inflating: /content/faces/thumbnails128x128/69159.png
inflating: /content/faces/thumbnails128x128/69160.png
inflating: /content/faces/thumbnails128x128/69161.png
inflating: /content/faces/thumbnails128x128/69162.png
inflating: /content/faces/thumbnails128x128/69163.png
inflating: /content/faces/thumbnails128x128/69164.png
inflating: /content/faces/thumbnails128x128/69165.png
inflating: /content/faces/thumbnails128x128/69166.png
inflating: /content/faces/thumbnails128x128/69167.png
inflating: /content/faces/thumbnails128x128/69168.png
inflating: /content/faces/thumbnails128x128/69169.png
inflating: /content/faces/thumbnails128x128/69170.png
inflating: /content/faces/thumbnails128x128/69171.png
inflating: /content/faces/thumbnails128x128/69172.png
inflating: /content/faces/thumbnails128x128/69173.png
inflating: /content/faces/thumbnails128x128/69174.png
inflating: /content/faces/thumbnails128x128/69175.png
inflating: /content/faces/thumbnails128x128/69176.png
inflating: /content/faces/thumbnails128x128/69177.png
inflating: /content/faces/thumbnails128x128/69178.png
inflating: /content/faces/thumbnails128x128/69179.png
inflating: /content/faces/thumbnails128x128/69180.png
inflating: /content/faces/thumbnails128x128/69181.png
inflating: /content/faces/thumbnails128x128/69182.png
inflating: /content/faces/thumbnails128x128/69183.png
inflating: /content/faces/thumbnails128x128/69184.png
inflating: /content/faces/thumbnails128x128/69185.png
inflating: /content/faces/thumbnails128x128/69186.png
inflating: /content/faces/thumbnails128x128/69187.png
inflating: /content/faces/thumbnails128x128/69188.png
inflating: /content/faces/thumbnails128x128/69189.png
inflating: /content/faces/thumbnails128x128/69190.png
inflating: /content/faces/thumbnails128x128/69191.png
inflating: /content/faces/thumbnails128x128/69192.png
inflating: /content/faces/thumbnails128x128/69193.png
inflating: /content/faces/thumbnails128x128/69194.png
inflating: /content/faces/thumbnails128x128/69195.png
inflating: /content/faces/thumbnails128x128/69196.png
inflating: /content/faces/thumbnails128x128/69197.png
inflating: /content/faces/thumbnails128x128/69198.png
inflating: /content/faces/thumbnails128x128/69199.png
inflating: /content/faces/thumbnails128x128/69200.png
inflating: /content/faces/thumbnails128x128/69201.png
inflating: /content/faces/thumbnails128x128/69202.png
inflating: /content/faces/thumbnails128x128/69203.png
inflating: /content/faces/thumbnails128x128/69204.png
inflating: /content/faces/thumbnails128x128/69205.png
inflating: /content/faces/thumbnails128x128/69206.png
inflating: /content/faces/thumbnails128x128/69207.png
inflating: /content/faces/thumbnails128x128/69208.png
inflating: /content/faces/thumbnails128x128/69209.png
inflating: /content/faces/thumbnails128x128/69210.png
inflating: /content/faces/thumbnails128x128/69211.png
inflating: /content/faces/thumbnails128x128/69212.png
inflating: /content/faces/thumbnails128x128/69213.png
inflating: /content/faces/thumbnails128x128/69214.png
inflating: /content/faces/thumbnails128x128/69215.png
inflating: /content/faces/thumbnails128x128/69216.png
inflating: /content/faces/thumbnails128x128/69217.png
inflating: /content/faces/thumbnails128x128/69218.png
inflating: /content/faces/thumbnails128x128/69219.png
inflating: /content/faces/thumbnails128x128/69220.png
inflating: /content/faces/thumbnails128x128/69221.png
inflating: /content/faces/thumbnails128x128/69222.png
inflating: /content/faces/thumbnails128x128/69223.png
inflating: /content/faces/thumbnails128x128/69224.png
inflating: /content/faces/thumbnails128x128/69225.png
inflating: /content/faces/thumbnails128x128/69226.png
inflating: /content/faces/thumbnails128x128/69227.png
inflating: /content/faces/thumbnails128x128/69228.png
inflating: /content/faces/thumbnails128x128/69229.png
inflating: /content/faces/thumbnails128x128/69230.png
inflating: /content/faces/thumbnails128x128/69231.png
inflating: /content/faces/thumbnails128x128/69232.png
inflating: /content/faces/thumbnails128x128/69233.png
inflating: /content/faces/thumbnails128x128/69234.png
inflating: /content/faces/thumbnails128x128/69235.png
inflating: /content/faces/thumbnails128x128/69236.png
inflating: /content/faces/thumbnails128x128/69237.png
inflating: /content/faces/thumbnails128x128/69238.png
inflating: /content/faces/thumbnails128x128/69239.png
inflating: /content/faces/thumbnails128x128/69240.png
inflating: /content/faces/thumbnails128x128/69241.png
inflating: /content/faces/thumbnails128x128/69242.png
inflating: /content/faces/thumbnails128x128/69243.png
inflating: /content/faces/thumbnails128x128/69244.png
inflating: /content/faces/thumbnails128x128/69245.png
inflating: /content/faces/thumbnails128x128/69246.png
inflating: /content/faces/thumbnails128x128/69247.png
inflating: /content/faces/thumbnails128x128/69248.png
inflating: /content/faces/thumbnails128x128/69249.png
inflating: /content/faces/thumbnails128x128/69250.png
inflating: /content/faces/thumbnails128x128/69251.png
inflating: /content/faces/thumbnails128x128/69252.png
inflating: /content/faces/thumbnails128x128/69253.png
inflating: /content/faces/thumbnails128x128/69254.png
inflating: /content/faces/thumbnails128x128/69255.png
inflating: /content/faces/thumbnails128x128/69256.png
inflating: /content/faces/thumbnails128x128/69257.png
inflating: /content/faces/thumbnails128x128/69258.png
inflating: /content/faces/thumbnails128x128/69259.png
inflating: /content/faces/thumbnails128x128/69260.png
inflating: /content/faces/thumbnails128x128/69261.png
inflating: /content/faces/thumbnails128x128/69262.png
inflating: /content/faces/thumbnails128x128/69263.png
inflating: /content/faces/thumbnails128x128/69264.png
inflating: /content/faces/thumbnails128x128/69265.png
inflating: /content/faces/thumbnails128x128/69266.png
inflating: /content/faces/thumbnails128x128/69267.png
inflating: /content/faces/thumbnails128x128/69268.png
inflating: /content/faces/thumbnails128x128/69269.png
inflating: /content/faces/thumbnails128x128/69270.png
inflating: /content/faces/thumbnails128x128/69271.png
inflating: /content/faces/thumbnails128x128/69272.png
inflating: /content/faces/thumbnails128x128/69273.png
inflating: /content/faces/thumbnails128x128/69274.png
inflating: /content/faces/thumbnails128x128/69275.png
inflating: /content/faces/thumbnails128x128/69276.png
inflating: /content/faces/thumbnails128x128/69277.png
inflating: /content/faces/thumbnails128x128/69278.png
inflating: /content/faces/thumbnails128x128/69279.png
inflating: /content/faces/thumbnails128x128/69280.png
inflating: /content/faces/thumbnails128x128/69281.png
inflating: /content/faces/thumbnails128x128/69282.png
inflating: /content/faces/thumbnails128x128/69283.png
inflating: /content/faces/thumbnails128x128/69284.png
inflating: /content/faces/thumbnails128x128/69285.png
inflating: /content/faces/thumbnails128x128/69286.png
inflating: /content/faces/thumbnails128x128/69287.png
inflating: /content/faces/thumbnails128x128/69288.png
inflating: /content/faces/thumbnails128x128/69289.png
inflating: /content/faces/thumbnails128x128/69290.png
  inflating: /content/faces/thumbnails128x128/69291.png
  ...
  inflating: /content/faces/thumbnails128x128/69999.png
###Markdown
Load Data utility function. As there are around 70k images and they cannot all fit in RAM, specify the lower and upper limits according to your RAM size.
###Code
PATH = os.path.join("faces/thumbnails128x128")

def load_data(path, upper_limit=20000, lower_limit=0):
    # Load a slice of the face images and scale pixel values to [0, 1]
    data = []
    files = os.listdir(path)  # use the path argument rather than the global PATH
    for file in files[lower_limit:upper_limit]:
        image = Image.open(os.path.join(path, file))
        image = np.array(image)
        image = image / 255.0
        data.append(image)
    return np.array(data)
###Output
_____no_output_____
###Markdown
Loading train and test images
###Code
colored_train=load_data(PATH,lower_limit=35000,upper_limit=40000)
colored_test=load_data(PATH,lower_limit=15000,upper_limit=18000)
print(colored_train.shape)
print(colored_test.shape)
###Output
(5000, 128, 128, 3)
(3000, 128, 128, 3)
###Markdown
Converting RGB color space to Lab color space
###Code
lab_train=rgb2lab(colored_train)
lab_test=rgb2lab(colored_test)
###Output
_____no_output_____
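###Markdown
As a quick sanity check (assuming scikit-image's Lab conventions), the L channel lies roughly in [0, 100] while the a/b channels lie roughly in [-128, 127], which is why the ab channels are divided by 128 further below.
###Code
# Inspect the value ranges of each Lab channel
print("L range:", lab_train[:, :, :, 0].min(), lab_train[:, :, :, 0].max())
print("a range:", lab_train[:, :, :, 1].min(), lab_train[:, :, :, 1].max())
print("b range:", lab_train[:, :, :, 2].min(), lab_train[:, :, :, 2].max())
###Output
_____no_output_____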
###Markdown
The first channel (L) is the input image for the network
###Code
input_train=lab_train[:,:,:,0]
input_test=lab_test[:,:,:,0]
###Output
_____no_output_____
###Markdown
Reshaping the grayscale images so that they have 4 dimensions, as required by the convolutional neural network
###Code
input_train=input_train.reshape(-1,128,128,1)
input_test=input_test.reshape(-1,128,128,1)
###Output
_____no_output_____
###Markdown
The remaining 2 channels (a and b) are the output of the network
###Code
output_train=lab_train[:,:,:,1:]
output_test=lab_test[:,:,:,1:]
output_train=output_train/128.0 # feature scaling
output_test=output_test/128.0
print(output_train.shape)
print(input_train.shape)
print(output_test.shape)
print(input_test.shape)
###Output
(5000, 128, 128, 2)
(5000, 128, 128, 1)
(3000, 128, 128, 2)
(3000, 128, 128, 1)
###Markdown
Utility Functions
1. merge_channels: merges the L channel (input) with the ab channels (output) into a single Lab image
2. decoded_images: applies merge_channels to every decoder output and converts the result from Lab to RGB
3. plot_images: plots a grid of images using matplotlib
###Code
def merge_channels(input_L,output_ab):
    # Combine the L (lightness) channel with the ab channels into one Lab image
    image=np.zeros(shape=(input_L.shape[0],input_L.shape[1],3))
    image[:,:,0]=np.squeeze(input_L)
    image[:,:,1:]=output_ab
    return image
def decoded_images(encoded_input,decoded_output):
    # Merge each L channel with its decoded ab channels (rescaled back by 128) and convert Lab -> RGB
    assert len(encoded_input)==len(decoded_output)
    images=[]
    for i in range(len(encoded_input)):
        img=merge_channels(encoded_input[i],decoded_output[i]*128.0)
        img=lab2rgb(img)
        images.append(img)
    return np.array(images)
def plot_images(images,w=3,h=3,cmap=None):
images=np.squeeze(images)
fig, axes = plt.subplots(w, h)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
if cmap is not None:
ax.imshow(images[i],cmap=cmap)
else:
ax.imshow(images[i])
ax.set_xticks([])
ax.set_yticks([])
plt.show()
images=input_train[0:9]
plot_images(images,cmap="gray",w=3,h=3)
images=output_train[0:9][:,:,:,0]
plot_images(images,cmap="gray",w=3,h=3)
images=output_train[0:9][:,:,:,1]
plot_images(images,cmap="gray",w=3,h=3)
###Output
_____no_output_____
###Markdown
Let's test how our merge_channels function works
###Code
images=decoded_images(input_train[0:9],output_train[0:9])
plot_images(images,w=3,h=3)
###Output
_____no_output_____
###Markdown
Encoder and Decoder creation. Our encoder and decoder consist of convolutional and transposed-convolution layers.
###Code
bottleneck_unit=8
inputs=keras.layers.Input(shape=(128,128,1),name="input")
enc=keras.layers.Conv2D(filters=512,kernel_size=(3,3),strides=2,padding="same",activation="relu",name="encoder1")(inputs)
enc=keras.layers.Conv2D(filters=256,kernel_size=(3,3),strides=1,padding="same",activation="relu",name="encoder2")(enc)
enc=keras.layers.Conv2D(filters=128,kernel_size=(3,3),strides=1,padding="same",activation="relu",name="encoder3")(enc)
enc=keras.layers.Conv2D(filters=64,kernel_size=(3,3),strides=2,padding="same",activation="relu",name="encoder4")(enc)
enc=keras.layers.Conv2D(filters=32,kernel_size=(3,3),strides=1,padding="same",activation="relu",name="encoder5")(enc)
enc=keras.layers.Conv2D(filters=16,kernel_size=(3,3),strides=2,padding="same",activation="relu",name="encoder6")(enc)
bottleneck=keras.layers.Conv2D(filters=bottleneck_unit,kernel_size=(3,3),strides=2,padding="same",activation="relu",name="bottleneck")(enc)
###Output
_____no_output_____
###Markdown
Check whether we already have a trained model saved on our Google Drive so that we can resume training from it
###Code
if os.path.isfile(os.path.join("..","gdrive","My Drive","gray2color","encoder.h5")):
encoder=keras.models.load_model(os.path.join("..","gdrive","My Drive","gray2color","encoder.h5"),compile=True)
print("resuming older model")
else:
encoder=keras.Model(inputs=inputs,outputs=bottleneck)
print("No precious model found")
encoder.summary()
encoded_input=keras.layers.Input(shape=(8,8,bottleneck_unit))
dec=keras.layers.Conv2DTranspose(filters=16,strides=2,kernel_size=(3,3),padding="same",activation="relu",name="decoder1")(encoded_input)
dec=keras.layers.Conv2DTranspose(filters=32,strides=1,kernel_size=(3,3),padding="same",activation="relu",name="decoder2")(dec)
dec=keras.layers.Conv2DTranspose(filters=64,strides=2,kernel_size=(3,3),padding="same",activation="relu",name="decoder3")(dec)
dec=keras.layers.Conv2DTranspose(filters=128,strides=1,kernel_size=(3,3),padding="same",activation="relu",name="decoder4")(dec)
dec=keras.layers.Conv2DTranspose(filters=256,strides=1,kernel_size=(3,3),padding="same",activation="relu",name="decoder5")(dec)
dec=keras.layers.Conv2DTranspose(filters=512,strides=2,kernel_size=(3,3),padding="same",activation="relu",name="decoder6")(dec)
outputs=keras.layers.Conv2DTranspose(filters=2,strides=2,kernel_size=(3,3),padding="same",activation="tanh",name="output")(dec)
if os.path.isfile(os.path.join("..","gdrive","My Drive","gray2color","decoder.h5")):
decoder=keras.models.load_model(os.path.join("..","gdrive","My Drive","gray2color","decoder.h5"))
print("resuming older model")
else:
decoder=keras.Model(inputs=encoded_input,outputs=outputs)
print("No precious model found")
decoder.summary()
###Output
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 8, 8, 8)] 0
_________________________________________________________________
decoder1 (Conv2DTranspose) (None, 16, 16, 16) 1168
_________________________________________________________________
decoder2 (Conv2DTranspose) (None, 16, 16, 32) 4640
_________________________________________________________________
decoder3 (Conv2DTranspose) (None, 32, 32, 64) 18496
_________________________________________________________________
decoder4 (Conv2DTranspose) (None, 32, 32, 128) 73856
_________________________________________________________________
decoder5 (Conv2DTranspose) (None, 32, 32, 256) 295168
_________________________________________________________________
decoder6 (Conv2DTranspose) (None, 64, 64, 512) 1180160
_________________________________________________________________
output (Conv2DTranspose) (None, 128, 128, 2) 9218
=================================================================
Total params: 1,582,706
Trainable params: 1,582,706
Non-trainable params: 0
_________________________________________________________________
###Markdown
Creating our AutoEncoder model
###Code
class AutoEncoder(keras.Model):
def __init__(self,encoder,decoder):
super(AutoEncoder, self).__init__()
self.encoder=encoder
self.decoder=decoder
def call(self,inputs):
m=self.encoder.layers[0](inputs)
for i in range(1,len(self.encoder.layers)):
m=self.encoder.layers[i](m)
for i in range(1,len(self.decoder.layers)):
m=self.decoder.layers[i](m)
return m
autoencoder=AutoEncoder(encoder,decoder)
autoencoder.compile(optimizer="adam",loss="mse")
reduce_lr=tf.keras.callbacks.ReduceLROnPlateau(
monitor="loss",
factor=0.1,
patience=10,
verbose=0,
mode="auto",
min_delta=0.0001,
cooldown=0,
min_lr=0
)
###Output
_____no_output_____
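###Markdown
Note: an equivalent and slightly simpler way to express the same forward pass is to call the encoder and decoder sub-models directly instead of iterating over their layers. A minimal sketch (the class name `AutoEncoderSimple` is hypothetical):
###Code
class AutoEncoderSimple(keras.Model):
    # Same forward pass as above, but calling the sub-models directly
    def __init__(self, encoder, decoder):
        super(AutoEncoderSimple, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def call(self, inputs):
        latent = self.encoder(inputs)   # (batch, 8, 8, bottleneck_unit)
        return self.decoder(latent)     # (batch, 128, 128, 2)
###Output
_____no_output_____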
###Markdown
Training the model
###Code
epochs=10
history=autoencoder.fit(input_train,output_train,batch_size=32,epochs=epochs,shuffle=True,callbacks=[reduce_lr])
###Output
Epoch 1/10
157/157 [==============================] - 21s 134ms/step - loss: 0.0095 - lr: 0.0010
Epoch 2/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0093 - lr: 0.0010
Epoch 3/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0092 - lr: 0.0010
Epoch 4/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0091 - lr: 0.0010
Epoch 5/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0090 - lr: 0.0010
Epoch 6/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0089 - lr: 0.0010
Epoch 7/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0088 - lr: 0.0010
Epoch 8/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0086 - lr: 0.0010
Epoch 9/10
157/157 [==============================] - 21s 131ms/step - loss: 0.0085 - lr: 0.0010
Epoch 10/10
157/157 [==============================] - 20s 130ms/step - loss: 0.0083 - lr: 0.0010
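###Markdown
Optionally, we can take a quick look at how the training loss evolved across epochs using the `history` object returned by `fit` (a small sketch):
###Code
# Plot the training loss curve
plt.plot(history.history["loss"])
plt.xlabel("epoch")
plt.ylabel("MSE loss")
plt.show()
###Output
_____no_output_____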
###Markdown
Testing the model
1. latent vectors are produced by the encoder
2. these vectors are passed to the decoder to generate colored images
###Code
latent_vectors=encoder.predict(input_test)
decoded_test_images=decoder.predict(latent_vectors)
val_loss=keras.losses.mse(output_test,decoded_test_images)
print(tf.reduce_mean(val_loss).numpy())
###Output
0.009874075
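###Markdown
As a quick usage sketch, a single grayscale test image can be colorized end-to-end like this (the test index 0 is arbitrary):
###Code
# Colorize one test image: L channel in, predicted ab channels out, then merge and display
L_channel = input_test[0:1]                             # shape (1, 128, 128, 1)
ab_pred = decoder.predict(encoder.predict(L_channel))   # shape (1, 128, 128, 2)
colorized = decoded_images(L_channel, ab_pred)          # Lab -> RGB
plt.imshow(colorized[0])
plt.axis("off")
plt.show()
###Output
_____no_output_____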
###Markdown
Plotting test output images and original test colored images
###Code
images=decoded_images(input_test[0:20],decoded_test_images[0:20])
plot_images(images)
images=colored_test[0:20]
plot_images(images)
###Output
_____no_output_____
###Markdown
Saving the models locally and to Google Drive
###Code
encoder.save("encoder.h5")
decoder.save("decoder.h5")
encoder.save("/gdrive/My Drive/gray2color/encoder.h5")
decoder.save("/gdrive/My Drive/gray2color/decoder.h5")
!ls '/gdrive/My Drive/'
###Output
'Colab Notebooks' dogs_classifier_mobilenetv2.h5
Data gray2color
datasets Images
dogs_classifier_mobilenet1.h5 Keystores
dogs_classifier_mobilenet.h5 tarun_bisht_coat11.jpg
|
Regular Expressions.ipynb | ###Markdown
Regular Expressions
* The syntax for the regex library is to always pass the pattern first, and then the string second

References
* [Introduction to NLP in Python](https://campus.datacamp.com/courses/introduction-to-natural-language-processing-in-python/regular-expressions-word-tokenization)
###Code
import re
###Output
_____no_output_____
###Markdown
Search
Search will go through the entire string
###Code
match = re.search(r"\d+", "M123")
print("Match:", match)
print("Start:", match.start())
print("End:", match.end())
re.search(r"^([A-Z])", "M123").group(1)
###Output
_____no_output_____
###Markdown
Match
Whereas match will only match at the beginning of the string
###Code
re.match(r"\d+", "M123")
# No match
###Output
_____no_output_____
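###Markdown
For contrast (a quick sketch): when the pattern does occur at the very start of the string, re.match succeeds.
###Code
# match only succeeds because "M123" starts with a letter followed by digits
re.match(r"[A-Z]\d+", "M123")
###Output
_____no_output_____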
###Markdown
Split
###Code
re.split(r"\s+", "Machine learning is fun")
###Output
_____no_output_____
###Markdown
Find All
###Code
re.findall(r"[A-Z]+", "THIS documentation ROCKS")
###Output
_____no_output_____
###Markdown
Regular Expression Quick Guide

^        Matches the beginning of the line
$        Matches the end of the line
.        Matches any character
\s       Matches whitespace
\S       Matches any non-whitespace character
*        Repeats a character zero or more times
*?       Repeats a character zero or more times (non-greedy)
+        Repeats a character one or more times
+?       Repeats a character one or more times (non-greedy)
[aeiou]  Matches a single character in the listed set
[^XYZ]   Matches a single character not in the listed set
[a-z0-9] The set of characters can include a range
(        Indicates where string extraction is to start
)        Indicates where string extraction is to end
{} []
\d       Any digit
\D       Anything except for a digit
\w       Equals [a-zA-Z0-9_]
\W       Equals [^a-zA-Z0-9_]
###Code
# Class Recording 8 from 56:13. Edureka
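# A few quick examples illustrating the shortcuts above (expected results noted in comments; a small sketch):
import re
print(re.findall(r"\d+", "abc 123 def 45"))            # ['123', '45']
print(re.findall(r"\w+", "hello, world!"))             # ['hello', 'world']
print(re.search(r"^Mach", "Machine learning"))         # match at the beginning of the line
print(re.search(r"fun$", "Machine learning is fun"))   # match at the end of the line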
###Output
_____no_output_____
###Markdown
A character can be a-z, A-Z, 0-9, special chars (~!@$%^&*), or spaces.

hare = open('C:\\Users\\HP\\Desktop\\Sudheer DS\\LearnDataScience-master\\Python_Handouts\\hare.txt','w+')
hare.write("This notepad file is created using Anaconda")
hare.write(" This is a great day")
hare.seek(0)
print(hare.read())
hare.close()
###Code
# find(), re.search(), startswith()
###Output
_____no_output_____
###Markdown
Using re.search() like find()

hand = open('mbox-short.txt')
for line in hand:
    line = line.rstrip()
    if line.find('From:') >= 0:
        print(line)

Using re.search() like find()

import re
hand = open('mbox-short.txt')
for line in hand:
    line = line.rstrip()
    if re.search('From:', line):
        print(line)

Using re.search() like startswith()

hand = open('mbox-short.txt')
for line in hand:
    line = line.rstrip()
    if line.startswith('From:'):
        print(line)

Using re.search() like startswith()
Ignoring import re, and came to know that it is not necessary to import the library every time.

hand = open('mbox-short.txt')
for line in hand:
    line = line.rstrip()
    if re.search('^From:', line):   # Observe the previous code; the ^ is not necessary here either
        print(line)

Wild-card characters

import re
hand = open('rexp.txt')
for line in hand:
    line = line.rstrip()
    if re.search('^X.*:', line):
        print(line)

Fine tuning your Match

hand = open('rexp.txt')
for line in hand:
    line = line.rstrip()
    if re.search('^X-\S+:', line):
        print(line)

Matching and Extracting Data

import re
x = 'My 2 favorite numbers are 19 and 42'
y = re.findall('[0-9]+', x)
print(y)

import re
x = 'My 2 favorite numbers are 19 and 42'
y = re.findall('[0-9]+', x)   # re.findall always returns a list
print(y)

Experiment

import re
x = 'My 2 favorite numbers are 19 and 42'
y = re.findall('[0-9]', x)    # re.findall always returns a list
print(y)

Experiment

import re
x = 'My 2 favorite numbers are 19 and 42'
y = re.findall('[0-9]*', x)   # re.findall always returns a list
print(y)

Experiment - It Worked

import re
x = 'My 2 favorite numbers are 19 and 42'
y = re.findall('[^A-z\s]+', x)   # re.findall always returns a list
print(y)

y = re.findall('[AEIOU]+', x)
print(y)

Experiment

y = re.findall('[AEIOUM]+', x)
print(y)

Experiment

x = 'My 2 favorite numbers are 19 and 42'
y = re.findall('[A-z]+', x)
print(y)

Warning: Greedy Matching

import re
x = 'From: Using the : character'
y = re.findall('^F.+:', x)    # matches up to the last colon of the string
print(y)

Non-Greedy Matching

import re
x = 'From: Using the : character'
y = re.findall('^F.+?:', x)   # matches up to the first colon of the string
print(y)

Fine-Tuning String Extraction

x = 'From [email protected] Sat Jan 5 09:14:16 2008'
y = re.findall('\S+@\S+', x)
print(y)

Fine-Tuning String Extraction

x = 'From [email protected] Sat Jan 5 09:14:16 2008'
y = re.findall('\S+@\S+', x)
print(y)
y = re.findall('^From:.*? (\S+@\S+)', x)
print(y)

data = 'From [email protected] Sat Jan 5 09:14:16 2008'
atpos = data.find('.')
print(atpos)

data = 'From [email protected] Sat Jan 5 09:14:16 2008'
atpos = data.find('o')
print(atpos)

data = 'From [email protected] Sat Jan 5 09:14:16 2008'
atpos = data.find('@')
print(atpos)

# Find the space which comes after atpos - here atpos is the position of "@"
sppos = data.find(' ', atpos)
print(sppos)

host = data[atpos+1 : sppos]
print(host)

The Double Split Pattern

line = 'From [email protected] Sat Jan 5 09:14:16 2008'
words = line.split()
words

email = words[1]
email

pieces = email.split('@')
pieces

print(pieces[1])

The Regex Version

import re
lin = 'From [email protected] Sat Jan 5 09:14:16 2008'
y = re.findall('@([^ ]*)', lin)   # [^ ] Match non-blank character; [^ ]* Match many of them
print(y)

Experiment

import re
lin = 'From [email protected] Sat Jan 5 09:14:16 2008'
y = re.findall('@([^ ]+)', lin)   # [^ ] Match non-blank character; [^ ]+ Match many of them
print(y)

Even Cooler Regex Version

import re
lin = 'From [email protected] Sat Jan 5 09:14:16 2008'
y = re.findall('^From .*@([^ ]*)', lin)   # ( Start extracting; ) Stop extracting
print(y)   # Result will always be the extracted portion

Spam Confidence

import re
hand = open('mbox-short.txt')
numlist = list()
for line in hand:
    line = line.rstrip()
    stuff = re.findall('^X-DSPAM-Confidence: ([0-9.]+)', line)
    if len(stuff) != 1:
        continue
    num = float(stuff[0])
    numlist.append(num)
print('Maximum:', max(numlist))

Escape Character
If you want a special regular expression character to just behave normally (most of the time), you prefix it with '\'

import re
x = 'We just received $10.00 for cookies.'
y = re.findall('\$[0-9.]+', x)
print(y)
###Code
# Pattern: waz{3,5}up   ({3,5} repeats the preceding character 3 to 5 times)
# matches:
#   wazzzzzup
#   wazzzup
# but does not match
#   wazup, as there is only one z

# Pattern: aa+b*c+
#   a: at least two or more
#   b: at least 0 or more
#   c: at least one or more
# matches:
#   aaaabcc
#   aabbbbc
#   aacc
# but does not match
#   a

# Pattern: \d+ files? found\?
#   \d+ matches a number of one or more digits
#   files? matches the word "file" with 's' zero or one time
#   found\? matches the word "found"; \ treats ? as a character and not as a regex shortcut
# matches:
#   1 file found?
#   2 files found?
#   24 files found?
# but does not match
#   No files found.
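# A quick check of the patterns described above using re (a small sketch):
import re
print(re.search(r"waz{3,5}up", "wazzzzzup"))                 # matches
print(re.search(r"waz{3,5}up", "wazup"))                     # None - only one z
print(re.findall(r"aa+b*c+", "aaaabcc aabbbbc aacc a"))      # ['aaaabcc', 'aabbbbc', 'aacc']
print(re.search(r"\d+ files? found\?", "2 files found?"))    # matches
print(re.search(r"\d+ files? found\?", "No files found."))   # None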
###Output
_____no_output_____
###Markdown
[1-9]\.\s+abc
[1-9] matches any digit from 1 to 9
\. treats . as a character instead of a regex shortcut
\s+ whitespace character one or more times
abc the letters abc
matches:
1. abc
2. abc
3. abc
but does not match
4.abc

'^(file.+)\.pdf$'
matches:
file_record_transcript.pdf
file_0.7241999.pdf
but does not match
testfile_fake.pdf.tmp

([A-Za-z]+ ([0-9]+))
[A-Za-z]+ starts with a capital letter A-Z followed by a-z, one or many times
followed by a space
[0-9]+ one or more digits (\d)
matches:
Jan 1987
May 1969
Aug 2011

I love (cats|dogs)
matches:
I love cats
I love dogs
does not match
I love logs
I love cogs

^-?\d+(,\d+)*(\.\d+(e\d+)?)?$
^-? starts with - or not
^-?\d+ starts with - or not, followed by one or more digits
(,\d+)* a comma followed by one or more digits; this group appears zero or more times
(\.\d+(e\d+)?)? a "." followed by one or more digits, then optionally the letter "e" followed by one or more digits; this whole group present or not
matches:
3.14529
-255.34
128
1.9e10
123,340.00
but does not match
720p
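The last pattern can be checked quickly in code (a small sketch; the sample strings are the ones listed above):
###Code
import re
# Verify which sample strings match the signed/decimal/scientific number pattern
pattern = r"^-?\d+(,\d+)*(\.\d+(e\d+)?)?$"
for s in ["3.14529", "-255.34", "128", "1.9e10", "123,340.00", "720p"]:
    print(s, bool(re.match(pattern, s)))
###Output
_____no_output_____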
###Code
import re
p = re.compile(r'\sclass\s')
print(p.search('no class at all')) # Takes class along with spaces
p = re.compile(r'\bclass\b')
print(p.search('no class at all')) # Takes class without spaces
###Output
<re.Match object; span=(2, 9), match=' class '>
<re.Match object; span=(3, 8), match='class'>
###Markdown
Regular Expressions: a regex or regexp is essentially a search query for text that's expressed by a string pattern. When you run a search against a particular piece of text, anything that matches the regular expression pattern you specified is returned as a result of the search. Regular expressions let you answer questions like: what are all the four-letter words in a file? Or how many different error types are there in this error log?
###Code
import re
log = "July 31 07:51:48 mycomputer bad_process[12345]: ERROR Performing package upgrade"
regex1 = r"\[(\d+)]"
result = re.search(regex1, log)
print(result[1])
###Output
12345
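###Markdown
The same capturing-group idea can pull out other fields from the log line (a small sketch reusing the `log` string defined above):
###Code
# Two capture groups: the process name and the PID
result = re.search(r"(\w+)\[(\d+)\]", log)
print(result[1])  # bad_process
print(result[2])  # 12345
###Output
_____no_output_____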
###Markdown
grep is a command-line regex tool
###Code
import re
result = re.search(r"aza","plaza")
# always use raw strings (r"...") for regex patterns in Python
print(result)
result = re.search(r"aza","bazaar")
print(result)
result = re.search(r"aza","market")
print(result)
print(re.search(r"p.ng","penguin"))
print(re.search(r"^aag","aagtwg"))
import re
def check_aei (text):
result = re.search(r"a.e.i", text)
return result != None
print(check_aei("academia")) # True
print(check_aei("aerial")) # False
print(check_aei("paramedic")) # True
print(re.search(r"a.t","aagTwg", re.IGNORECASE))
###Output
<re.Match object; span=(1, 4), match='agT'>
###Markdown
Wildcards and Char Classes
###Code
print(re.search(r"[Pp]ython","Python"))
print(re.search(r"[a-z]ython","Python"))
print(re.search(r"[a-z]on","Prett cool qython"))
print(re.search("cloud[a-zA-Z0-9]on","cloud2on"))
import re
def check_punctuation (text):
result = re.search(r"[!-),-/<-?]", text)
return result != None
print(check_punctuation("This is a sentence that ends with a period.")) # True
print(check_punctuation("This is a sentence fragment without a period")) # False
print(check_punctuation("Aren't regular expressions awesome?")) # True
print(check_punctuation("Wow! We're really picking up some steam now!")) # True
print(check_punctuation("End of the line")) # False
print(re.search(r"[^a-zA-Z]","This is a sentence with a space"))
print(re.search(r"[^a-zA-Z ]","This is a sentence with a space."))
#because we add space, so it exclude the char inside[]
print(re.search(r"[^a-zA-Z ]","This is a sentence with a space2"))
#because we add space, so it exclude the char inside[]
print(re.search(r"cat|dog","This is sentence with a space catttin and hotdog."))
#because we add space, so it exclude the char inside[]
print(re.findall(r"cat|dog [a-z]","This is a sentence with a space dog cat dog cat cata cata."))
#because we add space, so it exclude the char inside[]
###Output
['dog c', 'dog c', 'cat', 'cat']
###Markdown
Repetition Qualifiers
###Code
print(re.search(r"py.*y","pylahropy"))
#the stars take the between leters
print(re.search(r"o+l+","wool woool lllle lily nonon"))
print(re.search(r"p?each","I like each"))
print(re.search(r"p?each","I like peach"))
###Output
<re.Match object; span=(7, 12), match='peach'>
|
015_Seaborn_Factor_Plot.ipynb | ###Markdown
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/12_Python_Seaborn_Module)** Seaborn: Factor Plot Welcome back to another lecture on *Data Visualization with Seaborn*! This lecture is kind of a continuation to **FacetGrid** that we had been discussing in previous lecture. Today our major emphasis is once again going to be on plotting *Categorical Data*. Well, you might think that we have already done enough of these in previous section, when we covered visualization methods like **Swarm Plot**, **Strip Plot**, **Box Plot**, **Violin Plot**, **Bar Plot** and **Point Plot**. News for you today is that all the above mentioned plots are generally considered *low-level methods* as they all plot onto a specific pair of *Matplotlib axes*.Today we shall discuss a higher level function, i.e. **Factor Plot**, which combines all the *low-level functions* with our **FacetGrid** to apply a *Categorical plot* across a grid of figure panels on **[Tidy DataFrame](http://vita.had.co.nz/papers/tidy-data.pdf)**. I shall attach a link in our notebook for you to better assess *Tidy Data* as defined by official page. Let us now dive little deeper to understand this *high-level* **Factor Plot**; not actually in terms of underlying code but in terms of the conceptual foundation, where it majorly holds relevance to **Factor Analysis**. To give you an overview, **[Factor Analysis](https://en.wikipedia.org/wiki/Factor_analysis)** is again a *statistical method* that describes *variability* among observed, correlated variables in terms of a potentially lower number of unobserved variables, which are referred to as **Factors**. **Factor Analysis** searches for similar Joint variations in response to an unobserved set of **[Latent variables](https://en.wikipedia.org/wiki/Latent_variable)**. The term **Latent** refers to the fact that even though these variables were not measured directly in a research design, still they are the ultimate goal of that project. Hence, the observed variables are modelled as *Linear combinations of potential factors*, plus *"error"* terms. Our Factor analysis aims to find such independent Latent variables and the theory behind these methods is that the *Information* gained about the inter-dependencies between observed variables can be used later to reduce the set of variables in a dataset. In short, we may say that Factor analysis is related to **Principal Component Analysis (PCA)**, though those two are not identical. During visualization, A *Factor plot* simply drafts the same plot generated for different response and factor variables and arranged on a single page. Here, the underlying plot generated can be any *Univariate* or *Bivariate* plot, and *Scatter Plot* serves this purpose quite frequently than others.Let us now get our package dependancies and plot a simple Factor Plot to understand the parameters offered by Seaborn to make our task easier:
###Code
# Importing intrinsic libraries:
import numpy as np
import pandas as pd
np.random.seed(44)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(style="ticks", palette="hsv")
import warnings
warnings.filterwarnings("ignore")
# Let us also get tableau colors we defined earlier:
tableau_20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
# Scaling above RGB values to [0, 1] range, which is Matplotlib acceptable format:
for i in range(len(tableau_20)):
r, g, b = tableau_20[i]
tableau_20[i] = (r / 255., g / 255., b / 255.)
# Loading Built-in Dataset:
exercise = sns.load_dataset("exercise")
# Pre-viewing Dataset:
exercise.head(10)
#exercise.columns
# Creating a basic Factor Plot:
sns.factorplot(x="time", y="pulse", hue="kind", data=exercise, size=6)
###Output
_____no_output_____
###Markdown
This looks quite informative and we already know how to interpret a **Point Plot** that we have on screen right now. If an individual is at **`rest`**, his/her **`pulse`** remains pretty constant at approximately 90, when measured for a **`time`** interval from 1 minute to half an hour. Even while **`walking`**, the **`pulse`** soars high for first **`15 minutes`**, but then stabilizes around 93, and then remains constant at that **`pulse`**. But the story is totally different when the individual is **`running`**, because the pulse then takes a major upwards leap in first **`time`** segment, and constantly keeps pounding with increase in **`time`**. Let us now look at the parameters offered by Seaborn to expand our horizon with **Factor Plot**:**`seaborn.factorplot(x=None, y=None, hue=None, data=None, row=None, col=None, col_wrap=None, estimator=, ci=95, n_boot=1000, units=None, order=None, hue_order=None, row_order=None, col_order=None, kind='point', size=4, aspect=1, orient=None, color=None, palette=None, legend=True, legend_out=True, sharex=True, sharey=True, margin_titles=False, facet_kws=None)`**The good news is that Seaborn seems to offer almost all the *optional parameters* that we've covered till now; and the other good news is that there isn't any extra parameter for us to fiddle with. So instead, let us play around with few more *Factor Plots* to visualize the difference. As of now, we just have a **Point Plot** on one facet, so we shall eventually even try to draw subplots to get further acquainted with the syntax and corresponding results. There isn't much we need to do in terms of *inference*, so let us run through few examples:
###Code
# Let us begin by altering the type of plot to a "BarPlot" on our FactorPlot:
sns.factorplot(x="time", y="pulse", hue="kind", data=exercise, size=7, kind="bar", palette="rocket")
###Output
_____no_output_____
###Markdown
As you would have guessed by now, the **`kind`** parameter defaults to a **Point Plot**, but just like this, we may modify it to **`box`**, **`violin`**, **`swarm`**, etc. Let us now *facet our plot along variable columns in the same row*:
###Code
# For a change, here we shall use a "Box Plot", instead of a "Bar Plot" to visualize the difference:
#sns.factorplot(x="time", y="pulse", hue="kind", col="diet", data=exercise, size=6, kind="box", palette="rocket")
# Let us pull our Legend inside the plot:
sns.factorplot(x="time", y="pulse", hue="kind", col="diet", data=exercise, size=7, kind="box", palette="rocket", legend_out=False)
###Output
_____no_output_____
###Markdown
That comfortably divides **`exercise`** datapoints with respect to the *factor* of whether they are on a **`low fat`** *or* a **`no fat`** *diet*. Now let us add a few more *optional parameters* and tweak our presentation; for this purpose we shall use our *Tips* dataset, so we shall commence by reloading this built-in dataset:
###Code
# Loading Built-in Tips Dataset:
tips = sns.load_dataset("tips")
# Let us get all the facets of our grid vertically stacked this time:
sns.factorplot(x="day", y="total_bill", hue="smoker", row="time", data=tips,
orient="v", size=4, aspect=2.5, palette="bwr", kind="point", dodge=True, cut=0, bw=.2, margin_titles=True)
###Output
_____no_output_____
###Markdown
Hmmm! So we have a competent plot here presenting the variations. Let us now try to improvise a few modifications in the **Barplot** that we plotted earlier using the **`exercise`** dataset:
###Code
# Let us also assign a variable "ax" to it:
ax = sns.factorplot(x="time", y="pulse", hue="kind", col="diet", data=exercise, size=7, kind="bar",
palette="rocket", ci=None)
# Let us now customize it by using methods on our FacetGrid:
ax.set_axis_labels("Time Taken", "Pulse Rate")
ax.set_xticklabels(["1 minute", "15 minutes", "30 minutes"])
ax.set_titles("{col_name} {col_var}")
ax.despine(left=True)
###Output
_____no_output_____ |
doc/REPORT.ipynb | ###Markdown
Modesolver_helper 연구실 심화 실습 Project----- Project members+ Sanghoon Kim+ Kangseok Kim+ Seoungmin Park ---- Introduction> * Background of Si photonics> + The problem is that electronics reduces energy efficiency when processing and processing data in a data center. > + Data center capacity is increasing every year due to other Internet activities.> + An alternative to Si Electronics is the introduction of Si Photonics. >> * Waveguide and Mode> + Waveguide traps light in the Waveguide structure and transmits it to a specific location. > + Waveguide consists of Core and Cladding.Due to the geometric morphology of Waveguide and the refractive index of the components, there is a specific Efield profile in the Waveguide section, which is called Mode.> + If the structure of the Waveguide does not change in the direction of light travel, the light will proceed with the probability image in the form of an E-field profile in a particular mode. Waveguide's design parameters have different mode characteristics, so the process of optimizing Waveguide to achieve the desired mode is a must in Waveguide design. Requirements Getting Stared + Entered the Terminal, write down 'pip install -r requirements.txt' and download it. \```pip install -r requirements.txt```----- MotivationI've practice to solve modes in optical-waveguide by using modesolverpy in lab practice course in my university. However, It seems there are some problems when you run modesolverpy in Windows. So, I make a decision to help running modesolverpy in windows. And also, There are some example codes in this repository. modesolverpy> + photonic mode solver with a nice interface and output > + simple structure drawing.> + automated data saving and plotting via Gnuplotm> + some limited (at this stage) data processing (finding MFD of fundamental mode), and> + easily extensible library> > The documentation for this project can be found here.https://modesolverpy.readthedocs.io/en/latest/index.html Structure> example>> Parameter> + > + t_slab> + ----- BaseCode Import Library
###Code
import modesolverpy.mode_solver as ms
import modesolverpy.structure as st
import numpy as mp
import pandas as pd
###Output
_____no_output_____
###Markdown
Make a list of parameters
###Code
# Key Element : [w_wg,t_soi,t_slab,n_eff]
# sweep : 0.01 um
total_list = []
w_wg_list = [round(0.2 +0.01*i,2) for i in range(0,81) ] # 0.2 ~ 1
t_soi_list = [round(0.2 + 0.01*i,2) for i in range(0,31)] # 0.2 ~ 0.5
t_slab_list = [] # 0 ~ t_soi/2
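# Illustrative sketch only (not from the original notebook): since t_slab ranges from 0 to
# t_soi/2, one hypothetical way to build the sweep for a given t_soi in 0.01 um steps is:
# t_slab_list = [round(0.01 * i, 2) for i in range(0, int(t_soi / 2 / 0.01) + 1)]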
###Output
_____no_output_____
###Markdown
Draw Structure
###Code
"""
Param : wg_width[um], film_thickness[um], wg_height[um]
Base wavelength of light : 1350nm
Return structure profile
"""
def draw_structure(wg_width=0.22,film_thickness=0.1,wg_height=0.05)->float:
struct = st.RidgeWaveguide(x_step = 0.02,
y_step = 0.02,
wg_height = wg_height,
wg_width = wg_width,
sub_height = 0.5,
sub_width = 2.,
n_sub = 1.4,
n_wg = 3.4,
n_clad = 1.,
wavelength = 1.350,
angle = 90.,
clad_height = 0.5,
film_thickness=film_thickness)
return struct
###Output
_____no_output_____
###Markdown
* Solve effective index of refraction
###Code
"""
====
param : wg_width[um], film_thickness[um], wg_height[um]
Return n_eff of waveguide with mode 0 and 1.
=====
"""
def find_n_eff_mode_zero(wg_width=0.22,film_thickness=0.1,wg_height=0.05)->float:
structure = draw_structure(wg_width,film_thickness,wg_height)
mode_solver = ms.ModeSolverSemiVectorial(2, semi_vectorial_method='Ex')
a = mode_solver.solve(structure)
    return a["n_effs"][0].real  # target value: n_eff for mode 0
def find_n_eff_mode_one(wg_width=0.22,film_thickness=0.1,wg_height=0.05)->float:
structure = draw_structure(wg_width,film_thickness,wg_height)
mode_solver = ms.ModeSolverSemiVectorial(4, semi_vectorial_method='Ex')
a = mode_solver.solve(structure)
    return a["n_effs"][1].real  # target value: n_eff for mode 1
###Output
_____no_output_____
###Markdown
Case1) param : Waveguide length> Draw structure & solve n_effs
###Code
t_soi = 0.22
t_slab = 0.1 # t_slab = film_thickness - wg_height
mode_solver = ms.ModeSolverSemiVectorial(4, semi_vectorial_method='Ex')  # solver used by the loop below
for i in w_wg_list:
x = draw_structure(i,t_soi,t_slab)
# x.write_to_file('.\\struct\\struct_w_wg={0}.dat'.format(i))
mode_solver.solve(x)
# mode_solver.write_modes_to_file('mode_w_wg={0}.dat'.format(i))
###Output
_____no_output_____ |
Linear Regression/India_per_capita-2030.ipynb | ###Markdown
Machine Learning With Python: Linear Regression With One Variable **Problem Statement**: Predict India's per capita income in year 2023 using india_gdp.csv
###Code
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
df = pd.read_csv('../datasets/gdp/india_gdp.csv')
df
%matplotlib inline
plt.xlabel('Year')
plt.ylabel('GDP (US$)')
plt.scatter(df.year,df.gdp,color='red',marker='+')
year = df[['year']]
year
gdp = df.gdp
gdp
# Create linear regression object
reg = linear_model.LinearRegression()
reg.fit(year,gdp)
reg.predict([[2023]])
reg.coef_
reg.intercept_
year_df = pd.read_csv("../datasets/year.csv")
year_df.head(3)
p = reg.predict(year_df)
p
year_df['income']=p
year_df
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(year,gdp,test_size=0.2,random_state=0)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train, y_train)
lr = LinearRegression().fit(x_train, y_train)
y_pred = regressor.predict([[2023]])
#y_pred = regressor.predict(x_test)
y_pred
year_df = pd.read_csv("../datasets/year.csv")
year_df.head(3)
q = regressor.predict(year_df)
q
year_df['income']=q
year_df
print("Training set score: {:.2f}".format(lr.score(x_train, y_train)))
print("Test set score: {:.7f}".format(lr.score(x_test, y_test)))
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
steps = [
('poly', PolynomialFeatures(degree=2)),
('model', LinearRegression())
]
pipeline = Pipeline(steps)
pipeline.fit(x_train, y_train)
print('Training score: {}'.format(pipeline.score(x_train, y_train)))
print('Test score: {}'.format(pipeline.score(x_test, y_test)))
pipeline.predict([[2023]])
# Now Read Years
year_f = pd.read_csv("../datasets/year.csv")
year_f.head(3)
qr = pipeline.predict(year_f)
qr
year_f['gdp']=qr
print('Forecast per capita GDP (US$) : ')
year_f
%matplotlib inline
plt.xlabel('Year')
plt.ylabel('GDP (US$)')
plt.plot(df.year,df.gdp,color='g',marker='+')
plt.plot(year_f.year,qr,color='red',marker='+')
###Output
_____no_output_____ |
4_PyTorch_Example.ipynb | ###Markdown
Minimal PyTorch Example This notebooks shows a very minimal example on how to use PyTorch for training a neural network on the Iris data set. 0. Preamble
###Code
import torch
import torch.nn.functional as F
import torch.nn as nn
torch.manual_seed(1)
###Output
_____no_output_____
###Markdown
The following lines check for GPU availability on the machine and set the GPU as the processing device (if available). If you are on Colab you can enable GPU support in the menu via "Runtime > Change runtime type" and select "GPU" as hardware accelerator.
###Code
if(torch.cuda.is_available()):
processing_chip = "cuda:0"
print(f"{torch.cuda.get_device_name(0)} available")
else:
processing_chip = "cpu"
print("No GPU available")
device = torch.device(processing_chip)
device
###Output
No GPU available
###Markdown
1. Data Preparation For this small example we use the [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set). The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on these four features, we want to train a model that can predict the species. In the first step we load the data into a Pandas DataFrame.
###Code
import pandas as pd
url = 'data/iris.csv'
dataset = pd.read_csv(url)
dataset.head(5)
###Output
_____no_output_____
###Markdown
To be able to train a model, we first need to transform the *species* column into a numeric:
###Code
dataset.loc[dataset.species=='Iris-setosa', 'species'] = 0
dataset.loc[dataset.species=='Iris-versicolor', 'species'] = 1
dataset.loc[dataset.species=='Iris-virginica', 'species'] = 2
dataset.head()
###Output
_____no_output_____
###Markdown
Next, we specify which columns we want to use as features and which as label:
###Code
X = dataset[dataset.columns[0:4]].values
y = dataset.species.values.astype(int)
###Output
_____no_output_____
###Markdown
We then split our data into training and test data.
###Code
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2)
print(train_X.shape, test_X.shape)
###Output
(120, 4) (30, 4)
###Markdown
To be able to use the data in PyTorch, we need to convert them into PyTorch tensors. Such a tensor can be thought of as an efficient way to represent lists and matrices (similar to Numpy), with the additional benefit that it can be moved to the GPU (the `.to(device)` part in the code below) and that it supports automatic backpropagation (more on this later):
###Code
train_x = torch.Tensor(train_X).float().to(device)
test_x = torch.Tensor(test_X).float().to(device)
train_y =torch.Tensor(train_y).long().to(device)
test_y = torch.Tensor(test_y).long().to(device)
###Output
_____no_output_____
###Markdown
2. Model definitionWe now define the structure of our neural network. For this we create a class that is a subclass of PyTorch's `nn.Module`. By convention we put the layers we want to use in the `__init__` method, and describe how data flows through the network in the `forward` method. Our network has 4 input features, 7 hidden layer nodes and 3 output neurons. The hidden layer uses a ReLU activation function. Note that the output layer does not have a softmax activation (unlike what we have seen in the lecture). Instead it gives out a raw score for each class (more on this later).
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.hidden = nn.Linear(4, 7)
self.output = nn.Linear(7, 3)
def forward(self, x):
z1 = self.hidden(x)
z2 = F.relu(z1)
z3 = self.output(z2) # no softmax. see CrossEntropyLoss()
return z3
###Output
_____no_output_____
###Markdown
3. Model TrainingWe can now start training our network. We run several epochs in which we first predict on the training data with our network and then backpropagate the loss. For this we use PyTorch's built-in optimizer that runs gradient descent on the weights of the network. Hence, in every epoch we reduce the loss on the training data and improve our network. As loss function we use cross entropy, which consumes the raw scores from the prediction and internally applies a softmax (that is why we do not need the softmax as the last layer in the network). Note that all training data is passed at once to our network (line `net(train_x)`), since PyTorch will predict on all data points in parallel.
###Code
# create network, move it to device (either CPU or GPU)
net = Net().to(device)
# define the parameters for training
no_epochs = 100
learning_rate = 0.04
loss_func = nn.CrossEntropyLoss() # applies softmax() internally
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)
print("\nStarting training ")
train_losses = []
for epoch in range(0, no_epochs):
optimizer.zero_grad() # set gradients to zero
predictions = net(train_x) # predict on the training data, this calls net.forward()
loss = loss_func(predictions, train_y) # compute loss between prediction and true labels
loss.backward() # calculate the gradients for every weight
optimizer.step() # do one step of gradient descent
train_losses.append(loss.item())
if epoch % 10 == 0:
print(f"Loss in epoch {epoch} is {loss.item()}")
print("Done training ")
import matplotlib.pyplot as plt
fig = plt.figure()
plt.plot(range(0, no_epochs), train_losses, color='blue')
plt.legend(['Train Loss'], loc='upper right')
plt.xlabel('number of epochs')
plt.ylabel('loss')
###Output
_____no_output_____
###Markdown
4. Model EvaluationFinally, we check the model accuracy on the test data. For this we predict on the test data, identify the class with the highest score and compare it to the true label.
###Code
predictions = net(test_x)
_, predicted = torch.max(predictions.data, 1) # get the class with highest score
correct = (predicted == test_y).sum().item() # compare predicted class with real class
print(f"Accuracy is {100. * correct / len(test_x)}%")
###Output
Accuracy is 76.66666666666667%
|
Tutorials/Tutorial3/DS_Tutorial3.ipynb | ###Markdown
Name:- Parshwa Shah Roll No:- 34 UID:- 2019230071 Tutorial 3
###Code
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# three Gaussian clusters in 3-D, centred at mu1, mu2 and mu3
n = 1000
mu1 = np.array([2, 1, -3])
mu2 = np.array([1, -4, 0])
mu3 = np.array([2, 4, 0])
X1 = randn(n, 3) + mu1
X2 = randn(n, 3) + mu2
X3 = randn(n, 3) + mu3

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(X1[:, 0], X1[:, 1], X1[:, 2], 'r.', alpha=0.5, markersize=2)
ax.plot(X2[:, 0], X2[:, 1], X2[:, 2], 'b.', alpha=0.5, markersize=2)
ax.plot(X3[:, 0], X3[:, 1], X3[:, 2], 'g.', alpha=0.5, markersize=2)
ax.set_xlim3d(-4, 6)
ax.set_ylim3d(-5, 5)
ax.set_zlim3d(-5, 2)
plt.show()
!pip install pandas_datareader
from pandas_datareader import *
from numpy.linalg import svd, pinv

# direction vectors between the cluster means
mu21 = (mu2 - mu1).reshape(3, 1)
mu31 = (mu3 - mu1).reshape(3, 1)
W = np.hstack((mu21, mu31))
U, _, _ = svd(W)   # we only need U
P = W @ pinv(W)    # orthogonal projector onto the column space of W
R = U.T @ P        # rotate into the basis U so the first two coordinates lie in that plane
RX1 = (R @ X1.T).T
RX2 = (R @ X2.T).T
RX3 = (R @ X3.T).T
plt.plot(RX1[:, 0], RX1[:, 1], 'b.', alpha=0.5, markersize=2)
plt.plot(RX2[:, 0], RX2[:, 1], 'g.', alpha=0.5, markersize=2)
plt.plot(RX3[:, 0], RX3[:, 1], 'r.', alpha=0.5, markersize=2)
plt.show()
###Output
_____no_output_____ |
vermeerkat/plugins/fleetingpol/diagnostics/Polcal solutions.ipynb | ###Markdown
Crosshand delays
###Code
with tbl(KX) as t:
delays = t.getcol("FPARAM")
ants = t.getcol("ANTENNA1")
field = t.getcol("FIELD_ID")
flags = t.getcol("FLAG")
time = t.getcol("TIME")
with tbl("%s::ANTENNA" % KX) as t:
antnames = t.getcol("NAME")
delays[flags] = np.nan
hrloc = mdates.HourLocator()
minloc = mdates.MinuteLocator()
dtFmt = mdates.DateFormatter('%hh%mm%ss')
collections = []
collections_time = []
pcmax = -np.inf
pcmin = np.inf
for a in np.unique(ants):
asel = ants == a
unflagged = np.logical_not(flags[:, 0, 0][asel])
collections.append(delays[:, 0, 0][asel][unflagged])
pcmax = max(pcmax, np.nanpercentile(delays[:, 0, 0][asel][unflagged],98))
pcmin = min(pcmin, np.nanpercentile(delays[:, 0, 0][asel][unflagged],2))
collections_time.append(time[asel][unflagged])
labels=[antnames[ai] for ai in np.unique(ants)]
plt.figure(figsize=(25, 6))
plt.title("Crosshand delays")
plt.boxplot(collections, 0, '', labels=labels)
plt.ylabel("Delay (ns)")
plt.show()
fig, ax = plt.subplots(figsize=(25, 6))
for t,a,aname in zip(collections_time, collections, labels):
ax.plot(convertMJD2unix(t), a, label=aname)
ax.set_ylabel("Delay (ns) [98%]")
ax.set_xlabel("Time (start: %s)" % str(convertMJD2unix([np.min(time)])[0]))
ax.legend(loc = (1.01,0))
ax.grid(True)
hfmt = mdates.DateFormatter('%H:%M')
ax.xaxis.set_major_formatter(hfmt)
limmean = np.nanmean(delays)
lim = np.nanstd(delays)
ax.set_ylim(pcmin, pcmax)
plt.show()
###Output
_____no_output_____
###Markdown
Crosshand phase gain stability
###Code
with tbl(Xref) as t:
bpgain = t.getcol("CPARAM")
ants = t.getcol("ANTENNA1")
field = t.getcol("FIELD_ID")
flags = t.getcol("FLAG")
time = t.getcol("TIME")
with tbl("%s::ANTENNA" % Xref) as t:
antnames = t.getcol("NAME")
bpgain[flags] = np.nan
for corr in range(bpgain.shape[2]):
collections = []
collections_std = []
collections_time = []
for a in np.unique(ants):
asel = ants == a
bpgain[flags] = np.nan
ang = np.angle(bpgain[asel, :, corr])
collections.append(np.nanmedian(ang, axis=1))
collections_std.append((np.nanpercentile(ang, 75.0, axis=1) - np.nanpercentile(ang, 25.0, axis=1))*0.5)
collections_time.append(time[asel])
labels=[antnames[ai] for ai in np.unique(ants)]
fig, ax = plt.subplots(figsize=(25, 6))
for t,a,s,aname in zip(collections_time, collections, collections_std, labels):
ax.errorbar(convertMJD2unix(t), np.rad2deg(a), yerr=np.rad2deg(s), label=aname)
ax.set_title("Crosshand phase DC")
ax.set_ylabel("Phase [deg]")
ax.set_xlabel("Time (start: %s)" % str(convertMJD2unix([np.min(time)])[0]))
ax.legend(loc = (1.01,0))
ax.grid(True)
hfmt = mdates.DateFormatter('%H:%M')
ax.xaxis.set_major_formatter(hfmt)
plt.show()
###Output
_____no_output_____
###Markdown
Crosshand phase
###Code
with tbl(Xfreq) as t:
xfsols = t.getcol("CPARAM")
ants = t.getcol("ANTENNA1")
field = t.getcol("FIELD_ID")
flags = t.getcol("FLAG")
time = t.getcol("TIME")
with tbl("%s::ANTENNA" % Xfreq) as t:
antnames = t.getcol("NAME")
with tbl("%s::SPECTRAL_WINDOW" % Xfreq) as t:
freqs = t.getcol("CHAN_FREQ")/1.0e6
xfsols[flags] = np.nan
collections = []
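# NOTE: `corr` is not defined in this cell; it carries over from the loop in the
# previous cell, so the last correlation index is the one collected and plotted here.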
for a in np.unique(ants):
asel = ants == a
collections.append(xfsols[:, :, corr][asel])
labels=[antnames[ai] for ai in np.unique(ants)]
fig, ax = plt.subplots(figsize=(25, 6))
for a,aname in zip(collections, labels):
ax.scatter(np.tile(freqs, (1, a.shape[0])),
np.rad2deg(np.angle(a)),
s=1.5,
label=aname)
ax.set_title("Crosshand phase")
ax.set_ylabel("Phase [deg]")
ax.set_xlabel("Frequency (MHz)")
ax.legend(loc = (1.01,0))
ax.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
First order leakage gain stability
###Code
with tbl(Dref) as t:
dgain = t.getcol("CPARAM")
ants = t.getcol("ANTENNA1")
field = t.getcol("FIELD_ID")
flags = t.getcol("FLAG")
time = t.getcol("TIME")
with tbl("%s::ANTENNA" % Dref) as t:
antnames = t.getcol("NAME")
dgain[flags] = np.nan
collections = []
collections_time = []
for a in np.unique(ants):
asel = ants == a
unflagged = np.logical_not(flags[:, 0, 0][asel])
collections.append(dgain[:, 0, 0][asel][unflagged])
collections_time.append(time[asel][unflagged])
labels=[antnames[ai] for ai in np.unique(ants)]
fig, ax = plt.subplots(figsize=(25, 6))
for t,a,aname in zip(collections_time, collections, labels):
ax.plot(convertMJD2unix(t), 10*np.log10(np.abs(a)), label=aname)
ax.set_title("Leakage gain")
ax.set_ylabel("Amplitude [dB]")
ax.set_xlabel("Time (start: %s)" % str(convertMJD2unix([np.min(time)])[0]))
ax.legend(loc = (1.01,0))
ax.grid(True)
hfmt = mdates.DateFormatter('%H:%M')
ax.xaxis.set_major_formatter(hfmt)
plt.show()
plt.figure(figsize=(25, 6))
plt.title("DC leakage")
plt.boxplot([10*np.log10(np.abs(c)) for c in collections], 0, '', labels=labels)
plt.ylabel("DC leakage")
plt.show()
###Output
_____no_output_____
###Markdown
Leakage
###Code
with tbl(Dfreq) as t:
dfsols = t.getcol("CPARAM")
ants = t.getcol("ANTENNA1")
field = t.getcol("FIELD_ID")
flags = t.getcol("FLAG")
time = t.getcol("TIME")
with tbl("%s::ANTENNA" % Dfreq) as t:
antnames = t.getcol("NAME")
with tbl("%s::SPECTRAL_WINDOW" % Dfreq) as t:
freqs = t.getcol("CHAN_FREQ")/1.0e6
dfsols[flags] = np.nan
collections = []
for a in np.unique(ants):
asel = ants == a
collections.append(dfsols[:, :, corr][asel])
labels=[antnames[ai] for ai in np.unique(ants)]
fig, ax = plt.subplots(figsize=(25, 6))
for a,aname in zip(collections, labels):
ax.scatter(np.tile(freqs, (1, a.shape[0])),
10*np.log10(np.abs(a)),
s=1.5,
label=aname)
ax.set_title("Leakage")
ax.set_ylabel("Leakage [dB]")
ax.set_xlabel("Frequency (MHz)")
ax.legend(loc = (1.01,0))
ax.grid(True)
plt.show()
###Output
_____no_output_____ |
documents/presentation-5/script7.ipynb | ###Markdown
Statistical Analysis of Data Environment SettingsA statistical analysis of the captured data will be performed. The environment configuration is the following:- A rectangular area is used whose dimensions are 2 x 1.5 meters. - A custom robot similar to an e-puck was used.- The robot starts in the middle of the arena.- The robot moves in a random fashion around the environment, avoiding obstacles.- The robot has 8 sensors that measure the distance between the robot and the walls.- Some noise was introduced in the robot's sensor measurements using the concept of [lookup tables](https://cyberbotics.com/doc/reference/distancesensor) in the Webots simulator, which according to the Webots documentation means "The first column of the table specifies the input distances, the second column specifies the corresponding desired response values, and the third column indicates the desired standard deviation of the noise. The noise on the return value is computed according to a gaussian random number distribution whose range is calculated as a percent of the response value (two times the standard deviation is often referred to as the signal quality)". The following values were taken: -First experiment: - (0, 0, 0.01) - (10, 10, 0.01) -Second experiment: - (0, 0, 0.2) - (10, 10, 0.2)- The simulator runs for 10 minutes in fast mode, which translates into 12 hours of collected data.
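To make the lookup-table convention concrete, here is a small illustrative sketch (not part of the Webots setup or this project's code; the function name and values are assumptions) of how noise proportional to the response value could be simulated:
###Code
# Hypothetical illustration of a Webots-style lookup table (distance, response, relative noise std).
import numpy as np

def noisy_response(true_distance, table=((0.0, 0.0, 0.2), (10.0, 10.0, 0.2))):
    dists, resps, stds = zip(*table)
    response = np.interp(true_distance, dists, resps)   # interpolate the ideal response
    rel_std = np.interp(true_distance, dists, stds)     # interpolate the relative std. deviation
    return response + np.random.normal(0.0, rel_std * response)

print([round(noisy_response(d), 3) for d in (0.5, 2.0, 5.0)])
###Output
_____no_output_____
###Markdown
With the second lookup table above, a reading at 5 m would typically be around 5 m plus Gaussian noise with a standard deviation of roughly 1 m (20% of the response).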
###Code
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install scikit-learn
!{sys.executable} -m pip install keras
import pandas as pd
import tensorflow as tf
import numpy as np
import math
from sklearn.ensemble import RandomForestRegressor
from keras import models
from keras import layers
from keras import regularizers
import matplotlib.pyplot as plt
from keras import optimizers
###Output
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/site-packages (0.22)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/site-packages (from scikit-learn) (1.1.0)
Requirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.7/site-packages (from scikit-learn) (1.16.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/site-packages (from scikit-learn) (0.14.1)
Requirement already satisfied: keras in /usr/local/lib/python3.7/site-packages (2.3.1)
Requirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.7/site-packages (from keras) (1.1.0)
Requirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.7/site-packages (from keras) (1.0.7)
Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.7/site-packages (from keras) (1.16.1)
Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.7/site-packages (from keras) (1.12.0)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.7/site-packages (from keras) (1.0.9)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/site-packages (from keras) (5.2)
Requirement already satisfied: h5py in /usr/local/lib/python3.7/site-packages (from keras) (2.9.0)
###Markdown
First Experiment
###Code
csv_file = 'robot_info_dataset-jumped.csv'
df = pd.read_csv(csv_file)
df.head()
###Output
_____no_output_____
###Markdown
Data pre-processing The collected data contains 1,384,848 samples.
###Code
df.shape
###Output
_____no_output_____
###Markdown
The data set contains some null values so they should be deleted from the samples.
###Code
df = df.dropna()
###Output
_____no_output_____
###Markdown
Now the data will be normalized.
###Code
normalized_df=(df-df.min())/(df.max()-df.min())
normalized_df.describe()
###Output
_____no_output_____
###Markdown
Input and output variables The data will be split into training, testing and validation sets. 60% of the data will be used for training, 20% for testing and 20% for validation.
###Code
# train size
test_size_percentage = .2
train_size_percentage = .6
ds_size = normalized_df.shape[0]
train_size = int(train_size_percentage * ds_size)
test_size = int(test_size_percentage * ds_size)
# shuffle dataset
normalized_df = normalized_df.sample(frac=1)
# separate inputs from outputs
inputs = normalized_df[['x', 'y', 'theta']]
targets = normalized_df[['sensor_1', 'sensor_2', 'sensor_3', 'sensor_4', 'sensor_5', 'sensor_6', 'sensor_7', 'sensor_8']]
# train
train_inputs = inputs[:train_size]
train_targets = targets[:train_size]
# test
test_inputs = inputs[train_size:(train_size + test_size)]
test_targets = targets[train_size:(train_size + test_size)]
# validation
validation_inputs = inputs[(train_size + test_size):]
validation_targets = targets[(train_size + test_size):]
###Output
_____no_output_____
###Markdown
Neural Network As input, the neural network receives the x, y coordinates and the rotation angle $\theta$. The outputs are the sensor measurements. One model per sensor will be created.
###Code
def get_model():
# neural network with a 10-neuron hidden layer
model = models.Sequential()
model.add(layers.Dense(10, activation='relu', input_shape=(3,)))
# model.add(layers.Dropout(0.5))
model.add(layers.Dense(6, activation='relu'))
model.add(layers.Dense(3, activation='relu'))
model.add(layers.Dense(1))
# rmsprop = optimizers.RMSprop(learning_rate=0.01)
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
return model
model = get_model()
history = model.fit(inputs, targets[['sensor_7']], epochs=75, batch_size=1, verbose=1)
history.history['mae']
model.save("nn_sensor_7.h5")
###Output
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/75
65341/65341 [==============================] - 211s 3ms/step - loss: 0.0155 - mae: 0.0960
Epoch 2/75
65341/65341 [==============================] - 212s 3ms/step - loss: 0.0096 - mae: 0.0740
Epoch 3/75
65341/65341 [==============================] - 222s 3ms/step - loss: 0.0067 - mae: 0.0608
Epoch 4/75
65341/65341 [==============================] - 211s 3ms/step - loss: 0.0064 - mae: 0.0591
Epoch 5/75
65341/65341 [==============================] - 233s 4ms/step - loss: 0.0063 - mae: 0.0583
Epoch 6/75
65341/65341 [==============================] - 239s 4ms/step - loss: 0.0062 - mae: 0.0572
Epoch 7/75
65341/65341 [==============================] - 220s 3ms/step - loss: 0.0060 - mae: 0.0566
Epoch 9/75
65341/65341 [==============================] - 227s 3ms/step - loss: 0.0059 - mae: 0.0563
Epoch 10/75
65341/65341 [==============================] - 231s 4ms/step - loss: 0.0058 - mae: 0.0561
Epoch 11/75
65341/65341 [==============================] - 178s 3ms/step - loss: 0.0058 - mae: 0.0558
Epoch 17/75
65341/65341 [==============================] - 176s 3ms/step - loss: 0.0058 - mae: 0.0557
Epoch 18/75
65341/65341 [==============================] - 176s 3ms/step - loss: 0.0059 - mae: 0.0559
Epoch 19/75
65341/65341 [==============================] - 178s 3ms/step - loss: 0.0058 - mae: 0.0560
Epoch 20/75
65341/65341 [==============================] - 175s 3ms/step - loss: 0.0058 - mae: 0.0557 0s - loss: 0.0058 -
Epoch 22/75
65341/65341 [==============================] - 178s 3ms/step - loss: 0.0057 - mae: 0.0556
Epoch 23/75
65341/65341 [==============================] - 167s 3ms/step - loss: 0.0057 - mae: 0.0554
Epoch 24/75
65341/65341 [==============================] - 154s 2ms/step - loss: 0.0056 - mae: 0.0549
Epoch 25/75
65341/65341 [==============================] - 160s 2ms/step - loss: 0.0056 - mae: 0.0547
Epoch 26/75
65341/65341 [==============================] - 209s 3ms/step - loss: 0.0056 - mae: 0.0549
Epoch 28/75
65341/65341 [==============================] - 208s 3ms/step - loss: 0.0057 - mae: 0.0553
Epoch 29/75
65341/65341 [==============================] - 196s 3ms/step - loss: 0.0057 - mae: 0.0554
Epoch 30/75
65341/65341 [==============================] - 208s 3ms/step - loss: 0.0058 - mae: 0.0557
Epoch 31/75
65341/65341 [==============================] - 206s 3ms/step - loss: 0.0057 - mae: 0.0554
Epoch 32/75
65341/65341 [==============================] - 192s 3ms/step - loss: 0.0056 - mae: 0.0552
Epoch 33/75
65341/65341 [==============================] - 192s 3ms/step - loss: 0.0056 - mae: 0.0547
Epoch 34/75
65341/65341 [==============================] - 195s 3ms/step - loss: 0.0055 - mae: 0.0543
Epoch 35/75
65341/65341 [==============================] - 192s 3ms/step - loss: 0.0055 - mae: 0.0541
Epoch 36/75
65341/65341 [==============================] - 186s 3ms/step - loss: 0.0055 - mae: 0.0539
Epoch 37/75
65341/65341 [==============================] - 162s 2ms/step - loss: 0.0054 - mae: 0.0539
Epoch 38/75
65341/65341 [==============================] - 198s 3ms/step - loss: 0.0053 - mae: 0.0529
Epoch 40/75
65341/65341 [==============================] - 201s 3ms/step - loss: 0.0052 - mae: 0.0525 0s - loss: 0.0052 - ma
Epoch 41/75
53410/65341 [=======================>......] - ETA: 35s - loss: 0.0052 - mae: 0.0521 |
HFI - A Brief Examination of Religious Freedom.ipynb | ###Markdown
A Brief Examination of World Religious Freedom The goal of this notebook is to perform a brief exploratory analysis of the human freedom index dataset, particularly with regards to religious freedom. We will begin by briefly looking at overall human freedom around the world, and then dive a little bit deeper into the trends in religious freedom. We will set out to answer some basic questions:1. How has world religious freedom varied over the 2008-2016 period?2. How is religious freedom connected to overall human freedom?3. How are government restrictions on religious freedom related to religious harassment?4. What countries represent the best and worst in terms of religious freedom, and how have those fluctuated in most recent times (2015-2016)?
###Code
# Import Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
from IPython.core.display import HTML
sns.set()
%matplotlib inline
# Read in data and view the first 5 rows
hfi_data = pd.read_csv('hfi_cc_2018.csv')
hfi_data = hfi_data[hfi_data['ISO_code'] != 'BRN'] # We drop this country as it does not contain pf_religion data
hfi_data.head()
# Define function for plotting yearly median for given metrics with std_dev error bars
def plot_med_year(df, metric, title=None):
df_year = df.groupby('year')
df_med = pd.merge(df_year[metric].median().to_frame().reset_index(),
df_year[metric].std().to_frame(name='std').reset_index())
plt.figure()
plt.errorbar(x=df_med['year'], y=df_med[metric],
yerr=df_med['std'], linestyle='None', marker='s')
if title:
plt.title(title)
else:
plt.title('Median {} by Year'.format(metric))
###Output
_____no_output_____
###Markdown
Overall FreedomHere, we will view the overall changes in median world freedom for the time period contained in the dataset (2008-2016). In particular, we will view the trends in overall human freedom and in personal freedom. While the human freedom score is made up of an average of personal freedom and economic freedom, we will not be analyzing economic freedom as our interest is primarily in religious freedom (a sub-category of personal freedom).
###Code
# Plot world median human freedom score (hf_score) by year
plot_med_year(hfi_data, 'hf_score', 'World Median HF Score by Year')
# Plot world median personal freedom score (pf_score) by year
plot_med_year(hfi_data, 'pf_score', 'World Median PF Score by Year')
###Output
_____no_output_____
###Markdown
The above graphs show a downward trend in both personal and overall human freedom from 2008 to 2016. Let's break these scores down further by world region.
###Code
# Explore yearly regional metrics in human freedom, personal freedom, and religious freedom
# Other religion related metrics are excluded as many have missing data
region_yr = hfi_data.groupby(['region', 'year']).median().reset_index()
# Plot human freedom by region
plt.figure()
sns.relplot(x='year', y='hf_score', hue='region', data=region_yr)
plt.title('Median Yearly hf_score by Region')
#Plot personal freeodm by region
plt.figure()
sns.relplot(x='year', y='pf_score', hue='region', data=region_yr)
plt.title('Median Yearly pf_score by Region')
###Output
_____no_output_____
###Markdown
Not surprisingly, North America and Western Europe consistently exhibit the highest scores for personal and overall human freedom. What is more surprising from these visualizations is the large spread in human and personal freedom between the five highest reigons and the five lowest regions in the world. These groups are spread by nearly a full point or more in both the personal and human freedom scores. Let's dive deeper into the sub-category of interest here - religious freedom, and see if similar trends exist in religious freedom. Religious Freedom
###Code
# Plot World Median religious freedom score by year
plot_med_year(hfi_data, 'pf_religion', 'World Med Religious Freedom Score by Year')
###Output
_____no_output_____
###Markdown
Religious freedom exhibits an overall downward trend over the 2008 to 2016 period, similar to those of personal and human freedom, albeit with more fluctuation. Next, we will explore the regional breakdown of religious freedom over this period.
###Code
#Plot religious freedom by region
plt.figure()
sns.relplot(x='year', y='pf_religion', hue='region', data=region_yr)
plt.title('Median Yearly pf_religion Score by Region')
###Output
_____no_output_____
###Markdown
From this regional breakdown, it is clear that there has been more significant variance in religious freedom for each region over 2008-2016 than there has been for overall personal or human freedom. Additionally, while Western Europe has generally ranked very highly in personal and human freedom, it ranks in the middle for religious freedom. Let's look further into the overall and regional medians for government religious restrictions and religious harassment.
###Code
#Plot religious freedom - restrictions
plot_med_year(hfi_data, 'pf_religion_restrictions', 'World Med Religious Freedom Restrictions Score by Year')
#Plot religious freedom - harassment
plot_med_year(hfi_data, 'pf_religion_harassment', 'World Med Religious Freedom Harassment Score by Year')
#Plot religious freedom - restrictions by region
plt.figure()
sns.relplot(x='year', y='pf_religion_restrictions', hue='region', data=region_yr)
plt.title('Median Yearly pf_religion_restrictions Score by Region')
#Plot religious freedom - harassment by region
plt.figure()
sns.relplot(x='year', y='pf_religion_harassment', hue='region', data=region_yr)
plt.title('Median Yearly pf_religion_harassment Score by Region')
###Output
_____no_output_____
###Markdown
Western Europe is again low in both sub-categories of religious freedom. There is also an interesting and somewhat surprising downward trend for North America, particularly from 2014-2016. To get a better understanding of how these metrics relate to each other and to overall religious, personal, and human freedom, we will view the correlation matrix.
###Code
# First, we will create a data frame containing only the metrics of interest
religion = hfi_data[['countries', 'region', 'year', 'pf_religion_harassment','pf_religion_restrictions',
'pf_religion', 'pf_score', 'hf_score']]
# Next, we create a heat map of the correlation matrix
plt.figure()
sns.heatmap(religion.drop(columns='year').corr(), annot=True)
###Output
_____no_output_____
###Markdown
Not surprisingly given the trends observed above, around the world there is a high degree of correlation between government restrictions on religious freedom and the presence of religious harassment. Additionally, there is a somewhat strong correlation (.49) between pf_religion and overall pf_score. Somewhat surprisingly, however, the correlation between pf_religion and overall hf_score is only .39. Thus, it exhibits a somewhat strong but not overwhelmingly strong correlation. Let's view these metrics another way to get a better sense for how they relate.
###Code
# Create a scatter matrix to view correlation at a more granular level
plt.figure()
sns.pairplot(religion.drop(columns='year'), hue='region')
###Output
/anaconda3/lib/python3.6/site-packages/statsmodels/nonparametric/kde.py:448: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X > clip[0], X < clip[1])] # won't work for two columns.
/anaconda3/lib/python3.6/site-packages/statsmodels/nonparametric/kde.py:448: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X > clip[0], X < clip[1])] # won't work for two columns.
/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
The above scatter matrix allows us to visualize the correlation between each of the key metrics in a more granular way. This allows us to see what the correlation matrix already told us, that pf_religion is only mildly correlated with hf_score, but that government restrictions of religion and religious harassment are more strongly correlated. Next, we will examine religious freedom at an even more granular level, and look at the world's 5 best and 5 worst countries in terms of religious freedom for 2015 and 2016.
###Code
# Create new df for these years only
years = [2015, 2016]
religion_15_16 = religion[religion['year'].isin(years)]
# Bottom 5 - 2015
religion_15_16[religion_15_16['year'] == 2015].nsmallest(n=5, columns='pf_religion')
###Output
_____no_output_____
###Markdown
From here, we see that the five lowest countries for pf_religion also have very low scores for religious restrictions, but moderate scores for religious harassment. Let's see if anything changed from 2015 to 2016.
###Code
# Bottom 5 - 2016
religion_15_16[religion_15_16['year'] == 2016].nsmallest(n=5, columns='pf_religion')
###Output
_____no_output_____
###Markdown
Interestingly, Iran is no longer in the bottom 5 for religious freedom in 2016, but Malaysia now is. The other four countries remain the same, albeit in a different order. Additionally, the scores in government restrictions in the bottom 5 all decreased from 2015-2016. This is not surprising given the overall world downward trend in restrictions scores from 2015 to 2016. Let's see if there is a similar trend amongst the 5 best countries in the world with regards to religious freedom.
###Code
# Top 5 - 2015
religion_15_16[religion_15_16['year'] == 2015].nlargest(n=5, columns='pf_religion')
# Top 5 - 2016
religion_15_16[religion_15_16['year'] == 2016].nlargest(n=5, columns='pf_religion')
###Output
_____no_output_____
###Markdown
Unlike the lowest countries for religious freedom, there is a higher degree of variance amongst the countries included in the top 5. Additionally, the countries in the top 5 do not seem to exhibit the same downward trend in government restrictions that the world in general, and the bottom 5 in particular, exhibit. Of course, this is not a perfect comparison given the changes in the countries included in the top 5, but for our brief analysis, it is still helpful. Lastly, it is interesting to note that there is slightly more regional variation amongst the top 5 countries than amongst the bottom 5. ConclusionThus, we have briefly answered the basic questions we set out to answer at the beginning of this analysis. While we have not gone too in depth, the analysis has shown us that there is a correlation between high levels of government restrictions on religion and religious harassment. Additionally, one somewhat surprising finding was the downward trend in government religious restrictions scores in Western Europe. We will end this analysis with three geographic visualizations which show how religious freedom scores are distributed around the globe.
###Code
%%html
<div class='tableauPlaceholder' id='viz1548565321797' style='position: relative'><noscript><a href='#'><img alt=' ' src='https://public.tableau.com/static/images/hf/hfi_analysis_v1/RelFreedom/1_rss.png' style='border: none' /></a></noscript><object class='tableauViz' style='display:none;'><param name='host_url' value='https%3A%2F%2Fpublic.tableau.com%2F' /> <param name='embed_code_version' value='3' /> <param name='site_root' value='' /><param name='name' value='hfi_analysis_v1/RelFreedom' /><param name='tabs' value='no' /><param name='toolbar' value='yes' /><param name='static_image' value='https://public.tableau.com/static/images/hf/hfi_analysis_v1/RelFreedom/1.png' /> <param name='animate_transition' value='yes' /><param name='display_static_image' value='yes' /><param name='display_spinner' value='yes' /><param name='display_overlay' value='yes' /><param name='display_count' value='yes' /><param name='filter' value='publish=yes' /></object></div> <script type='text/javascript'> var divElement = document.getElementById('viz1548565321797'); var vizElement = divElement.getElementsByTagName('object')[0]; vizElement.style.width='100%';vizElement.style.height=(divElement.offsetWidth*0.75)+'px'; var scriptElement = document.createElement('script'); scriptElement.src = 'https://public.tableau.com/javascripts/api/viz_v1.js'; vizElement.parentNode.insertBefore(scriptElement, vizElement); </script>
%%html
<div class='tableauPlaceholder' id='viz1548565707738' style='position: relative'><noscript><a href='#'><img alt=' ' src='https://public.tableau.com/static/images/hf/hfi_analysis_v1/RelRestrictions/1_rss.png' style='border: none' /></a></noscript><object class='tableauViz' style='display:none;'><param name='host_url' value='https%3A%2F%2Fpublic.tableau.com%2F' /> <param name='embed_code_version' value='3' /> <param name='site_root' value='' /><param name='name' value='hfi_analysis_v1/RelRestrictions' /><param name='tabs' value='no' /><param name='toolbar' value='yes' /><param name='static_image' value='https://public.tableau.com/static/images/hf/hfi_analysis_v1/RelRestrictions/1.png' /> <param name='animate_transition' value='yes' /><param name='display_static_image' value='yes' /><param name='display_spinner' value='yes' /><param name='display_overlay' value='yes' /><param name='display_count' value='yes' /></object></div> <script type='text/javascript'> var divElement = document.getElementById('viz1548565707738'); var vizElement = divElement.getElementsByTagName('object')[0]; vizElement.style.width='100%';vizElement.style.height=(divElement.offsetWidth*0.75)+'px'; var scriptElement = document.createElement('script'); scriptElement.src = 'https://public.tableau.com/javascripts/api/viz_v1.js'; vizElement.parentNode.insertBefore(scriptElement, vizElement); </script>
%%html
<div class='tableauPlaceholder' id='viz1548565668582' style='position: relative'><noscript><a href='#'><img alt=' ' src='https://public.tableau.com/static/images/hf/hfi_analysis_v1/RelHarassment/1_rss.png' style='border: none' /></a></noscript><object class='tableauViz' style='display:none;'><param name='host_url' value='https%3A%2F%2Fpublic.tableau.com%2F' /> <param name='embed_code_version' value='3' /> <param name='site_root' value='' /><param name='name' value='hfi_analysis_v1/RelHarassment' /><param name='tabs' value='no' /><param name='toolbar' value='yes' /><param name='static_image' value='https://public.tableau.com/static/images/hf/hfi_analysis_v1/RelHarassment/1.png' /> <param name='animate_transition' value='yes' /><param name='display_static_image' value='yes' /><param name='display_spinner' value='yes' /><param name='display_overlay' value='yes' /><param name='display_count' value='yes' /></object></div> <script type='text/javascript'> var divElement = document.getElementById('viz1548565668582'); var vizElement = divElement.getElementsByTagName('object')[0]; vizElement.style.width='100%';vizElement.style.height=(divElement.offsetWidth*0.75)+'px'; var scriptElement = document.createElement('script'); scriptElement.src = 'https://public.tableau.com/javascripts/api/viz_v1.js'; vizElement.parentNode.insertBefore(scriptElement, vizElement); </script>
###Output
_____no_output_____ |
GuidedTour/GuidedTour.ipynb | ###Markdown
Data ScrapingFor analyzing wallstreetbets data, we recommend downloading full.csv from [url] and putting it in ../Data/subreddit_wallstreetbets. If you want to scrape a different subreddit, you can use the following file. You will need API.env with appropriate credentials in /Automated/.
###Code
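# NOTE: this cell assumes an earlier setup cell imported `datetime as dt`, `os`, the
# project's `RedditScraper` module, and defined the `subreddit` variable.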
start = dt.datetime(2020, 1, 1)
end = dt.datetime(2020, 1, 30)
if not os.path.exists(f"../Data/subreddit_{subreddit}/full.csv"):
print("Did not find scraped data, scraping.")
RedditScraper.scrape_data(subreddits = [subreddit], start = start, end = end)
###Output
_____no_output_____
###Markdown
Change Point AnalysisThe next cell will open full.csv, compute the words that are among the top daily_words most popular words on any day, and then run the change point analysis model on each of them. The first time this is run, a cleaned-up version of the dataframe will be created for ease of processing.
###Code
up_to = None # Only calculate change points for up_to of the popular words. Set to None to do all of them.
daily_words = 1 # Get the daily_words most popular posts on each day.
# Compute the changepoints
# ChangePointAnalysis.changepointanalysis([subreddit], up_to = up_to, daily_words = daily_words)
# The output has been commented out because it is very long.
###Output
_____no_output_____
###Markdown
After running, these files will in ../Data/subreddit_subreddit/Changepoints/Metropolis_30000Draws_5000TuneThe final folder name corresponds to the parameters of the Markov chain used by pymc3 for the inference. Organizing the changepointsA table of the keywords considered with parameters estimated by the model is stored in : ../Data/subreddit_{subreddit}/Changepoints/results.csvYou can then sort through these to find the keywords the model detected a change in.There are two key parameters:change_point_confidence (also denoted p): the models belief that there was a changepoint. p = 1 indicates yes, p = 0 indicates no.mu_diff : a measurement of the size of the changepoint.
###Code
results = pd.read_csv(f"../Data/subreddit_{subreddit}/Changepoints/results.csv")
results = results.rename( columns = {"Unnamed: 0" : "keyword" })
filtered = results[(results.change_point_confidence == 1) & (results.mu_diff.apply(lambda x : np.abs(x)) > .03)]
filtered = filtered.sort_values(by = "mu_diff", ascending = False)
for row in filtered.iterrows():
word = row[1]["keyword"]
print(word, " Change point confidence: ", row[1]["change_point_confidence"], " Change point magnitude: ", row[1]["mu_diff"])
img = mpimg.imread(f'../Data/subreddit_WallStreetBets/Changepoints/Metropolis_30000Draws_5000Tune/ChangePoint_{word}.png')
imgplot = plt.imshow(img)
plt.axis('off')
plt.show()
###Output
gme Change point confidence: 1.0 Change point magnitude: 0.0870740331085252
###Markdown
Warning: Sometimes change point confidence alone is not enough. For instance, it was very confident (p = 1) that there was a changepoint in the following, although it wouldn't be reported as a change point because mu_2 - mu_1 was very small (~.003): Brief explanation of how this model works: The Bayesian model is as follows: 1. A coin is flipped with probability p. 2. If the coin comes up heads, then there is a change point. Otherwise, there is no change point. 3. It is assumed that the frequency random variable consists of independent draws from a beta distribution. If the coin decided there would be no change point, it is the same beta distribution at all times. Otherwise, it is a different beta on the different sides of the change point. The posterior distribution of p is the model's confidence that there is a change point, and the posterior distribution of tau represents its guess about when it occurred. The variable mu_1 represents the mean of the beta distribution before the change point, and mu_2 represents the mean of the beta distribution after the change point. Of course, this is not a realistic picture of the process; the independence of the different draws from the betas is especially unlike the data. However, it appears to be good enough to discover change points, especially when p and mu_2 - mu_1 are used together. As currently written, it only handles one change point, however this can be improved. (This model was inspired by the changepoint example from Chapter 1 of Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference.) Neural Nets The following code will train a neural net that predicts, given a submission's title text and time of posting, whether that submission's score will be above the median. We use pre-trained GloVe word embeddings in order to convert the title text into a vector that can be used in the neural net. These word embeddings are tuned along with the model parameters as the model is being trained. This technique and the neural net's architecture are taken from a blog post of Max Woolf, https://minimaxir.com/2017/06/reddit-deep-learning/.
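Before the neural-net cell, here is a minimal pymc3 sketch of the single-change-point model just described. It is an assumption-laden toy (the `freqs` array and all variable names are made up for the example) and is not the repository's ChangePointAnalysis code:
###Code
import numpy as np
import pymc3 as pm

# toy stand-in for one keyword's daily frequency series (values strictly inside (0, 1))
freqs = np.clip(np.random.rand(60) * 0.05, 1e-4, 1 - 1e-4)
t = np.arange(len(freqs))

with pm.Model() as toy_model:
    p = pm.Uniform("p", 0.0, 1.0)                                   # belief that a change point exists
    has_cp = pm.Bernoulli("has_cp", p=p)                            # the "coin flip"
    tau = pm.DiscreteUniform("tau", lower=0, upper=len(freqs) - 1)  # change-point location
    mu_1 = pm.Uniform("mu_1", 0.0, 1.0)                             # mean frequency before the change point
    mu_2 = pm.Uniform("mu_2", 0.0, 1.0)                             # mean frequency after the change point
    kappa = pm.Exponential("kappa", 0.1)                            # concentration of the beta likelihood

    # before tau (or if the coin said "no change point") use mu_1, otherwise mu_2
    mu_t = pm.math.switch(has_cp * pm.math.ge(t, tau), mu_2, mu_1)
    obs = pm.Beta("obs", alpha=mu_t * kappa, beta=(1.0 - mu_t) * kappa, observed=freqs)

    trace = pm.sample(5000, tune=2000)  # pymc3 assigns Metropolis-style steps to the discrete variables
###Output
_____no_output_____
###Markdown
In this sketch the posterior of `p` plays the role of change_point_confidence and `mu_2 - mu_1` the role of mu_diff. The next cell returns to the neural-net training described above.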
###Code
model, accuracies, word_tokenizer, df = CreateNeuralNets.buildnets(['wallstreetbets'])[0]
###Output
Starting Post Classification Model.
###Markdown
Predicted popularity as a time seriesWe now show how the predicted popularity of a post depends on the day on which it was posted. We plot the prediction for the same title, "GME GME GME GME GME GME", as if it were posted at noon each day. It is interesting to note that the variance seems to decrease after the GameStop short squeeze of early 2021.
###Code
text = "GME GME GME GME GME GME"
CreateNeuralNets.timeseries(df, text, model, word_tokenizer)
###Output
_____no_output_____
###Markdown
This will produce a picture like the following: Workshopping exampleHere we start with a potential title (to be posted at noon on April 1, 2021) and attempt to improve it based on the model's prediction.
###Code
#this is the date information for April 1, 2021.
#Note we normalize so the earliest year in our data set (2020)
#and the earliest day of the year correspond to the number 0
input_hour = np.array([12])
input_dayofweek = np.array([3])
input_minute = np.array([0])
input_dayofyear = np.array([91])
input_year = np.array([0])
input_info=[input_hour,input_dayofweek, input_minute, input_dayofyear, input_year]
#given a list of potential titles, predict the success of each one
def CheckPopularity(potential_titles):
for title in potential_titles:
print(model.predict([CreateNeuralNets.encode_text(title,word_tokenizer)] + input_info)[0][0][0])
potential_titles = ["Buy TSLA", "Buy TSLA! I like the stock", "Buy TSLA! Elon likes the stock",
"TSLA is the next GME. Elon likes the stock",
"TSLA is the next GME. To the moon! Elon likes the stock"]
CheckPopularity(potential_titles)
###Output
0.9536921
0.957647
0.9620316
0.98298347
0.983858
|
evaluations/sars-cov-2/4-query.case-5000.ipynb | ###Markdown
1. Parameters
###Code
cases_dir = 'cases/unset'
metadata_file = 'input/metadata-subsample-pangolin.tsv'
build_tree = False
# Parameters
cases_dir = "cases/case-5000"
iterations = 3
number_samples = 5000
build_tree = False
from pathlib import Path
import imp
fp, pathname, description = imp.find_module('gdi_benchmark', ['../../lib'])
gdi_benchmark = imp.load_module('gdi_benchmark', fp, pathname, description)
cases_dir_path = Path(cases_dir)
case_name = str(cases_dir_path.name)
index_path = cases_dir_path / 'index'
output_api_path = cases_dir_path / 'query-api.tsv'
output_cli_path = cases_dir_path / 'query-cli.tsv'
###Output
_____no_output_____
###Markdown
2. Benchmark command-line
###Code
import pandas as pd
import genomics_data_index.api as gdi
def benchmark_cli_index(name: str, index_path: Path, build_tree: bool) -> pd.DataFrame:
benchmark_commands = {
'query hasa': f'gdi --project-dir {index_path} --ncores 1 query "hasa:hgvs_gn:NC_045512.2:S:p.D614G"',
'query isa': f'gdi --project-dir {index_path} --ncores 1 query "isa:Switzerland/100108/2020"',
'query --summary': f'gdi --project-dir {index_path} --ncores 1 query "hasa:hgvs_gn:NC_045512.2:S:p.D614G" --summary',
'query --features-summary': f'gdi --project-dir {index_path} --ncores 1 query --features-summary mutations',
'list samples': f'gdi --project-dir {index_path} --ncores 1 list samples',
}
if build_tree:
benchmark_commands['query isin'] = f'gdi --project-dir {index_path} --ncores 1 query --reference-name NC_045512 "isin_5_substitutions:Switzerland/100108/2020"'
db = gdi.GenomicsDataIndex.connect(index_path)
number_samples = db.count_samples()
number_features_no_unknown = db.count_mutations(reference_genome='NC_045512', include_unknown=False)
number_features_all = db.count_mutations(reference_genome='NC_045512', include_unknown=True)
iterations = 10
benchmarker = gdi_benchmark.QueryBenchmarkHandler()
return benchmarker.benchmark_cli(name=name, kind_commands=benchmark_commands, number_samples=number_samples,
number_features_no_unknown=number_features_no_unknown, number_features_all=number_features_all,
iterations=iterations)
cli_df = benchmark_cli_index(name=case_name, index_path=index_path, build_tree=build_tree)
cli_df.head(3)
cli_df.to_csv(output_cli_path, sep='\t', index=False)
###Output
_____no_output_____
###Markdown
3. Test query API 3.1. Load (example) metadataThe simulated data is based on real sample names and a real tree, so I can load real metadata and attach it to a query (though the mutations and reference genome are all simulated).
###Code
import pandas as pd
metadata_df = pd.read_csv(metadata_file, sep='\t')
metadata_df.head(2)
###Output
_____no_output_____
###Markdown
3.2. Define benchmark cases
###Code
from typing import List
import genomics_data_index.api as gdi
def benchmark_api_index(name: str, index_path: Path, build_tree: bool) -> pd.DataFrame:
db = gdi.GenomicsDataIndex.connect(index_path)
q_no_join = db.samples_query(reference_name='NC_045512', universe='mutations')
q_join = db.samples_query(reference_name='NC_045512', universe='mutations').join(metadata_df, sample_names_column='strain')
q = q_join.hasa('hgvs_gn:NC_045512.2:S:p.D614G')
r = q_join.hasa('hgvs_gn:NC_045512.2:N:p.R203K')
number_samples = db.count_samples()
number_features_no_unknown = db.count_mutations(reference_genome='NC_045512', include_unknown=False)
number_features_all = db.count_mutations(reference_genome='NC_045512', include_unknown=True)
repeat = 10
benchmark_cases = {
'db.samples_query': lambda: db.samples_query(reference_name='NC_045512', universe='mutations'),
'q.join': lambda: q_no_join.join(metadata_df, sample_names_column='strain'),
'q.features_summary': lambda: q_join.features_summary(),
'q.features_comparison': lambda: q_join.features_comparison(sample_categories='lineage', categories_kind='dataframe', kind='mutations', unit='proportion'),
'q.hasa': lambda: q_join.hasa("hgvs_gn:NC_045512.2:N:p.R203K"),
'q.isa': lambda: q_join.isa("Switzerland/100112/2020"),
'q AND r': lambda: q & r,
'q.toframe': lambda: q_join.toframe(),
'q.summary': lambda: q_join.summary(),
}
if build_tree:
benchmark_cases['q.isin (distance)'] = lambda: q_join.isin("Switzerland/100108/2020", kind='distance', distance=5, units='substitutions')
benchmark_cases['q.isin (mrca)'] = lambda: q_join.isin(["Switzerland/100108/2020", "FR993751"], kind='mrca')
benchmarker = gdi_benchmark.QueryBenchmarkHandler()
return benchmarker.benchmark_api(name=name, kind_functions=benchmark_cases,
number_samples=number_samples,
number_features_no_unknown=number_features_no_unknown,
number_features_all=number_features_all,
repeat=repeat)
###Output
_____no_output_____
###Markdown
3.3. Benchmark reads index
###Code
api_df = benchmark_api_index(name=case_name, index_path=index_path, build_tree=build_tree)
api_df.head(5)
api_df.to_csv(output_api_path, sep='\t', index=False)
###Output
_____no_output_____ |
src/modeling/Apply Models to Figshare Talk Corpus.ipynb | ###Markdown
Load Models
###Code
# Imports used throughout this notebook
import os
import multiprocessing as mp
import numpy as np
import pandas as pd
import joblib  # on older scikit-learn installs, use: from sklearn.externals import joblib
tasks = ['attack', 'toxicity', 'aggression']
model_dict = {}
for task in tasks:
os.system("python get_prod_models.py --task %s" % task)
model_dict[task] = joblib.load("/tmp/%s_linear_char_oh_pipeline.pkl" % task)
def apply_models(df):
comments = df['comment']
for task, model in model_dict.items():
scores = model.predict_proba(comments)[:,1]
df['pred_%s_score' % task] = scores
return df
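# Hypothetical spot check (not part of the original pipeline): since the loaded
# sklearn pipelines score raw comment strings, a toy frame can be checked with
#   toy = pd.DataFrame({'comment': ['thanks for the helpful edit', 'you are an idiot']})
#   apply_models(toy)[['comment', 'pred_toxicity_score', 'pred_attack_score']]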
def pred_helper(df):
if len(df) == 0:
return None
return df.assign(timestamp = lambda x: pd.to_datetime(x.timestamp),
comment = lambda x: x['comment'].astype(str))\
.pipe(apply_models)
def prep_in_parallel(path, k = 8):
df = pd.read_csv(path, sep = '\t', encoding = 'utf-8')
m = df.shape[0]
if m < 15000:
n_groups = 1
else:
n_groups = int(m / 10000.0)
df['key'] = np.random.randint(0, high=n_groups, size=m)
dfs = [e[1] for e in df.groupby('key')]
#dfs = [pred_helper(d) for d in dfs]
p = mp.Pool(k)
dfs = p.map(pred_helper, dfs)
p.close()
p.join()
return pd.concat(dfs)
base = '../../data/figshare/'
nss = ['user', 'article']
years = range(2001, 2016)
for ns in nss:
for year in years:
dirname = "comments_%s_%d" % (ns, year)
print(dirname)
indir = os.path.join(base, dirname + ".tar.gz")
os.system("mkdir ", os.path.join(base, "scored"))
outf = os.path.join(base, "scored", dirname + ".tsv.gz")
os.system("cp %s ." % indir)
os.system("tar -zxvf %s.tar.gz" % dirname)
dfs = []
for inf in os.listdir(dirname):
print(inf)
if inf.endswith(".tsv"):
df = prep_in_parallel(os.path.join(dirname, inf), k = 8)
dfs.append(df)
os.system("rm -rf %s" % dirname)
os.system("rm -rf %s.tar.gz" % dirname)
pd.concat(dfs).to_csv(outf, sep = '\t', index = False, compression = "gzip")
df.sort_values("pred_toxicity_score").tail()
###Output
_____no_output_____ |
docs/feature-store/end-to-end-demo/01-ingest-datasources.ipynb | ###Markdown
Part 1: Data Ingestion This demo showcases financial fraud prevention. It uses the MLRun feature store to define complex features that help identify fraud. Fraud prevention is a special challenge since it requires processing raw transaction and events in real-time and being able to quickly respond and block transactions before they occur.To address this, you'll create a development pipeline and a production pipeline. Both pipelines share the same feature engineering and model code, but serve data very differently. Furthermore, MLRun automates the data and model monitoring process, drift identification, and trigger retraining in a CI/CD pipeline. This process is described in the diagram below: The raw data is described as follows:| TRANSACTIONS || &x2551; |USER EVENTS || |-----------------|----------------------------------------------------------------|----------|-----------------|----------------------------------------------------------------|| **age** | age group value 0-6. Some values are marked as U for unknown | &x2551; | **source** | The party/entity related to the event || **gender** | A character to define the age | &x2551; | **event** | event, such as login or password change || **zipcodeOri** | ZIP code of the person originating the transaction | &x2551; | **timestamp** | The date and time of the event || **zipMerchant** | ZIP code of the merchant receiving the transaction | &x2551; | | || **category** | category of the transaction (e.g., transportation, food, etc.) | &x2551; | | || **amount** | the total amount of the transaction | &x2551; | | || **fraud** | whether the transaction is fraudulent | &x2551; | | || **timestamp** | the date and time in which the transaction took place | &x2551; | | || **source** | the ID of the party/entity performing the transaction | &x2551; | | || **target** | the ID of the party/entity receiving the transaction | &x2551; | | || **device** | the device ID used to perform the transaction | &x2551; | | | This notebook introduces how to **Ingest** different data sources to the **Feature Store**.The following FeatureSets are created:- **Transactions**: Monetary transactions between a source and a target.- **Events**: Account events such as account login or a password change.- **Label**: Fraud label for the data.By the end of this tutorial you’ll know how to:- Create an ingestion pipeline for each data source.- Define preprocessing, aggregation, and validation of the pipeline.- Run the pipeline locally within the notebook.- Launch a real-time function to ingest live data.- Schedule a cron to run the task when needed.
###Code
project_name = 'fraud-demo'
import mlrun
# Initialize the MLRun project object
project = mlrun.get_or_create_project(project_name, context="./", user_project=True)
###Output
> 2022-03-16 05:45:07,703 [info] loaded project fraud-demo from MLRun DB
###Markdown
Step 1 - Fetch, process and ingest the datasets 1.1 - Transactions Transactions
###Code
# Helper functions to adjust the timestamps of our data
# while keeping the order of the selected events and
# the relative distance from one event to the other
def date_adjustment(sample, data_max, new_max, old_data_period, new_data_period):
'''
Adjust a specific sample's date according to the original and new time periods
'''
sample_dates_scale = ((data_max - sample) / old_data_period)
sample_delta = new_data_period * sample_dates_scale
new_sample_ts = new_max - sample_delta
return new_sample_ts
def adjust_data_timespan(dataframe, timestamp_col='timestamp', new_period='2d', new_max_date_str='now'):
'''
Adjust the dataframe timestamps to the new time period
'''
# Calculate old time period
data_min = dataframe.timestamp.min()
data_max = dataframe.timestamp.max()
old_data_period = data_max-data_min
# Set new time period
new_time_period = pd.Timedelta(new_period)
new_max = pd.Timestamp(new_max_date_str)
new_min = new_max-new_time_period
new_data_period = new_max-new_min
# Apply the timestamp change
df = dataframe.copy()
df[timestamp_col] = df[timestamp_col].apply(lambda x: date_adjustment(x, data_max, new_max, old_data_period, new_data_period))
return df
import pandas as pd
# Fetch the transactions dataset from the server
transactions_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/data.csv', parse_dates=['timestamp'], nrows=500)
# Adjust the samples timestamp for the past 2 days
transactions_data = adjust_data_timespan(transactions_data, new_period='2d')
# Preview
transactions_data.head(3)
###Output
_____no_output_____
###Markdown
Transactions - create a feature set and preprocessing pipelineCreate the feature set (data pipeline) definition for the **credit transaction processing** that describes the offline/online data transformations and aggregations.The feature store automatically adds an offline `parquet` target and an online `NoSQL` target by using `set_targets()`.The data pipeline consists of:* **Extracting** the data components (hour, day of week)* **Mapping** the age values* **One hot encoding** for the transaction category and the gender* **Aggregating** the amount (avg, sum, count, max over 2/12/24 hour time windows)* **Aggregating** the transactions per category (over 14 days time windows)* **Writing** the results to **offline** (Parquet) and **online** (NoSQL) targets
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fstore
from mlrun.feature_store.steps import OneHotEncoder, MapValues, DateExtractor
# Define the transactions FeatureSet
transaction_set = fstore.FeatureSet("transactions",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="transactions feature set")
# Define and add value mapping
main_categories = ["es_transportation", "es_health", "es_otherservices",
"es_food", "es_hotelservices", "es_barsandrestaurants",
"es_tech", "es_sportsandtoys", "es_wellnessandbeauty",
"es_hyper", "es_fashion", "es_home", "es_contents",
"es_travel", "es_leisure"]
# One Hot Encode the newly defined mappings
one_hot_encoder_mapping = {'category': main_categories,
'gender': list(transactions_data.gender.unique())}
# Define the graph steps
transaction_set.graph\
.to(DateExtractor(parts = ['hour', 'day_of_week'], timestamp_col = 'timestamp'))\
.to(MapValues(mapping={'age': {'U': '0'}}, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))
# Add aggregations for 2, 12, and 24 hour time windows
transaction_set.add_aggregation(name='amount',
column='amount',
operations=['avg','sum', 'count','max'],
windows=['2h', '12h', '24h'],
period='1h')
# Add the category aggregations over a 14 day window
for category in main_categories:
transaction_set.add_aggregation(name=category,column=f'category_{category}',
operations=['count'], windows=['14d'], period='1d')
# Add default (offline-parquet & online-nosql) targets
transaction_set.set_targets()
# Plot the pipeline so we can see the different steps
transaction_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
###Markdown
Transactions - ingestion
###Code
# Ingest the transactions dataset through the defined pipeline
transactions_df = fstore.ingest(transaction_set, transactions_data,
infer_options=fstore.InferOptions.default())
transactions_df.head(3)
###Output
persist count = 0
persist count = 100
persist count = 200
persist count = 300
persist count = 400
persist count = 500
persist count = 600
persist count = 700
persist count = 800
persist count = 900
persist count = 1000
###Markdown
1.2 - User events User events - fetching
###Code
# Fetch the user_events dataset from the server
user_events_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/events.csv',
index_col=0, quotechar="\'", parse_dates=['timestamp'], nrows=500)
# Adjust to the last 2 days to see the latest aggregations in our online feature vectors
user_events_data = adjust_data_timespan(user_events_data, new_period='2d')
# Preview
user_events_data.head(3)
###Output
_____no_output_____
###Markdown
User events - create a feature set and preprocessing pipelineDefine the events feature set.This is a fairly straightforward pipeline in which you only "one hot encode" the event categories and save the data to the default targets.
###Code
user_events_set = fstore.FeatureSet("events",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="user events feature set")
# Define and add value mapping
events_mapping = {'event': list(user_events_data.event.unique())}
# One Hot Encode
user_events_set.graph.to(OneHotEncoder(mapping=events_mapping))
# Add default (offline-parquet & online-nosql) targets
user_events_set.set_targets()
# Plot the pipeline so we can see the different steps
user_events_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
###Markdown
User events - ingestion
###Code
# Ingestion of the newly created events feature set
events_df = fstore.ingest(user_events_set, user_events_data)
events_df.head(3)
###Output
persist count = 0
persist count = 100
persist count = 200
persist count = 300
persist count = 400
persist count = 500
###Markdown
Step 2 - Create a labels dataset for model training Label set - create a feature setThis feature set contains the label for the fraud demo; it is ingested directly into the default targets without any changes.
###Code
def create_labels(df):
labels = df[['fraud','source','timestamp']].copy()
labels = labels.rename(columns={"fraud": "label"})
labels['timestamp'] = labels['timestamp'].astype("datetime64[ms]")
labels['label'] = labels['label'].astype(int)
labels.set_index('source', inplace=True)
return labels
# Define the "labels" feature set
labels_set = fstore.FeatureSet("labels",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="training labels",
engine="pandas")
labels_set.graph.to(name="create_labels", handler=create_labels)
# specify only Parquet (offline) target since its not used for real-time
labels_set.set_targets(['parquet'], with_defaults=False)
labels_set.plot(with_targets=True)
###Output
_____no_output_____
###Markdown
Label set - ingestion
###Code
# Ingest the labels feature set
labels_df = fstore.ingest(labels_set, transactions_data)
labels_df.head(3)
###Output
_____no_output_____
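###Markdown
As a quick preview of how these feature sets are consumed for training, the cell below sketches an offline feature vector that joins the three sets. This is an illustrative sketch: the vector name and the exact feature names are assumptions based on the aggregations defined above, and the actual training flow is covered in the next part of the demo.
###Code
# Define a feature vector joining features from the three feature sets
# and retrieve it as an offline (training) dataset
features = ['events.*',
            'transactions.amount_avg_2h',
            'transactions.amount_sum_2h',
            'transactions.amount_count_2h',
            'transactions.amount_max_2h']
vector = fstore.FeatureVector('transactions-fraud-preview', features, label_feature='labels.label')
resp = fstore.get_offline_features(vector)
resp.to_dataframe().head(3)
###Output
_____no_output_____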
###Markdown
Step 3 - Deploy a real-time pipelineWhen dealing with real-time aggregation, it's important to be able to update these aggregations in real-time.For this purpose, you'll create live serving functions that update the online feature store of the `transactions` FeatureSet and `Events` FeatureSet.Using MLRun's `serving` runtime, create a nuclio function loaded with the feature set's computational graph definition and an `HttpSource` to define the HTTP trigger.Notice that the implementation below does not require any rewrite of the pipeline logic. 3.1 - Transactions Transactions - deploy the feature set live endpoint
###Code
# Create iguazio v3io stream and transactions push API endpoint
transaction_stream = f'v3io:///projects/{project.name}/streams/transaction'
transaction_pusher = mlrun.datastore.get_stream_pusher(transaction_stream)
# Define the source stream trigger (use v3io streams)
# Define the `key` and `time` fields (extracted from the Json message).
source = mlrun.datastore.sources.StreamSource(path=transaction_stream , key_field='source', time_field='timestamp')
# Deploy the transactions feature set's ingestion service over a real-time (Nuclio) serverless function
# you can use the run_config parameter to pass function/service specific configuration
transaction_set_endpoint = fstore.deploy_ingestion_service(featureset=transaction_set, source=source)
###Output
> 2022-03-16 05:45:43,035 [info] Starting remote function deploy
2022-03-16 05:45:43 (info) Deploying function
2022-03-16 05:45:43 (info) Building
2022-03-16 05:45:43 (info) Staging files and preparing base images
2022-03-16 05:45:43 (warn) Python 3.6 runtime is deprecated and will soon not be supported. Please migrate your code and use Python 3.7 runtime (`python:3.7`) or higher
2022-03-16 05:45:43 (info) Building processor image
2022-03-16 05:47:03 (info) Build complete
2022-03-16 05:47:08 (info) Function deploy complete
> 2022-03-16 05:47:08,835 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-fraud-demo-admin-transactions-ingest.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['fraud-demo-admin-transactions-ingest-fraud-demo-admin.default-tenant.app.xtvtjecfcssi.iguazio-cd1.com/']}
###Markdown
Transactions - test the feature set HTTP endpoint By defining the `transactions` feature set you can now use MLRun and Storey to deploy it as a live endpoint, ready to ingest new data!Using MLRun's `serving` runtime, create a nuclio function loaded with the feature set's computational graph definition and an `HttpSource` to define the HTTP trigger.
###Code
import requests
import json
# Select a sample from the dataset and serialize it to JSON
transaction_sample = json.loads(transactions_data.sample(1).to_json(orient='records'))[0]
transaction_sample['timestamp'] = str(pd.Timestamp.now())
transaction_sample
# Post the sample to the ingestion endpoint
requests.post(transaction_set_endpoint, json=transaction_sample).text
###Output
_____no_output_____
###Markdown
3.2 - User events User events - deploy the feature set live endpointDeploy the events feature set's ingestion service using the feature set and all the previously defined resources.
###Code
# Create iguazio v3io stream and transactions push API endpoint
events_stream = f'v3io:///projects/{project.name}/streams/events'
events_pusher = mlrun.datastore.get_stream_pusher(events_stream)
# Define the source stream trigger (use v3io streams)
# Define the `key` and `time` fields (extracted from the Json message).
source = mlrun.datastore.sources.StreamSource(path=events_stream , key_field='source', time_field='timestamp')
# Deploy the transactions feature set's ingestion service over a real-time (Nuclio) serverless function
# you can use the run_config parameter to pass function/service specific configuration
events_set_endpoint = fstore.deploy_ingestion_service(featureset=user_events_set, source=source)
###Output
> 2022-03-16 05:47:09,035 [info] Starting remote function deploy
2022-03-16 05:47:09 (info) Deploying function
2022-03-16 05:47:09 (info) Building
2022-03-16 05:47:09 (info) Staging files and preparing base images
2022-03-16 05:47:09 (warn) Python 3.6 runtime is deprecated and will soon not be supported. Please migrate your code and use Python 3.7 runtime (`python:3.7`) or higher
2022-03-16 05:47:09 (info) Building processor image
###Markdown
User events - test the feature set HTTP endpoint
###Code
# Select a sample from the events dataset and serialize it to JSON
user_events_sample = json.loads(user_events_data.sample(1).to_json(orient='records'))[0]
user_events_sample['timestamp'] = str(pd.Timestamp.now())
user_events_sample
# Post the sample to the ingestion endpoint
requests.post(events_set_endpoint, json=user_events_sample).text
###Output
_____no_output_____
###Markdown
Part 1: Data Ingestion In this notebook we will learn how to **Ingest** different data sources to our **Feature Store**. Specifically, this patient data has been used successfully to help treat hospitalized COVID-19 patients before their condition becomes severe or critical. To do this we will use a medical dataset which includes three types of data: - **Healthcare systems**: Batch-updated dataset containing different lab test results (e.g., blood test results).- **Patient Records**: Static dataset containing general patient details.- **Real-time sensors**: Real-time patient metric monitoring sensors. We will walk through the creation of an ingestion pipeline for each data source with all the needed preprocessing and validation. We will run the pipeline locally within the notebook and then launch a real-time function to **ingest live data** or schedule a cron to run the task when needed. Environment SetupSince our work is done within a project scope, first define the project itself for all our MLRun work in this notebook.
###Code
import mlrun
from os import getenv
mlrun.set_environment(project='fsdemo', user_project=True)
# location of the output data files
data_path = f"{getenv('V3IO_HOME_URL')}/demos/feature-store/data/"
def move_timestamps(df, shift='0s'):
''' Update timestamps to current time so we can see live aggregations '''
now = pd.to_datetime('now')
max_time = df['timestamp'].max()
time_shift = now-max_time
tmp_df = df.copy()
tmp_df['timestamp'] = tmp_df['timestamp'].apply(lambda t: t + time_shift + pd.to_timedelta(shift))
return tmp_df
###Output
_____no_output_____
###Markdown
Create Ingestion Pipeline With MLRunIn this section we will ingest the lab measurements data using MLRun and Storey. Storey is the underlying implementation of the feature store used by MLRun. It is the engine that allows you to define and execute complex graphs that create the feature engineering pipeline. With Storey, you can define sources, transformations, and targets; many actions are available as part of the Storey library, and you can define additional actions easily. We will see these custom actions in later sections.For the execution, it is also possible to use Spark. The main difference between Storey and Spark pipelines is that Storey blocks are built for real-time workloads while Spark is more batch oriented. We will now do the following:- Create the `measurements` FeatureSet- Define the preprocessing graph, including aggregations- Ingest the data using the defined pipeline
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fs
# Import MLRun's Data Sources to set the wanted ingestion pipeline
from mlrun.datastore.sources import CSVSource, ParquetSource, HttpSource
# Import storey so it will be available on our scope
# when testing the pipeline
import storey
# Define the Lab Measurements FeatureSet
measurements_set = fs.FeatureSet("measurements",
entities=[fs.Entity("patient_id")],
timestamp_key='timestamp',
description="various patient health measurements")
# Get FeatureSet computation graph
measurements_graph = measurements_set.graph
###Output
_____no_output_____
###Markdown
Define the processing pipeline- Transformation function- Sliding window aggregation- Set targets (NoSQL and Parquet)
###Code
# Import pandas and load the sample CSV and load it as a datasource
# for our ingestion
import pandas as pd
measurements_df = pd.read_csv('https://s3.wasabisys.com/iguazio/data/patients/measurements.csv', index_col=0)
measurements_df['timestamp'] = pd.to_datetime(measurements_df['timestamp'])
measurements_df['timestamp'] = measurements_df['timestamp'].astype("datetime64[ms]")
measurements_df = pd.concat([move_timestamps(measurements_df, '-1h'), move_timestamps(measurements_df)]) # update timestamps
###Output
_____no_output_____
###Markdown
Take a look at the measurements dataset. This dataset includes a single measurement per row. The measurement type is defined by the `source` and `parameter` columns. We would like to transform this data so that each patient has multiple measurement columns. To do that, we will need to create a new column for each `source` and `parameter` combination. For example, if `source` is 3 and `parameter` is 0, then our transformed dataset will have the measurement value in a new feature named `sp_3_0`.Following that, we will create a sliding window aggregation that averages the values across that time window.
###Code
measurements_df.head()
###Output
_____no_output_____
###Markdown
The following code performs the transformation, adds the aggregation and sets the target to store the values to a NoSQL database for online retrieval and parquet files for batch processing.
###Code
# Define transform to create sparse dataset for aggregation
# adding an extra column for the specific source-parameter pair's measurement
# ex: source=3, parameter=4, measurement=100 -> add extra column sp_3_4=100
def transform(event):
event["_".join(['sp', str(event["source"]), str(event["parameter"])])] = event["measurement"]
return event
# Define Measurement FeatureSet pipeline
measurements_graph.to(
"storey.Map", _fn="transform"
)
# Get the available source, parameter pairs for our aggregation
sps = list(measurements_df.apply(lambda x: '_'.join(['sp', str(x['source']), str(x['parameter'])]), axis=1).unique())
# Add aggregations on top of the created sparse
# features by the transform function
for col in sps:
measurements_set.add_aggregation(name=f'agg_{col}',
column=col,
operations=['avg'],
windows='1h',
period='30m')
# Add default (NoSQL via KV and Parquet) targets to save
# the ingestion results to
measurements_set.set_targets()
###Output
_____no_output_____
###Markdown
You can plot the graph to visualize the pipeline:
###Code
# Plot the ingestion pipeline we defined
measurements_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Run ingestion task using MLRun & StoreyIn order to ingest the dataframe to the featureset, use the `ingest` function.
###Code
# Use our loaded DF as the datasource and ingest it through
# the defined pipeline
resp = fs.ingest(measurements_set, measurements_df,
infer_options=fs.InferOptions.default())
resp.head()
# Save the FeatureSet and pipeline definition
measurements_set.save()
###Output
_____no_output_____
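###Markdown
To sanity-check what was just ingested, the online (NoSQL) target can be queried per patient through a feature vector and an online feature service. This is an illustrative sketch: the vector name, the aggregated feature name, and the patient id below are assumptions.
###Code
# Query the online feature store for a single patient's aggregated measurement
vector = fs.FeatureVector('measurements-preview', ['measurements.agg_sp_3_0_avg_1h'])
svc = fs.get_online_feature_service(vector)
try:
    print(svc.get([{'patient_id': '838-21-8151'}]))
finally:
    svc.close()
###Output
_____no_output_____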
###Markdown
Ingest Patient Details Features In this section we will use MLRun to create our patient details datasource. We will do the following:- Create a `patient_details` FeatureSet- Add preprocessing transformations to the pipeline - Map ages to buckets and One Hot Encode them - Impute missing values- Test the processing pipeline with sample data- Run ingestion pipeline on top of the cluster Create the FeatureSet
###Code
# add feature set without time column (stock ticker metadata)
patients_set = fs.FeatureSet("patient_details", entities=[fs.Entity("patient_id")],
description="personal and medical patient details")
# Get FeatureSet computation graph
graph = patients_set.spec.graph
###Output
_____no_output_____
###Markdown
Define the computation pipeline
###Code
# Define age buckets for our age value mapping
personal_details = {'age': {'ranges': [{'range': [0, 3], "value": "toddler"},
{'range': [3, 18], "value": "child"},
{'range': [18, 65], "value": "adult"},
{'range': [65, 120], "value": "elder"}]}}
# Define one hot encoding values map
one_hot_encoder_mapping = {'age_mapped': ['toddler', 'child', 'adult', 'elder']}
# Import MLRun's FeatureStore steps for easy
# use in our pipeline
from mlrun.feature_store.steps import *
# Define the pipeline for our FeatureSet
graph.to(MapValues(mapping=personal_details, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))\
.to(Imputer(method='values', default_value=1, mapping={}))
# Add default NoSQL & Parquet ingestion targets
patients_set.set_targets()
# Plot the FeatureSet pipeline
patients_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Test the Feature transformation pipelineCreating a transformation pipeline requires some trial and error. Therefore, it is useful to run the pipeline in memory without storing the resultant data. For this purpose, `infer` is used. This function receives as input any sample DataFrame, performs all the graph steps and outputs the transformed DataFrame.
###Code
# Load the sample patient details data
patients_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
# Run local ingestion test
fs.infer(patients_set, patients_df.head())
###Output
_____no_output_____
###Markdown
Save the FeatureSet and run full ingestion taskOnce you are satisfied with the transformation pipeline, ingest that full DataFrame and store the data.
###Code
# Save the FeatureSet
patients_set.save()
# Run Ingestion task
resp = fs.ingest(patients_set, patients_df,
infer_options=fs.InferOptions.default())
###Output
_____no_output_____
###Markdown
Start Immediate or Scheduled Ingestion Job (over Kubernetes)Another useful method to ingest data, is by creating a Kubernetes job. This may be necessary to process large amounts of data as well as to process any recurring data. With MLRun it is easy to take the pipeline and run it as a job. This is done by:1. Define a source, specifically here we define a parquet file source2. Define a configuration where `local` is set to `False`3. Mount to the provisioned storage by calling `auto_mount`4. Run `ingest` with the source and run configuration
###Code
source = ParquetSource('pq', 'https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
config = fs.RunConfig(local=False).apply(mlrun.platforms.auto_mount())
fs.ingest(patients_set, source, run_config=config)
###Output
> 2021-05-06 15:27:08,769 [info] starting run patient_details_ingest uid=76f197b8ab3347d1b995a5ea55d0a98a DB=http://mlrun-api:8080
> 2021-05-06 15:27:09,022 [info] Job is running in the background, pod: patient-details-ingest-g9hgn
> 2021-05-06 15:27:15,073 [info] starting ingestion task to store://feature-sets/fsdemo-admin/patient_details:latest
> 2021-05-06 15:27:15,745 [info] ingestion task completed, targets:
> 2021-05-06 15:27:15,746 [info] [{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-admin/fs/parquet/sets/patient_details-latest.parquet', 'status': 'created', 'updated': '2021-05-06T15:27:15.432576+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-admin/fs/nosql/sets/patient_details-latest', 'status': 'created', 'updated': '2021-05-06T15:27:15.432947+00:00'}]
> 2021-05-06 15:27:15,936 [info] run executed, status=completed
final state: completed
###Markdown
Real-time Early-Sense Sensor Ingestion (HTTP or Stream Processing With Nuclio) In this section we will use MLRun to create our Early Sense Sensor datasource. We will do the following:- Create early sense FeatureSet- Add Preprocessing transformations to the Pipeline using custom functions - Drop and Rename columns - Aggregations- Add Feature Validator to detect bad sensor readings- Test the processing pipeline with sample data- Deploy the FeatureSet ingestion service as a live rest endpoint
###Code
early_sense_set = fs.FeatureSet("early_sense", entities=[fs.Entity("patient_id")], timestamp_key='timestamp',
description="real time patient bed sensor data")
###Output
_____no_output_____
###Markdown
Define data validation & quality policyWe can define validations on the feature level. For example, here we define a validation to check that the heart rate value is between 0 and 220 and the respiratory rate is between 0 and 25.
###Code
from mlrun.features import MinMaxValidator
early_sense_set["hr"] = fs.Feature(validator = MinMaxValidator(min=0, max=220, severity="info"))
early_sense_set["rr"] = fs.Feature(validator = MinMaxValidator(min=0, max=25, severity="info"))
###Output
_____no_output_____
###Markdown
Define custom processing classesIn the previous sections we used transformation steps that are available as part of Storey. Here we show how to create custom transformation classes. We will later run these functions as part of a Nuclio serverless real-time function, therefore, we also use the nuclio `start-code` and `end-code` comments.
###Code
# nuclio: start-code
# We will import storey here too so it will
# be included in our function code (within the nuclio comment block)
import json
import storey
from typing import List, Dict
# The custom functions are based on `storey.MapClass`
# when they are called in the graph the `do(self, event)`
# function will be activated.
# A to_dict(self) function is also required by MLRun
# to allow the class creation on remote functions
class DropColumns(storey.MapClass):
def __init__(self, columns: List[str], **kwargs):
super().__init__(**kwargs)
self.columns = columns
def do(self, event):
for col in self.columns:
if col in event:
del event[col]
return event
def to_dict(self):
return {
"class_name": "DropColumns",
"name": self.name or "DropColumns",
"class_args": {
"columns": self.columns
},
}
class RenameColumns(storey.MapClass):
def __init__(self, mapping: Dict[str, str], **kwargs):
super().__init__(**kwargs)
self.mapping = mapping
def do(self, event):
for old_col, new_col in self.mapping.items():
try:
event[new_col] = event.pop(old_col)
except Exception as e:
print(f'{old_col} doesnt exist')
return event
def to_dict(self):
return {
"class_name": "RenameColumns",
"name": self.name or "RenameColumns",
"class_args": {"mapping": self.mapping},
}
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Define the Real-Time PipelineDefine the transformation pipeline below. This is done just like in the previous sections.
###Code
# Configure the list of columns to drop from
# the raw data
drop_columns = ['hr_is_error',
'rr_is_error',
'spo2_is_error',
'movements_is_error',
'turn_count_is_error',
'is_in_bed_is_error']
# Define the computational graph including our custom functions
early_sense_set.graph.to(DropColumns(drop_columns), after='start')\
.to(RenameColumns(mapping={'bad': 'bed'}))
# Add real-time aggregations on top of our sensor readings
for col in ['hr', 'rr', 'spo2', 'movements', 'turn_count']:
early_sense_set.add_aggregation(col + "_h", col, ['avg', 'max', 'min'], "1h")
early_sense_set.add_aggregation(col + "_d", col, ['avg', 'max', 'min'], "1d")
early_sense_set.add_aggregation('in_bed_h', 'is_in_bed', ['avg'], "1h")
early_sense_set.add_aggregation('in_bed_d', 'is_in_bed', ['avg'], "1d")
# Set NoSQL and Parquet default targets
early_sense_set.set_targets()
# Plot the pipeline
early_sense_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Test/debug the real-time pipeline locally in the notebook
###Code
# infer schema + stats, show the final feature set (after the data pipeline)
early_sense_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/early_sense.parquet')
early_sense_df['timestamp'] = pd.to_datetime(early_sense_df['timestamp'])
early_sense_df = move_timestamps(early_sense_df) # update timestamps
fs.infer(early_sense_set, early_sense_df.head())
# Run ingest pipeline
df=fs.ingest(early_sense_set, early_sense_df)
# Save the early-sense Featureset
early_sense_set.save()
# print the FeatureSet spec
print(early_sense_set.status.targets.to_dict())
###Output
[{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-admin/fs/parquet/sets/early_sense-latest.parquet', 'status': 'created', 'updated': '2021-05-06T15:27:46.222973+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-admin/fs/nosql/sets/early_sense-latest', 'status': 'created', 'updated': '2021-05-06T15:27:46.223349+00:00'}]
###Markdown
Deploy as Real-Time Stream Processing Function (Nuclio Serverless)Features are not static. For example, it is common that features include different aggregations that need to be updated as data continues to flow. A real-time pipeline requires this data to be up to date. Therefore, we need a convenient way to ingest data, not just as a batch, but per specific input.MLRun can convert any code to a real-time serverless function, including the pipeline. This is done by performing the following steps:1. Define a source, in this case an HTTP source2. Convert the previously defined code to a serving function3. Create a configuration to run the function4. Deploy an ingestion service with the FeatureSet, source, and the configuration
###Code
# Set a new HTTPSource; this tells our ingestion service
# to set up a Nuclio function to act as the REST endpoint
# through which we will receive the data
source = HttpSource(key_field='patient_id', time_field='timestamp')
# Take the relevant code parts from this notebook and create
# an MLRun function from them so we can run the pipeline
# as a Nuclio function
func = mlrun.code_to_function("ingest", kind="serving")
nuclio_config = fs.RunConfig(function=func, local=False).apply(mlrun.platforms.auto_mount())
# Deploy the Online ingestion service using the pipeline definition from before
# with our new HTTP Source and our define Function
server = fs.deploy_ingestion_service(early_sense_set, source, run_config=nuclio_config)
###Output
> 2021-05-06 15:29:52,032 [info] Starting remote function deploy
2021-05-06 15:29:52 (info) Deploying function
{'level': 'info', 'message': 'Deploying function', 'name': 'fsdemo-admin-ingest', 'time': 1620314992169.7139}
2021-05-06 15:29:52 (info) Building
{'level': 'info', 'message': 'Building', 'name': 'fsdemo-admin-ingest', 'time': 1620314992169.7478, 'versionInfo': 'Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3'}
2021-05-06 15:29:52 (info) Staging files and preparing base images
{'level': 'info', 'message': 'Staging files and preparing base images', 'name': 'deployer', 'time': 1620314992237.7905}
2021-05-06 15:29:52 (info) Building processor image
{'imageName': 'fsdemo-admin-fsdemo-admin-ingest-processor:latest', 'level': 'info', 'message': 'Building processor image', 'name': 'deployer', 'time': 1620314992238.347}
2021-05-06 15:29:55 (info) Build complete
{'level': 'info', 'message': 'Build complete', 'name': 'deployer', 'result': {'Image': 'fsdemo-admin-fsdemo-admin-ingest-processor:latest', 'UpdatedFunctionConfig': {'metadata': {'annotations': {'nuclio.io/generated_by': 'function generated from https://github.com/mlrun/mlrun#004d7b6797e3292525d220bb4389470342ebe752:ingest.ipynb'}, 'labels': {'mlrun/class': 'serving', 'nuclio.io/project-name': 'fsdemo-admin'}, 'name': 'fsdemo-admin-ingest', 'namespace': 'default-tenant'}, 'spec': {'build': {'baseImage': 'mlrun/mlrun:0.6.3-rc9', 'codeEntryType': 'sourceCode', 'functionSourceCode': 'IyBHZW5lcmF0ZWQgYnkgbnVjbGlvLmV4cG9ydC5OdWNsaW9FeHBvcnRlcgoKaW1wb3J0IGpzb24KaW1wb3J0IHN0b3JleQpmcm9tIHR5cGluZyBpbXBvcnQgTGlzdCwgRGljdAoKCmNsYXNzIERyb3BDb2x1bW5zKHN0b3JleS5NYXBDbGFzcyk6CiAgICBkZWYgX19pbml0X18oc2VsZiwgY29sdW1uczogTGlzdFtzdHJdLCAqKmt3YXJncyk6CiAgICAgICAgc3VwZXIoKS5fX2luaXRfXygqKmt3YXJncykKICAgICAgICBzZWxmLmNvbHVtbnMgPSBjb2x1bW5zCgogICAgZGVmIGRvKHNlbGYsIGV2ZW50KToKICAgICAgICBmb3IgY29sIGluIHNlbGYuY29sdW1uczoKICAgICAgICAgICAgaWYgY29sIGluIGV2ZW50OgogICAgICAgICAgICAgICAgZGVsIGV2ZW50W2NvbF0KICAgICAgICByZXR1cm4gZXZlbnQKCiAgICBkZWYgdG9fZGljdChzZWxmKToKICAgICAgICByZXR1cm4gewogICAgICAgICAgICAiY2xhc3NfbmFtZSI6ICJEcm9wQ29sdW1ucyIsCiAgICAgICAgICAgICJuYW1lIjogc2VsZi5uYW1lIG9yICJEcm9wQ29sdW1ucyIsCiAgICAgICAgICAgICJjbGFzc19hcmdzIjogewogICAgICAgICAgICAgICAgImNvbHVtbnMiOiBzZWxmLmNvbHVtbnMKICAgICAgICAgICAgfSwKICAgICAgICB9CgpjbGFzcyBSZW5hbWVDb2x1bW5zKHN0b3JleS5NYXBDbGFzcyk6CiAgICBkZWYgX19pbml0X18oc2VsZiwgbWFwcGluZzogRGljdFtzdHIsIHN0cl0sICoqa3dhcmdzKToKICAgICAgICBzdXBlcigpLl9faW5pdF9fKCoqa3dhcmdzKQogICAgICAgIHNlbGYubWFwcGluZyA9IG1hcHBpbmcKCiAgICBkZWYgZG8oc2VsZiwgZXZlbnQpOgogICAgICAgIGZvciBvbGRfY29sLCBuZXdfY29sIGluIHNlbGYubWFwcGluZy5pdGVtcygpOgogICAgICAgICAgICB0cnk6CiAgICAgICAgICAgICAgICBldmVudFtuZXdfY29sXSA9IGV2ZW50LnBvcChvbGRfY29sKQogICAgICAgICAgICBleGNlcHQgRXhjZXB0aW9uIGFzIGU6CiAgICAgICAgICAgICAgICBwcmludChmJ3tvbGRfY29sfSBkb2VzbnQgZXhpc3QnKQogICAgICAgIHJldHVybiBldmVudAoKICAgIGRlZiB0b19kaWN0KHNlbGYpOgogICAgICAgIHJldHVybiB7CiAgICAgICAgICAgICJjbGFzc19uYW1lIjogIlJlbmFtZUNvbHVtbnMiLAogICAgICAgICAgICAibmFtZSI6IHNlbGYubmFtZSBvciAiUmVuYW1lQ29sdW1ucyIsCiAgICAgICAgICAgICJjbGFzc19hcmdzIjogeyJtYXBwaW5nIjogc2VsZi5tYXBwaW5nfSwKICAgICAgICB9CgoKZnJvbSBtbHJ1bi5ydW50aW1lcyBpbXBvcnQgbnVjbGlvX2luaXRfaG9vawpkZWYgaW5pdF9jb250ZXh0KGNvbnRleHQpOgogICAgbnVjbGlvX2luaXRfaG9vayhjb250ZXh0LCBnbG9iYWxzKCksICdzZXJ2aW5nX3YyJykKCmRlZiBoYW5kbGVyKGNvbnRleHQsIGV2ZW50KToKICAgIHJldHVybiBjb250ZXh0Lm1scnVuX2hhbmRsZXIoY29udGV4dCwgZXZlbnQpCg==', 'noBaseImagesPull': True, 'offline': True, 'registry': 'docker-registry.default-tenant.app.yh30.iguazio-c0.com'}, 'env': [{'name': 'V3IO_API', 'value': 'v3io-webapi.default-tenant.svc:8081'}, {'name': 'V3IO_USERNAME', 'value': 'admin'}, {'name': 'V3IO_ACCESS_KEY', 'value': '142a98fa-bef9-4095-b2d0-cab733f53238'}, {'name': 'MLRUN_LOG_LEVEL', 'value': 'DEBUG'}, {'name': 'MLRUN_DEFAULT_PROJECT', 'value': 'fsdemo-admin'}, {'name': 'MLRUN_DBPATH', 'value': 'http://mlrun-api:8080'}, {'name': 'MLRUN_NAMESPACE', 'value': 'default-tenant'}, {'name': 'SERVING_SPEC_ENV', 'value': '{"function_uri": "fsdemo-admin/ingest", "version": "v2", "parameters": {"infer_options": 0, "featureset": "store://feature-sets/fsdemo-admin/early_sense", "source": {"kind": "http", "path": "None", "key_field": "patient_id", "time_field": "timestamp", "online": true}}, "graph": {"states": {"DropColumns": {"kind": "task", "class_name": "DropColumns", "class_args": {"columns": ["hr_is_error", "rr_is_error", "spo2_is_error", "movements_is_error", 
"turn_count_is_error", "is_in_bed_is_error"]}}, "RenameColumns": {"kind": "task", "class_name": "RenameColumns", "class_args": {"mapping": {"bad": "bed"}}, "after": ["DropColumns"]}, "Aggregates": {"kind": "task", "class_name": "storey.AggregateByKey", "class_args": {"aggregates": [{"name": "hr", "column": "hr", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "rr", "column": "rr", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "spo2", "column": "spo2", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "movements", "column": "movements", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "turn_count", "column": "turn_count", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "in_bed", "column": "is_in_bed", "operations": ["avg"], "windows": ["1h", "1d"]}], "table": "."}, "after": ["RenameColumns"]}}}, "load_mode": null, "functions": {}, "graph_initializer": "mlrun.feature_store.ingestion.featureset_initializer", "error_stream": null, "track_models": null}'}], 'eventTimeout': '', 'handler': '01-ingest-datasources:handler', 'maxReplicas': 4, 'minReplicas': 1, 'platform': {}, 'resources': {}, 'runtime': 'python:3.6', 'securityContext': {}, 'serviceType': 'NodePort', 'triggers': {'default-http': {'attributes': {'serviceType': 'NodePort'}, 'class': '', 'kind': 'http', 'maxWorkers': 1, 'name': 'default-http'}}, 'volumes': [{'volume': {'flexVolume': {'driver': 'v3io/fuse', 'options': {'accessKey': '142a98fa-bef9-4095-b2d0-cab733f53238'}}, 'name': 'v3io'}, 'volumeMount': {'mountPath': '/v3io', 'name': 'v3io'}}, {'volume': {'flexVolume': {'driver': 'v3io/fuse', 'options': {'accessKey': '142a98fa-bef9-4095-b2d0-cab733f53238'}}, 'name': 'v3io'}, 'volumeMount': {'mountPath': '/User', 'name': 'v3io', 'subPath': 'users/admin'}}]}}}, 'time': 1620314995613.7964}
> 2021-05-06 15:30:03,749 [info] function deployed, address=default-tenant.app.yh30.iguazio-c0.com:31610
###Markdown
Test the function by sending data to the HTTP endpoint
###Code
test_data = {'patient_id': '838-21-8151',
'bad': 38,
'department': '01e9fe31-76de-45f0-9aed-0f94cc97bca0',
'room': 1,
'hr': 220.0,
'hr_is_error': True,
'rr': 5,
'rr_is_error': True,
'spo2': 85,
'spo2_is_error': True,
'movements': 0.0,
'movements_is_error': True,
'turn_count': 0.0,
'turn_count_is_error': True,
'is_in_bed': 1,
'is_in_bed_is_error': False,
'timestamp': 1606843455.906352
}
import requests
import json
response = requests.post(server, json=test_data)
response.text
###Output
_____no_output_____
###Markdown
Ingest labelsFinally, we define the label data, which will be useful in the next notebook where we train a model Create Labels Set
###Code
# Define labels metric from the early sense error data
error_columns = [c for c in early_sense_df.columns if 'error' in c]
labels = early_sense_df.loc[:, ['patient_id', 'timestamp'] + error_columns]
labels['label'] = labels.apply(lambda x: sum([x[c] for c in error_columns])>(len(error_columns)*0.7), axis=1)
labels.to_parquet(data_path + 'labels.parquet')
#labels_df = pd.read_parquet('labels.parquet')
labels_set = fs.FeatureSet("labels", entities=[fs.Entity("patient_id")], timestamp_key='timestamp',
description="training labels")
labels_set.set_targets()
df = fs.infer(labels_set, data_path + 'labels.parquet')
df.head()
df = fs.ingest(labels_set, data_path + 'labels.parquet')
labels_set.save()
###Output
_____no_output_____
###Markdown
Part 1: Data Ingestion This demo showcases financial fraud prevention and using the MLRun feature store to define complex features that help identify fraud. Fraud prevention specifically is a challenge as it requires processing raw transaction and events in real-time and being able to quickly respond and block transactions before they occur.To address this, we create a development pipeline and a production pipeline. Both pipelines share the same feature engineering and model code, but serve data very differently. Furthermore, we automate the data and model monitoring process, identify drift and trigger retraining in a CI/CD pipeline. This process is described in the diagram below: The raw data is described as follows:| TRANSACTIONS || &x2551; |USER EVENTS || |-----------------|----------------------------------------------------------------|----------|-----------------|----------------------------------------------------------------|| **age** | age group value 0-6. Some values are marked as U for unknown | &x2551; | **source** | The party/entity related to the event || **gender** | A character to define the age | &x2551; | **event** | event, such as login or password change || **zipcodeOri** | ZIP code of the person originating the transaction | &x2551; | **timestamp** | The date and time of the event || **zipMerchant** | ZIP code of the merchant receiving the transaction | &x2551; | | || **category** | category of the transaction (e.g., transportation, food, etc.) | &x2551; | | || **amount** | the total amount of the transaction | &x2551; | | || **fraud** | whether the transaction is fraudulent | &x2551; | | || **timestamp** | the date and time in which the transaction took place | &x2551; | | || **source** | the ID of the party/entity performing the transaction | &x2551; | | || **target** | the ID of the party/entity receiving the transaction | &x2551; | | || **device** | the device ID used to perform the transaction | &x2551; | | | This notebook introduces how to **Ingest** different data sources to the **Feature Store**.The following FeatureSets will be created:- **Transactions**: Monetary transactions between a source and a target.- **Events**: Account events such as account login or a password change.- **Label**: Fraud label for the data.By the end of this tutorial you’ll learn how to:- Create an ingestion pipeline for each data source.- Define preprocessing, aggregation and validation of the pipeline.- Run the pipeline locally within the notebook.- Launch a real-time function to ingest live data.- Schedule a cron to run the task when needed.
###Code
project_name = 'fraud-demo'
import mlrun
# Initialize the MLRun project object
project = mlrun.get_or_create_project(project_name, context="./", user_project=True)
###Output
> 2021-10-28 11:25:47,346 [info] loaded project fraud-demo from MLRun DB
###Markdown
Step 1 - Fetch, Process and Ingest our datasets 1.1 - Transactions Transactions
###Code
# Helper functions to adjust the timestamps of our data
# while keeping the order of the selected events and
# the relative distance from one event to the other
def date_adjustment(sample, data_max, new_max, old_data_period, new_data_period):
'''
Adjust a specific sample's date according to the original and new time periods
'''
sample_dates_scale = ((data_max - sample) / old_data_period)
sample_delta = new_data_period * sample_dates_scale
new_sample_ts = new_max - sample_delta
return new_sample_ts
def adjust_data_timespan(dataframe, timestamp_col='timestamp', new_period='2d', new_max_date_str='now'):
'''
Adjust the dataframe timestamps to the new time period
'''
# Calculate old time period
data_min = dataframe.timestamp.min()
data_max = dataframe.timestamp.max()
old_data_period = data_max-data_min
# Set new time period
new_time_period = pd.Timedelta(new_period)
new_max = pd.Timestamp(new_max_date_str)
new_min = new_max-new_time_period
new_data_period = new_max-new_min
# Apply the timestamp change
df = dataframe.copy()
df[timestamp_col] = df[timestamp_col].apply(lambda x: date_adjustment(x, data_max, new_max, old_data_period, new_data_period))
return df
import pandas as pd
# Fetch the transactions dataset from the server
transactions_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/data.csv', parse_dates=['timestamp'])
# Adjust the samples timestamp for the past 2 days
transactions_data = adjust_data_timespan(transactions_data.sample(50000), new_period='2d')
# Preview
transactions_data.head(3)
###Output
_____no_output_____
###Markdown
Transactions - Create a FeatureSet and Preprocessing PipelineCreate the FeatureSet (data pipeline) definition for the **credit transaction processing** which describes the offline/online data transformations and aggregations.The feature store will automatically add an offline `parquet` target and an online `NoSQL` target by using `set_targets()`.The data pipeline consists of:* **Extracting** the data components (hour, day of week)* **Mapping** the age values* **One hot encoding** for the transaction category and the gender* **Aggregating** the amount (avg, sum, count, max over 2/12/24 hour time windows)* **Aggregating** the transactions per category (over 14 days time windows)* **Writing** the results to **offline** (Parquet) and **online** (NoSQL) targets
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fstore
from mlrun.feature_store.steps import OneHotEncoder, MapValues, DateExtractor
# Define the transactions FeatureSet
transaction_set = fstore.FeatureSet("transactions",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="transactions feature set")
# Define and add value mapping
main_categories = ["es_transportation", "es_health", "es_otherservices",
"es_food", "es_hotelservices", "es_barsandrestaurants",
"es_tech", "es_sportsandtoys", "es_wellnessandbeauty",
"es_hyper", "es_fashion", "es_home", "es_contents",
"es_travel", "es_leisure"]
# One Hot Encode the newly defined mappings
one_hot_encoder_mapping = {'category': main_categories,
'gender': list(transactions_data.gender.unique())}
# Define the graph steps
transaction_set.graph\
.to(DateExtractor(parts = ['hour', 'day_of_week'], timestamp_col = 'timestamp'))\
.to(MapValues(mapping={'age': {'U': '0'}}, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))
# Add aggregations for 2, 12, and 24 hour time windows
transaction_set.add_aggregation(name='amount',
column='amount',
operations=['avg','sum', 'count','max'],
windows=['2h', '12h', '24h'],
period='1h')
# Add the category aggregations over a 14 day window
for category in main_categories:
transaction_set.add_aggregation(name=category,column=f'category_{category}',
operations=['count'], windows=['14d'], period='1d')
# Add default (offline-parquet & online-nosql) targets
transaction_set.set_targets()
# Plot the pipeline so we can see the different steps
transaction_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
###Markdown
Transactions - Ingestion
###Code
# Ingest our transactions dataset through our defined pipeline
transactions_df = fstore.ingest(transaction_set, transactions_data,
infer_options=fstore.InferOptions.default())
transactions_df.head(3)
###Output
_____no_output_____
###Markdown
1.2 - User Events User Events - Fetching
###Code
# Fetch our user_events dataset from the server
user_events_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/events.csv',
index_col=0, quotechar="\'", parse_dates=['timestamp'])
# Adjust to the last 2 days to see the latest aggregations in our online feature vectors
user_events_data = adjust_data_timespan(user_events_data, new_period='2d')
# Preview
user_events_data.head(3)
###Output
_____no_output_____
###Markdown
User Events - Create a FeatureSet and Preprocessing PipelineNow we will define the events feature set.This is a fairly straightforward pipeline in which we only one-hot encode the event categories and save the data to the default targets.
###Code
user_events_set = fstore.FeatureSet("events",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="user events feature set")
# Define and add value mapping
events_mapping = {'event': list(user_events_data.event.unique())}
# One Hot Encode
user_events_set.graph.to(OneHotEncoder(mapping=events_mapping))
# Add default (offline-parquet & online-nosql) targets
user_events_set.set_targets()
# Plot the pipeline so we can see the different steps
user_events_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
###Markdown
User Events - Ingestion
###Code
# Ingestion of our newly created events feature set
events_df = fstore.ingest(user_events_set, user_events_data)
events_df.head(3)
###Output
_____no_output_____
###Markdown
Step 2 - Create a labels dataset for model training Label Set - Create a FeatureSetThis feature set contains the label for the fraud demo; it will be ingested directly into the default targets without any changes.
###Code
def create_labels(df):
labels = df[['fraud','source','timestamp']].copy()
labels = labels.rename(columns={"fraud": "label"})
labels['timestamp'] = labels['timestamp'].astype("datetime64[ms]")
labels['label'] = labels['label'].astype(int)
labels.set_index('source', inplace=True)
return labels
# Define the "labels" feature set
labels_set = fstore.FeatureSet("labels",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="training labels",
engine="pandas")
labels_set.graph.to(name="create_labels", handler=create_labels)
# specify only Parquet (offline) target since its not used for real-time
labels_set.set_targets(['parquet'], with_defaults=False)
labels_set.plot(with_targets=True)
###Output
_____no_output_____
###Markdown
Label Set - Ingestion
###Code
# Ingest the labels feature set
labels_df = fstore.ingest(labels_set, transactions_data)
labels_df.head(3)
###Output
1000
None
###Markdown
Step 3 - Deploy a real-time pipelineWhen dealing with real-time aggregation, it's important to be able to update these aggregations in real-time.For this purpose, we will create live serving functions that will update the online feature store of the `transactions` FeatureSet and `Events` FeatureSet.Using MLRun's `serving` runtime, we create a nuclio function loaded with our feature set's computational graph definition and an `HttpSource` to define the HTTP trigger.Notice that the implementation below does not require any rewrite of the pipeline logic. 3.1 - Transactions Transactions - Deploy our FeatureSet live endpoint
###Code
# Create iguazio v3io stream and transactions push API endpoint
transaction_stream = f'v3io:///projects/{project.name}/streams/transaction'
transaction_pusher = mlrun.datastore.get_stream_pusher(transaction_stream)
# Define the HTTP source to enable the HTTP trigger on our function and expose the endpoint.
# As with any other data source, we define the `key` and `time` fields here too.
http_source = mlrun.datastore.sources.HttpSource(key_field='source', time_field='timestamp')
transaction_set.spec.source = http_source
# Create a real-time serverless function definition to deploy the ingestion pipeline on.
# The serving runtime enables the deployment of our feature set's computational graph
function = (mlrun.new_function('ingest-transactions', kind='serving', image='mlrun/mlrun')).with_code(body=" ")
# Add stream trigger (must first create the stream)
function.add_v3io_stream_trigger(transaction_stream)
#
run_config = fstore.RunConfig(function=function, local=False).apply(mlrun.mount_v3io())
# Deploy the transactions feature set's ingestion service using the feature set
# and all the defined resources above.
transaction_set_endpoint = fstore.deploy_ingestion_service(featureset=transaction_set,
run_config=run_config)
###Output
> 2021-09-19 17:58:50,402 [info] Starting remote function deploy
2021-09-19 17:58:50 (info) Deploying function
2021-09-19 17:58:50 (info) Building
2021-09-19 17:58:50 (info) Staging files and preparing base images
2021-09-19 17:58:50 (info) Building processor image
2021-09-19 17:58:52 (info) Build complete
2021-09-19 17:59:01 (info) Function deploy complete
> 2021-09-19 17:59:01,461 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-fraud-demo-admin-ingest-transactions.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['fraud-demo-admin-ingest-transactions-fraud-demo-admin.default-tenant.app.jnewriujxdig.iguazio-cd1.com/']}
###Markdown
Transactions - Test the feature set HTTP endpoint By defining our `transactions` feature set we can now use MLRun and Storey to deploy it as a live endpoint, ready to ingest new data! Using MLRun's `serving` runtime, we created a Nuclio function loaded with our feature set's computational graph definition and an `HttpSource` to define the HTTP trigger.
###Code
import requests
import json
# Select a sample from the dataset and serialize it to JSON
transaction_sample = json.loads(transactions_data.sample(1).to_json(orient='records'))[0]
transaction_sample['timestamp'] = str(pd.Timestamp.now())
transaction_sample
# Post the sample to the ingestion endpoint
requests.post(transaction_set_endpoint, json=transaction_sample).text
###Output
_____no_output_____
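###Markdown
After posting a sample we can verify that the online feature store was updated by reading the same entity back through an online feature service. This is a sketch, assuming the standard feature-vector API and the default `<name>_<operation>_<window>` naming of the aggregations defined for the transactions set:
###Code
# Build a small feature vector over the transactions set
# (feature names assume the default <name>_<operation>_<window> convention)
vector = fstore.FeatureVector("transactions-sample-vector",
                              ["transactions.amount_count_2h",
                               "transactions.amount_sum_2h"])
vector.save()
svc = fstore.get_online_feature_service(vector)
# Query by the entity key of the sample we just posted
print(svc.get([{"source": transaction_sample["source"]}]))
svc.close()
###Output
_____no_output_____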
###Markdown
3.2 - User Events User Events - Deploy our FeatureSet live endpoint Deploy the events feature set's ingestion service using the feature set and all the previously defined resources.
###Code
# Create iguazio v3io stream and events push API endpoint
events_stream = f'v3io:///projects/{project.name}/streams/events'
events_pusher = mlrun.datastore.get_stream_pusher(events_stream)
# Create a `serving` "base function" to deploy the ingestion function on
# the serving runtimes enables the deployment of our feature set's computational graph
function = (mlrun.new_function('ingest-events', kind='serving', image='mlrun/mlrun')).with_code(body=" ")
# Add stream trigger
function.add_v3io_stream_trigger(events_stream)
#
run_config = fstore.RunConfig(function=function, local=False).apply(mlrun.mount_v3io())
# Deploy the events feature set's ingestion service using the feature set
# and all the defined resources above.
events_set_endpoint = fstore.deploy_ingestion_service(name="ingest-events", featureset=user_events_set,
source=http_source, run_config=run_config)
###Output
> 2021-09-19 17:59:01,795 [info] Starting remote function deploy
2021-09-19 17:59:02 (info) Deploying function
2021-09-19 17:59:02 (info) Building
2021-09-19 17:59:02 (info) Staging files and preparing base images
2021-09-19 17:59:02 (info) Building processor image
2021-09-19 17:59:03 (info) Build complete
> 2021-09-19 17:59:12,356 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-fraud-demo-admin-ingest-events.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['fraud-demo-admin-ingest-events-fraud-demo-admin.default-tenant.app.jnewriujxdig.iguazio-cd1.com/']}
###Markdown
User Events - Test the feature set HTTP endpoint
###Code
# Select a sample from the events dataset and serialize it to JSON
user_events_sample = json.loads(user_events_data.sample(1).to_json(orient='records'))[0]
user_events_sample['timestamp'] = str(pd.Timestamp.now())
user_events_sample
# Post the sample to the ingestion endpoint
requests.post(events_set_endpoint, json=user_events_sample).text
###Output
_____no_output_____
###Markdown
Part 1: Data Ingestion This demo showcases financial fraud prevention and using the MLRun feature store to define complex features that help identify fraud. Fraud prevention is a particular challenge because it requires processing raw transactions and events in real time and being able to quickly respond and block transactions before they occur. To address this, we create a development pipeline and a production pipeline. Both pipelines share the same feature engineering and model code, but serve data very differently. Furthermore, we automate the data and model monitoring process, identify drift and trigger retraining in a CI/CD pipeline. This process is described in the diagram below. The raw data is described as follows:

| TRANSACTIONS | | &#x2551; | USER EVENTS | |
|-----------------|----------------------------------------------------------------|----------|-----------------|----------------------------------------------------------------|
| **age** | age group value 0-6; some values are marked as U for unknown | &#x2551; | **source** | the party/entity related to the event |
| **gender** | a character defining the gender | &#x2551; | **event** | the event type, such as login or password change |
| **zipcodeOri** | ZIP code of the person originating the transaction | &#x2551; | **timestamp** | the date and time of the event |
| **zipMerchant** | ZIP code of the merchant receiving the transaction | &#x2551; | | |
| **category** | category of the transaction (e.g., transportation, food, etc.) | &#x2551; | | |
| **amount** | the total amount of the transaction | &#x2551; | | |
| **fraud** | whether the transaction is fraudulent | &#x2551; | | |
| **timestamp** | the date and time in which the transaction took place | &#x2551; | | |
| **source** | the ID of the party/entity performing the transaction | &#x2551; | | |
| **target** | the ID of the party/entity receiving the transaction | &#x2551; | | |
| **device** | the device ID used to perform the transaction | &#x2551; | | |

This notebook introduces how to **Ingest** different data sources to the **Feature Store**. The following FeatureSets will be created:
- **Transactions**: monetary transactions between a source and a target.
- **Events**: account events such as an account login or a password change.
- **Label**: fraud label for the data.

By the end of this tutorial you'll learn how to:
- Create an ingestion pipeline for each data source.
- Define preprocessing, aggregation and validation of the pipeline.
- Run the pipeline locally within the notebook.
- Launch a real-time function to ingest live data.
- Schedule a cron to run the task when needed.
###Code
project_name = 'fraud-demo'
import mlrun
# Initialize the MLRun project object
project = mlrun.get_or_create_project(project_name, context="./", user_project=True)
###Output
> 2022-03-16 05:45:07,703 [info] loaded project fraud-demo from MLRun DB
###Markdown
Step 1 - Fetch, Process and Ingest our datasets 1.1 - Transactions Transactions
###Code
# Helper functions to adjust the timestamps of our data
# while keeping the order of the selected events and
# the relative distance from one event to the other
def date_adjustment(sample, data_max, new_max, old_data_period, new_data_period):
'''
Adjust a specific sample's date according to the original and new time periods
'''
sample_dates_scale = ((data_max - sample) / old_data_period)
sample_delta = new_data_period * sample_dates_scale
new_sample_ts = new_max - sample_delta
return new_sample_ts
def adjust_data_timespan(dataframe, timestamp_col='timestamp', new_period='2d', new_max_date_str='now'):
'''
Adjust the dataframe timestamps to the new time period
'''
# Calculate old time period
data_min = dataframe.timestamp.min()
data_max = dataframe.timestamp.max()
old_data_period = data_max-data_min
# Set new time period
new_time_period = pd.Timedelta(new_period)
new_max = pd.Timestamp(new_max_date_str)
new_min = new_max-new_time_period
new_data_period = new_max-new_min
# Apply the timestamp change
df = dataframe.copy()
df[timestamp_col] = df[timestamp_col].apply(lambda x: date_adjustment(x, data_max, new_max, old_data_period, new_data_period))
return df
import pandas as pd
# Fetch the transactions dataset from the server
transactions_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/data.csv', parse_dates=['timestamp'], nrows=500)
# Adjust the samples timestamp for the past 2 days
transactions_data = adjust_data_timespan(transactions_data, new_period='2d')
# Preview
transactions_data.head(3)
###Output
_____no_output_____
###Markdown
Transactions - Create a FeatureSet and Preprocessing Pipeline Create the FeatureSet (data pipeline) definition for the **credit transaction processing**, which describes the offline/online data transformations and aggregations. The feature store automatically adds an offline `parquet` target and an online `NoSQL` target when using `set_targets()`. The data pipeline consists of: * **Extracting** the data components (hour, day of week) * **Mapping** the age values * **One-hot encoding** the transaction category and the gender * **Aggregating** the amount (avg, sum, count, max over 2/12/24 hour time windows) * **Aggregating** the transactions per category (over a 14-day time window) * **Writing** the results to **offline** (Parquet) and **online** (NoSQL) targets
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fstore
from mlrun.feature_store.steps import OneHotEncoder, MapValues, DateExtractor
# Define the transactions FeatureSet
transaction_set = fstore.FeatureSet("transactions",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="transactions feature set")
# Define and add value mapping
main_categories = ["es_transportation", "es_health", "es_otherservices",
"es_food", "es_hotelservices", "es_barsandrestaurants",
"es_tech", "es_sportsandtoys", "es_wellnessandbeauty",
"es_hyper", "es_fashion", "es_home", "es_contents",
"es_travel", "es_leisure"]
# One Hot Encode the newly defined mappings
one_hot_encoder_mapping = {'category': main_categories,
'gender': list(transactions_data.gender.unique())}
# Define the graph steps
transaction_set.graph\
.to(DateExtractor(parts = ['hour', 'day_of_week'], timestamp_col = 'timestamp'))\
.to(MapValues(mapping={'age': {'U': '0'}}, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))
# Add aggregations for 2, 12, and 24 hour time windows
transaction_set.add_aggregation(name='amount',
column='amount',
operations=['avg','sum', 'count','max'],
windows=['2h', '12h', '24h'],
period='1h')
# Add the category aggregations over a 14 day window
for category in main_categories:
transaction_set.add_aggregation(name=category,column=f'category_{category}',
operations=['count'], windows=['14d'], period='1d')
# Add default (offline-parquet & online-nosql) targets
transaction_set.set_targets()
# Plot the pipeline so we can see the different steps
transaction_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
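###Markdown
Before running the full ingestion, it can be useful to dry-run the pipeline on a few rows and inspect the resulting columns. A minimal sketch, assuming the in-memory preview API (`fstore.preview`; older MLRun versions expose the same functionality as `infer`, which is used in the patient demo later in this document):
###Code
# Dry-run the transformation graph on a small sample (nothing is written to the targets)
preview_df = fstore.preview(transaction_set, transactions_data.head())
print(preview_df.columns.tolist())
###Output
_____no_output_____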
###Markdown
Transactions - Ingestion
###Code
# Ingest our transactions dataset through our defined pipeline
transactions_df = fstore.ingest(transaction_set, transactions_data,
infer_options=fstore.InferOptions.default())
transactions_df.head(3)
###Output
persist count = 0
persist count = 100
persist count = 200
persist count = 300
persist count = 400
persist count = 500
persist count = 600
persist count = 700
persist count = 800
persist count = 900
persist count = 1000
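###Markdown
Once the data is ingested, the offline (Parquet) target can be queried through a feature vector; this is how a training dataset would typically be assembled later. The snippet below is a sketch only, and the feature names assume the default `<name>_<operation>_<window>` aggregation naming:
###Code
# Assemble a small offline dataset from the transactions feature set
vector = fstore.FeatureVector("transactions-preview",
                              ["transactions.amount_avg_2h",
                               "transactions.amount_max_24h",
                               "transactions.es_transportation_count_14d"])
offline_df = fstore.get_offline_features(vector).to_dataframe()
offline_df.head()
###Output
_____no_output_____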
###Markdown
1.2 - User Events User Events - Fetching
###Code
# Fetch our user_events dataset from the server
user_events_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/events.csv',
index_col=0, quotechar="\'", parse_dates=['timestamp'], nrows=500)
# Adjust to the last 2 days to see the latest aggregations in our online feature vectors
user_events_data = adjust_data_timespan(user_events_data, new_period='2d')
# Preview
user_events_data.head(3)
###Output
_____no_output_____
###Markdown
User Events - Create a FeatureSet and Preprocessing Pipeline Now we define the events feature set. This is a straightforward pipeline in which we only one-hot encode the event categories and save the data to the default targets.
###Code
user_events_set = fstore.FeatureSet("events",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="user events feature set")
# Define and add value mapping
events_mapping = {'event': list(user_events_data.event.unique())}
# One Hot Encode
user_events_set.graph.to(OneHotEncoder(mapping=events_mapping))
# Add default (offline-parquet & online-nosql) targets
user_events_set.set_targets()
# Plot the pipeline so we can see the different steps
user_events_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
###Markdown
User Events - Ingestion
###Code
# Ingestion of our newly created events feature set
events_df = fstore.ingest(user_events_set, user_events_data)
events_df.head(3)
###Output
persist count = 0
persist count = 100
persist count = 200
persist count = 300
persist count = 400
persist count = 500
###Markdown
Step 2 - Create a labels dataset for model training Label Set - Create a FeatureSet This feature set contains the label for the fraud demo; it is ingested directly into the default targets without any changes.
###Code
def create_labels(df):
labels = df[['fraud','source','timestamp']].copy()
labels = labels.rename(columns={"fraud": "label"})
labels['timestamp'] = labels['timestamp'].astype("datetime64[ms]")
labels['label'] = labels['label'].astype(int)
labels.set_index('source', inplace=True)
return labels
# Define the "labels" feature set
labels_set = fstore.FeatureSet("labels",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="training labels",
engine="pandas")
labels_set.graph.to(name="create_labels", handler=create_labels)
# Specify only the Parquet (offline) target, since it's not used for real-time
labels_set.set_targets(['parquet'], with_defaults=False)
labels_set.plot(with_targets=True)
###Output
_____no_output_____
###Markdown
Label Set - Ingestion
###Code
# Ingest the labels feature set
labels_df = fstore.ingest(labels_set, transactions_data)
labels_df.head(3)
###Output
_____no_output_____
###Markdown
Step 3 - Deploy a real-time pipeline When dealing with real-time aggregation, it's important to be able to update these aggregations in real time. For this purpose, we create live serving functions that update the online feature store of the `transactions` FeatureSet and `events` FeatureSet. Using MLRun's `serving` runtime, we create a Nuclio function loaded with our feature set's computational graph definition and a source definition for the trigger. Notice that the implementation below does not require any rewrite of the pipeline logic. 3.1 - Transactions Transactions - Deploy our FeatureSet live endpoint
###Code
# Create iguazio v3io stream and transactions push API endpoint
transaction_stream = f'v3io:///projects/{project.name}/streams/transaction'
transaction_pusher = mlrun.datastore.get_stream_pusher(transaction_stream)
# Define the source stream trigger (use v3io streams)
# we will define the `key` and `time` fields (extracted from the Json message).
source = mlrun.datastore.sources.StreamSource(path=transaction_stream , key_field='source', time_field='timestamp')
# Deploy the transactions feature set's ingestion service over a real-time (Nuclio) serverless function
# you can use the run_config parameter to pass function/service specific configuration
transaction_set_endpoint = fstore.deploy_ingestion_service(featureset=transaction_set, source=source)
###Output
> 2022-03-16 05:45:43,035 [info] Starting remote function deploy
2022-03-16 05:45:43 (info) Deploying function
2022-03-16 05:45:43 (info) Building
2022-03-16 05:45:43 (info) Staging files and preparing base images
2022-03-16 05:45:43 (warn) Python 3.6 runtime is deprecated and will soon not be supported. Please migrate your code and use Python 3.7 runtime (`python:3.7`) or higher
2022-03-16 05:45:43 (info) Building processor image
2022-03-16 05:47:03 (info) Build complete
2022-03-16 05:47:08 (info) Function deploy complete
> 2022-03-16 05:47:08,835 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-fraud-demo-admin-transactions-ingest.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['fraud-demo-admin-transactions-ingest-fraud-demo-admin.default-tenant.app.xtvtjecfcssi.iguazio-cd1.com/']}
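###Markdown
Because this deployment is triggered by the v3io stream, records can also be ingested by pushing them onto the stream with the pusher created above (in addition to the HTTP test in the next section). A minimal sketch, assuming the stream pusher exposes a `push()` method as in the MLRun examples:
###Code
import json

# Push one record onto the transactions stream; the deployed function
# picks it up and runs it through the same ingestion graph
record = json.loads(transactions_data.sample(1).to_json(orient='records'))[0]
record['timestamp'] = str(pd.Timestamp.now())
transaction_pusher.push(record)
###Output
_____no_output_____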
###Markdown
Transactions - Test the feature set HTTP endpoint By defining our `transactions` feature set we can now use MLRun and Storey to deploy it as a live endpoint, ready to ingest new data! Using MLRun's `serving` runtime, we created a Nuclio function loaded with our feature set's computational graph definition, exposed behind an HTTP endpoint.
###Code
import requests
import json
# Select a sample from the dataset and serialize it to JSON
transaction_sample = json.loads(transactions_data.sample(1).to_json(orient='records'))[0]
transaction_sample['timestamp'] = str(pd.Timestamp.now())
transaction_sample
# Post the sample to the ingestion endpoint
requests.post(transaction_set_endpoint, json=transaction_sample).text
###Output
_____no_output_____
###Markdown
3.2 - User Events User Events - Deploy our FeatureSet live endpoint Deploy the events feature set's ingestion service using the feature set and all the previously defined resources.
###Code
# Create iguazio v3io stream and events push API endpoint
events_stream = f'v3io:///projects/{project.name}/streams/events'
events_pusher = mlrun.datastore.get_stream_pusher(events_stream)
# Define the source stream trigger (use v3io streams)
# we will define the `key` and `time` fields (extracted from the Json message).
source = mlrun.datastore.sources.StreamSource(path=events_stream , key_field='source', time_field='timestamp')
# Deploy the events feature set's ingestion service over a real-time (Nuclio) serverless function
# you can use the run_config parameter to pass function/service specific configuration
events_set_endpoint = fstore.deploy_ingestion_service(featureset=user_events_set, source=source)
###Output
> 2022-03-16 05:47:09,035 [info] Starting remote function deploy
2022-03-16 05:47:09 (info) Deploying function
2022-03-16 05:47:09 (info) Building
2022-03-16 05:47:09 (info) Staging files and preparing base images
2022-03-16 05:47:09 (warn) Python 3.6 runtime is deprecated and will soon not be supported. Please migrate your code and use Python 3.7 runtime (`python:3.7`) or higher
2022-03-16 05:47:09 (info) Building processor image
###Markdown
User Events - Test the feature set HTTP endpoint
###Code
# Select a sample from the events dataset and serialize it to JSON
user_events_sample = json.loads(user_events_data.sample(1).to_json(orient='records'))[0]
user_events_sample['timestamp'] = str(pd.Timestamp.now())
user_events_sample
# Post the sample to the ingestion endpoint
requests.post(events_set_endpoint, json=user_events_sample).text
###Output
_____no_output_____
###Markdown
Part 1: Data Ingestion In this notebook we will learn how to **Ingest** different data sources to our **Feature Store**. Specifically, this patient data has been successfully used to treat hospitalized COVID-19 patients prior to their condition becoming severe or critical. To do this we will use a medical dataset which includes three types of data: - **Healthcare systems**: batch-updated dataset containing different lab test results (e.g., blood test results). - **Patient Records**: static dataset containing general patient details. - **Real-time sensors**: real-time patient metric monitoring sensor data. We will walk through the creation of an ingestion pipeline for each data source, with all the needed preprocessing and validation. We will run the pipeline locally within the notebook and then launch a real-time function to **ingest live data**, or schedule a cron to run the task when needed. Environment Setup Since our work is done in this project's scope, first define the project itself for all our MLRun work in this notebook.
###Code
import mlrun
from os import getenv
mlrun.set_environment(project='fsdemo', user_project=True)
# location of the output data files
data_path = f"{getenv('V3IO_HOME_URL')}/demos/feature-store/data/"
def move_timestamps(df, shift='0s'):
    ''' Update timestamps to the current time so we can see live aggregations '''
now = pd.to_datetime('now')
max_time = df['timestamp'].max()
time_shift = now-max_time
tmp_df = df.copy()
tmp_df['timestamp'] = tmp_df['timestamp'].apply(lambda t: t + time_shift + pd.to_timedelta(shift))
return tmp_df
###Output
_____no_output_____
###Markdown
Create Ingestion Pipeline With MLRun In this section we will ingest the lab measurements data using MLRun and Storey. Storey is the underlying implementation of the feature store used by MLRun; it is the engine that allows you to define and execute complex graphs that create the feature engineering pipeline. With Storey, you can define sources, transformations and targets. Many actions are available as part of the Storey library, but you can define additional actions easily; we will see such custom actions in later sections. For the execution, it is also possible to use Spark. The main difference between Storey and Spark pipelines is that Storey blocks are built for real-time workloads while Spark is more batch oriented. We will now do the following: - Create the `measurements` FeatureSet - Define the preprocessing graph, including aggregations - Ingest the data using the defined pipeline
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fs
# Import MLRun's Data Sources to set the wanted ingestion pipeline
from mlrun.datastore.sources import CSVSource, ParquetSource, HttpSource
# Import storey so it will be available on our scope
# when testing the pipeline
import storey
# Define the Lab Measurements FeatureSet
measurements_set = fs.FeatureSet("measurements",
entities=[fs.Entity("patient_id")],
timestamp_key='timestamp',
description="various patient health measurements")
# Get FeatureSet computation graph
measurements_graph = measurements_set.graph
###Output
_____no_output_____
###Markdown
Define the processing pipeline- Transformation function- Sliding window aggregation- Set targets (NoSQL and Parquet)
###Code
# Import pandas and load the sample CSV and load it as a datasource
# for our ingestion
import pandas as pd
measurements_df = pd.read_csv('https://s3.wasabisys.com/iguazio/data/patients/measurements.csv', index_col=0)
measurements_df['timestamp'] = pd.to_datetime(measurements_df['timestamp'])
measurements_df['timestamp'] = measurements_df['timestamp'].astype("datetime64[ms]")
measurements_df = pd.concat([move_timestamps(measurements_df, '-1h'), move_timestamps(measurements_df)]) # update timestamps
###Output
_____no_output_____
###Markdown
Take a look at the measurements dataset. This dataset includes a single measurement per row. The measurement type is defined by the `source` and `parameter` columns. We would like to transform this data so that each patient has multiple measurement columns. To do that, we need to create a new column for each `source` and `parameter` combination. For example, if `source` is 3 and `parameter` is 0, then the transformed dataset will have the measurement value in a new feature named `sp_3_0`. Following that, we create a sliding-window aggregation that averages the values over each time window.
###Code
measurements_df.head()
###Output
_____no_output_____
###Markdown
The following code performs the transformation, adds the aggregations and sets the targets, storing the values in a NoSQL database for online retrieval and in Parquet files for batch processing.
###Code
# Define transform to create sparse dataset for aggregation
# adding an extra column for the specific source-parameter pair's measurement
# ex: source=3, parameter=4, measurement=100 -> add extra column sp_3_4=100
def transform(event):
event["_".join(['sp', str(event["source"]), str(event["parameter"])])] = event["measurement"]
return event
# Define Measurement FeatureSet pipeline
measurements_graph.to(
"storey.Map", _fn="transform"
)
# Get the available source, parameter pairs for our aggregation
sps = list(measurements_df.apply(lambda x: '_'.join(['sp', str(x['source']), str(x['parameter'])]), axis=1).unique())
# Add aggregations on top of the created sparse
# features created by the transform function
for col in sps:
measurements_set.add_aggregation(name=f'agg_{col}',
column=col,
operations=['avg'],
window='1h',
period='30m')
# Add default (NoSQL via KV and Parquet) targets to save
# the ingestion results to
measurements_set.set_targets()
###Output
_____no_output_____
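###Markdown
To make the sparse-column trick concrete, here is what the `transform` function defined above does to a single event. This is a plain-Python check, independent of the pipeline, and the patient id is just a made-up illustration value:
###Code
# A single raw measurement event...
sample_event = {'patient_id': '123', 'source': 3, 'parameter': 0, 'measurement': 100}
# ...gains an extra sparse column named sp_<source>_<parameter>
print(transform(sample_event))
# expected: {'patient_id': '123', 'source': 3, 'parameter': 0, 'measurement': 100, 'sp_3_0': 100}
###Output
_____no_output_____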
###Markdown
You can plot the graph to visualize the pipeline:
###Code
# Plot the ingestion pipeline we defined
measurements_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Run ingestion task using MLRun & Storey In order to ingest the dataframe into the feature set, use the `ingest` function.
###Code
# Use our loaded DF as the data source and ingest it through
# the defined pipeline
resp = fs.ingest(measurements_set, measurements_df,
infer_options=fs.InferOptions.default())
resp.head()
# Save the FeatureSet and pipeline definition
measurements_set.save()
###Output
_____no_output_____
###Markdown
Ingest Patient Details Features In this section we will use MLRun to create our patient details datasource. We will do the following:- Create a `patient_details` FeatureSet- Add preprocessing transformations to the pipeline - Map ages to buckets and One Hot Encode them - Impute missing values- Test the processing pipeline with sample data- Run ingestion pipeline on top of the cluster Create the FeatureSet
###Code
# Add a feature set without a time column (static patient metadata)
patients_set = fs.FeatureSet("patient_details", entities=[fs.Entity("patient_id")],
description="personal and medical patient details")
# Get FeatureSet computation graph
graph = patients_set.spec.graph
###Output
_____no_output_____
###Markdown
Define the computation pipeline
###Code
# Define age buckets for our age value mapping
personal_details = {'age': {'ranges': [{'range': [0, 3], "value": "toddler"},
{'range': [3, 18], "value": "child"},
{'range': [18, 65], "value": "adult"},
{'range': [65, 120], "value": "elder"}]}}
# Define one hot encoding values map
one_hot_encoder_mapping = {'age_mapped': ['toddler', 'child', 'adult', 'elder']}
# Import MLRun's FeatureStore steps for easy
# use in our pipeline
from mlrun.feature_store.steps import *
# Define the pipeline for our FeatureSet
graph.to(MapValues(mapping=personal_details, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))\
.to(Imputer(method='values', default_value=1, mapping={}))
# Add default NoSQL & Parquet ingestion targets
patients_set.set_targets()
# Plot the FeatureSet pipeline
patients_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
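###Markdown
The `ranges` syntax above is easiest to read as a simple lookup. The sketch below (not part of the MLRun pipeline, and assuming half-open `[low, high)` buckets) shows the bucket an age value is expected to map to before it is one-hot encoded:
###Code
def age_bucket(age, ranges=personal_details['age']['ranges']):
    '''Return the bucket label whose range contains the given age (illustration only)'''
    for r in ranges:
        low, high = r['range']
        if low <= age < high:
            return r['value']
    return None

print(age_bucket(2), age_bucket(10), age_bucket(45), age_bucket(80))
# expected: toddler child adult elder
###Output
_____no_output_____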
###Markdown
Test the Feature transformation pipeline Creating a transformation pipeline requires some trial and error. Therefore, it is useful to run the pipeline in memory without storing the resultant data. For this purpose, `infer` is used. This function receives as input any sample DataFrame, performs all the graph steps and outputs the transformed DataFrame.
###Code
# Load the sample patient details data
patients_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
# Run local ingestion test
fs.infer(patients_set, patients_df.head())
###Output
_____no_output_____
###Markdown
Save the FeatureSet and run full ingestion task Once you are satisfied with the transformation pipeline, ingest the full DataFrame and store the data.
###Code
# Save the FeatureSet
patients_set.save()
# Run Ingestion task
resp = fs.ingest(patients_set, patients_df,
infer_options=fs.InferOptions.default())
###Output
_____no_output_____
###Markdown
Start Immediate or Scheduled Ingestion Job (over Kubernetes) Another useful method to ingest data is by creating a Kubernetes job. This may be necessary to process large amounts of data as well as to process any recurring data. With MLRun it is easy to take the pipeline and run it as a job. This is done by: 1. Defining a source, specifically here a Parquet file source 2. Defining a configuration where `local` is set to `False` 3. Mounting to the provisioned storage by calling `auto_mount` 4. Running `ingest` with the source and run configuration
###Code
source = ParquetSource('pq', 'https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
config = fs.RunConfig(local=False).apply(mlrun.platforms.auto_mount())
fs.ingest(patients_set, source, run_config=config)
###Output
> 2021-05-06 15:27:08,769 [info] starting run patient_details_ingest uid=76f197b8ab3347d1b995a5ea55d0a98a DB=http://mlrun-api:8080
> 2021-05-06 15:27:09,022 [info] Job is running in the background, pod: patient-details-ingest-g9hgn
> 2021-05-06 15:27:15,073 [info] starting ingestion task to store://feature-sets/fsdemo-admin/patient_details:latest
> 2021-05-06 15:27:15,745 [info] ingestion task completed, targets:
> 2021-05-06 15:27:15,746 [info] [{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-admin/fs/parquet/sets/patient_details-latest.parquet', 'status': 'created', 'updated': '2021-05-06T15:27:15.432576+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-admin/fs/nosql/sets/patient_details-latest', 'status': 'created', 'updated': '2021-05-06T15:27:15.432947+00:00'}]
> 2021-05-06 15:27:15,936 [info] run executed, status=completed
final state: completed
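###Markdown
The same mechanism can be used for recurring ingestion by attaching a cron schedule to the source. The snippet below is only a sketch; it assumes your MLRun version supports a `schedule` argument on batch sources such as `ParquetSource`, so check the API reference for your installation before relying on it:
###Code
# Hypothetical recurring ingestion: re-ingest the parquet file every hour
scheduled_source = ParquetSource('pq',
                                 'https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet',
                                 schedule='0 * * * *')
fs.ingest(patients_set, scheduled_source, run_config=config)
###Output
_____no_output_____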
###Markdown
Real-time Early-Sense Sensor Ingestion (HTTP or Stream Processing With Nuclio) In this section we will use MLRun to create our Early Sense Sensor datasource. We will do the following:- Create early sense FeatureSet- Add Preprocessing transformations to the Pipeline using custom functions - Drop and Rename columns - Aggregations- Add Feature Validator to detect bad sensor readings- Test the processing pipeline with sample data- Deploy the FeatureSet ingestion service as a live rest endpoint
###Code
early_sense_set = fs.FeatureSet("early_sense", entities=[fs.Entity("patient_id")], timestamp_key='timestamp',
description="real time patient bed sensor data")
###Output
_____no_output_____
###Markdown
Define data validation & quality policy We can define validations at the feature level. For example, here we define a validation to check that the heart rate value is between 0 and 220 and the respiratory rate is between 0 and 25.
###Code
from mlrun.features import MinMaxValidator
early_sense_set["hr"] = fs.Feature(validator = MinMaxValidator(min=0, max=220, severity="info"))
early_sense_set["rr"] = fs.Feature(validator = MinMaxValidator(min=0, max=25, severity="info"))
###Output
_____no_output_____
###Markdown
Define custom processing classes In the previous sections we used transformation steps that are available as part of Storey. Here we show how to create custom transformation classes. We will later run these classes as part of a Nuclio serverless real-time function; therefore, we also use the nuclio `start-code` and `end-code` comments.
###Code
# nuclio: start-code
# We will import storey here too so it will
# be included in our function code (within the nuclio comment block)
import json
import storey
from typing import List, Dict
# The custom steps are based on `storey.MapClass`;
# when they are called in the graph, the `do(self, event)`
# method is activated.
# A to_dict(self) method is also required by MLRun
# to allow the class to be recreated on remote functions
class DropColumns(storey.MapClass):
def __init__(self, columns: List[str], **kwargs):
super().__init__(**kwargs)
self.columns = columns
def do(self, event):
for col in self.columns:
if col in event:
del event[col]
return event
def to_dict(self):
return {
"class_name": "DropColumns",
"name": self.name or "DropColumns",
"class_args": {
"columns": self.columns
},
}
class RenameColumns(storey.MapClass):
def __init__(self, mapping: Dict[str, str], **kwargs):
super().__init__(**kwargs)
self.mapping = mapping
def do(self, event):
for old_col, new_col in self.mapping.items():
try:
event[new_col] = event.pop(old_col)
except Exception as e:
print(f'{old_col} doesnt exist')
return event
def to_dict(self):
return {
"class_name": "RenameColumns",
"name": self.name or "RenameColumns",
"class_args": {"mapping": self.mapping},
}
# nuclio: end-code
###Output
_____no_output_____
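###Markdown
Since these are ordinary Python classes, the `do()` logic can be unit-tested locally before being wired into the graph. A quick check on a hand-written event dict (the values are made up for illustration):
###Code
# Quick local check of the custom steps -- plain dicts stand in for storey events
event = {'patient_id': '123', 'bad': 38, 'hr_is_error': True, 'hr': 220.0}
event = DropColumns(['hr_is_error']).do(event)
event = RenameColumns({'bad': 'bed'}).do(event)
print(event)
# expected: {'patient_id': '123', 'hr': 220.0, 'bed': 38}
###Output
_____no_output_____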
###Markdown
Define the Real-Time Pipeline Define the transformation pipeline below. This is done just like in the previous sections.
###Code
# Configure the list of columns to drop from
# the raw data
drop_columns = ['hr_is_error',
'rr_is_error',
'spo2_is_error',
'movements_is_error',
'turn_count_is_error',
'is_in_bed_is_error']
# Define the computational graph, including our custom functions
early_sense_set.graph.to(DropColumns(drop_columns), after='start')\
.to(RenameColumns(mapping={'bad': 'bed'}))
# Add real-time aggregations on top of our sensor readings
for col in ['hr', 'rr', 'spo2', 'movements', 'turn_count']:
early_sense_set.add_aggregation(col + "_h", col, ['avg', 'max', 'min'], "1h")
early_sense_set.add_aggregation(col + "_d", col, ['avg', 'max', 'min'], "1d")
early_sense_set.add_aggregation('in_bed_h', 'is_in_bed', ['avg'], "1h")
early_sense_set.add_aggregation('in_bed_d', 'is_in_bed', ['avg'], "1d")
# Set NoSQL and Parquet default targets
early_sense_set.set_targets()
# Plot the pipeline
early_sense_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Test/debug the real-time pipeline locally in the notebook
###Code
# infer schema + stats, show the final feature set (after the data pipeline)
early_sense_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/early_sense.parquet')
early_sense_df['timestamp'] = pd.to_datetime(early_sense_df['timestamp'])
early_sense_df = move_timestamps(early_sense_df) # update timestamps
fs.infer(early_sense_set, early_sense_df.head())
# Run ingest pipeline
df=fs.ingest(early_sense_set, early_sense_df)
# Save the early-sense Featureset
early_sense_set.save()
# print the FeatureSet spec
print(early_sense_set.status.targets.to_dict())
###Output
[{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-admin/fs/parquet/sets/early_sense-latest.parquet', 'status': 'created', 'updated': '2021-05-06T15:27:46.222973+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-admin/fs/nosql/sets/early_sense-latest', 'status': 'created', 'updated': '2021-05-06T15:27:46.223349+00:00'}]
###Markdown
Deploy as Real-Time Stream Processing Function (Nuclio Serverless) Features are not static. For example, it is common that features include different aggregations that need to be updated as data continues to flow. A real-time pipeline requires this data to be up to date. Therefore, we need a convenient way to ingest data, not just as a batch, but per specific input. MLRun can convert any code to a real-time serverless function, including the pipeline. This is done by performing the following steps: 1. Define a source, in this case an HTTP source 2. Convert the previously defined code to a serving function 3. Create a configuration to run the function 4. Deploy an ingestion service with the FeatureSet, source and the configuration
###Code
# Set a new HTTPSource; this tells our ingestion service
# to set up a Nuclio function that acts as the REST endpoint
# through which we receive the data
source = HttpSource(key_field='patient_id', time_field='timestamp')
# Take the relevant code parts from this notebook and create
# an MLRun function from them so we can run the pipeline
# as a Nuclio function
func = mlrun.code_to_function("ingest", kind="serving")
nuclio_config = fs.RunConfig(function=func, local=False).apply(mlrun.platforms.auto_mount())
# Deploy the Online ingestion service using the pipeline definition from before
# with our new HTTP Source and our define Function
server = fs.deploy_ingestion_service(early_sense_set, source, run_config=nuclio_config)
###Output
> 2021-05-06 15:29:52,032 [info] Starting remote function deploy
2021-05-06 15:29:52 (info) Deploying function
{'level': 'info', 'message': 'Deploying function', 'name': 'fsdemo-admin-ingest', 'time': 1620314992169.7139}
2021-05-06 15:29:52 (info) Building
{'level': 'info', 'message': 'Building', 'name': 'fsdemo-admin-ingest', 'time': 1620314992169.7478, 'versionInfo': 'Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3'}
2021-05-06 15:29:52 (info) Staging files and preparing base images
{'level': 'info', 'message': 'Staging files and preparing base images', 'name': 'deployer', 'time': 1620314992237.7905}
2021-05-06 15:29:52 (info) Building processor image
{'imageName': 'fsdemo-admin-fsdemo-admin-ingest-processor:latest', 'level': 'info', 'message': 'Building processor image', 'name': 'deployer', 'time': 1620314992238.347}
2021-05-06 15:29:55 (info) Build complete
{'level': 'info', 'message': 'Build complete', 'name': 'deployer', 'result': {'Image': 'fsdemo-admin-fsdemo-admin-ingest-processor:latest', 'UpdatedFunctionConfig': {'metadata': {'annotations': {'nuclio.io/generated_by': 'function generated from https://github.com/mlrun/mlrun#004d7b6797e3292525d220bb4389470342ebe752:ingest.ipynb'}, 'labels': {'mlrun/class': 'serving', 'nuclio.io/project-name': 'fsdemo-admin'}, 'name': 'fsdemo-admin-ingest', 'namespace': 'default-tenant'}, 'spec': {'build': {'baseImage': 'mlrun/mlrun:0.6.3-rc9', 'codeEntryType': 'sourceCode', 'functionSourceCode': 'IyBHZW5lcmF0ZWQgYnkgbnVjbGlvLmV4cG9ydC5OdWNsaW9FeHBvcnRlcgoKaW1wb3J0IGpzb24KaW1wb3J0IHN0b3JleQpmcm9tIHR5cGluZyBpbXBvcnQgTGlzdCwgRGljdAoKCmNsYXNzIERyb3BDb2x1bW5zKHN0b3JleS5NYXBDbGFzcyk6CiAgICBkZWYgX19pbml0X18oc2VsZiwgY29sdW1uczogTGlzdFtzdHJdLCAqKmt3YXJncyk6CiAgICAgICAgc3VwZXIoKS5fX2luaXRfXygqKmt3YXJncykKICAgICAgICBzZWxmLmNvbHVtbnMgPSBjb2x1bW5zCgogICAgZGVmIGRvKHNlbGYsIGV2ZW50KToKICAgICAgICBmb3IgY29sIGluIHNlbGYuY29sdW1uczoKICAgICAgICAgICAgaWYgY29sIGluIGV2ZW50OgogICAgICAgICAgICAgICAgZGVsIGV2ZW50W2NvbF0KICAgICAgICByZXR1cm4gZXZlbnQKCiAgICBkZWYgdG9fZGljdChzZWxmKToKICAgICAgICByZXR1cm4gewogICAgICAgICAgICAiY2xhc3NfbmFtZSI6ICJEcm9wQ29sdW1ucyIsCiAgICAgICAgICAgICJuYW1lIjogc2VsZi5uYW1lIG9yICJEcm9wQ29sdW1ucyIsCiAgICAgICAgICAgICJjbGFzc19hcmdzIjogewogICAgICAgICAgICAgICAgImNvbHVtbnMiOiBzZWxmLmNvbHVtbnMKICAgICAgICAgICAgfSwKICAgICAgICB9CgpjbGFzcyBSZW5hbWVDb2x1bW5zKHN0b3JleS5NYXBDbGFzcyk6CiAgICBkZWYgX19pbml0X18oc2VsZiwgbWFwcGluZzogRGljdFtzdHIsIHN0cl0sICoqa3dhcmdzKToKICAgICAgICBzdXBlcigpLl9faW5pdF9fKCoqa3dhcmdzKQogICAgICAgIHNlbGYubWFwcGluZyA9IG1hcHBpbmcKCiAgICBkZWYgZG8oc2VsZiwgZXZlbnQpOgogICAgICAgIGZvciBvbGRfY29sLCBuZXdfY29sIGluIHNlbGYubWFwcGluZy5pdGVtcygpOgogICAgICAgICAgICB0cnk6CiAgICAgICAgICAgICAgICBldmVudFtuZXdfY29sXSA9IGV2ZW50LnBvcChvbGRfY29sKQogICAgICAgICAgICBleGNlcHQgRXhjZXB0aW9uIGFzIGU6CiAgICAgICAgICAgICAgICBwcmludChmJ3tvbGRfY29sfSBkb2VzbnQgZXhpc3QnKQogICAgICAgIHJldHVybiBldmVudAoKICAgIGRlZiB0b19kaWN0KHNlbGYpOgogICAgICAgIHJldHVybiB7CiAgICAgICAgICAgICJjbGFzc19uYW1lIjogIlJlbmFtZUNvbHVtbnMiLAogICAgICAgICAgICAibmFtZSI6IHNlbGYubmFtZSBvciAiUmVuYW1lQ29sdW1ucyIsCiAgICAgICAgICAgICJjbGFzc19hcmdzIjogeyJtYXBwaW5nIjogc2VsZi5tYXBwaW5nfSwKICAgICAgICB9CgoKZnJvbSBtbHJ1bi5ydW50aW1lcyBpbXBvcnQgbnVjbGlvX2luaXRfaG9vawpkZWYgaW5pdF9jb250ZXh0KGNvbnRleHQpOgogICAgbnVjbGlvX2luaXRfaG9vayhjb250ZXh0LCBnbG9iYWxzKCksICdzZXJ2aW5nX3YyJykKCmRlZiBoYW5kbGVyKGNvbnRleHQsIGV2ZW50KToKICAgIHJldHVybiBjb250ZXh0Lm1scnVuX2hhbmRsZXIoY29udGV4dCwgZXZlbnQpCg==', 'noBaseImagesPull': True, 'offline': True, 'registry': 'docker-registry.default-tenant.app.yh30.iguazio-c0.com'}, 'env': [{'name': 'V3IO_API', 'value': 'v3io-webapi.default-tenant.svc:8081'}, {'name': 'V3IO_USERNAME', 'value': 'admin'}, {'name': 'V3IO_ACCESS_KEY', 'value': '142a98fa-bef9-4095-b2d0-cab733f53238'}, {'name': 'MLRUN_LOG_LEVEL', 'value': 'DEBUG'}, {'name': 'MLRUN_DEFAULT_PROJECT', 'value': 'fsdemo-admin'}, {'name': 'MLRUN_DBPATH', 'value': 'http://mlrun-api:8080'}, {'name': 'MLRUN_NAMESPACE', 'value': 'default-tenant'}, {'name': 'SERVING_SPEC_ENV', 'value': '{"function_uri": "fsdemo-admin/ingest", "version": "v2", "parameters": {"infer_options": 0, "featureset": "store://feature-sets/fsdemo-admin/early_sense", "source": {"kind": "http", "path": "None", "key_field": "patient_id", "time_field": "timestamp", "online": true}}, "graph": {"states": {"DropColumns": {"kind": "task", "class_name": "DropColumns", "class_args": {"columns": ["hr_is_error", "rr_is_error", "spo2_is_error", "movements_is_error", 
"turn_count_is_error", "is_in_bed_is_error"]}}, "RenameColumns": {"kind": "task", "class_name": "RenameColumns", "class_args": {"mapping": {"bad": "bed"}}, "after": ["DropColumns"]}, "Aggregates": {"kind": "task", "class_name": "storey.AggregateByKey", "class_args": {"aggregates": [{"name": "hr", "column": "hr", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "rr", "column": "rr", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "spo2", "column": "spo2", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "movements", "column": "movements", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "turn_count", "column": "turn_count", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "in_bed", "column": "is_in_bed", "operations": ["avg"], "windows": ["1h", "1d"]}], "table": "."}, "after": ["RenameColumns"]}}}, "load_mode": null, "functions": {}, "graph_initializer": "mlrun.feature_store.ingestion.featureset_initializer", "error_stream": null, "track_models": null}'}], 'eventTimeout': '', 'handler': '01-ingest-datasources:handler', 'maxReplicas': 4, 'minReplicas': 1, 'platform': {}, 'resources': {}, 'runtime': 'python:3.6', 'securityContext': {}, 'serviceType': 'NodePort', 'triggers': {'default-http': {'attributes': {'serviceType': 'NodePort'}, 'class': '', 'kind': 'http', 'maxWorkers': 1, 'name': 'default-http'}}, 'volumes': [{'volume': {'flexVolume': {'driver': 'v3io/fuse', 'options': {'accessKey': '142a98fa-bef9-4095-b2d0-cab733f53238'}}, 'name': 'v3io'}, 'volumeMount': {'mountPath': '/v3io', 'name': 'v3io'}}, {'volume': {'flexVolume': {'driver': 'v3io/fuse', 'options': {'accessKey': '142a98fa-bef9-4095-b2d0-cab733f53238'}}, 'name': 'v3io'}, 'volumeMount': {'mountPath': '/User', 'name': 'v3io', 'subPath': 'users/admin'}}]}}}, 'time': 1620314995613.7964}
> 2021-05-06 15:30:03,749 [info] function deployed, address=default-tenant.app.yh30.iguazio-c0.com:31610
###Markdown
Test the function by sending data to the HTTP endpoint
###Code
test_data = {'patient_id': '838-21-8151',
'bad': 38,
'department': '01e9fe31-76de-45f0-9aed-0f94cc97bca0',
'room': 1,
'hr': 220.0,
'hr_is_error': True,
'rr': 5,
'rr_is_error': True,
'spo2': 85,
'spo2_is_error': True,
'movements': 0.0,
'movements_is_error': True,
'turn_count': 0.0,
'turn_count_is_error': True,
'is_in_bed': 1,
'is_in_bed_is_error': False,
'timestamp': 1606843455.906352
}
import requests
import json
response = requests.post(server, json=test_data)
response.text
###Output
_____no_output_____
###Markdown
Ingest labels Finally, we define the label data; this will be useful in the next notebook, where we train a model. Create Labels Set
###Code
# Define labels metric from the early sense error data
error_columns = [c for c in early_sense_df.columns if 'error' in c]
labels = early_sense_df.loc[:, ['patient_id', 'timestamp'] + error_columns]
labels['label'] = labels.apply(lambda x: sum([x[c] for c in error_columns])>(len(error_columns)*0.7), axis=1)
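# i.e., a row gets a positive label when more than 70% of its *_is_error flags are set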
labels.to_parquet(data_path + 'labels.parquet')
#labels_df = pd.read_parquet('labels.parquet')
labels_set = fs.FeatureSet("labels", entities=[fs.Entity("patient_id")], timestamp_key='timestamp',
description="training labels")
labels_set.set_targets()
df = fs.infer(labels_set, data_path + 'labels.parquet')
df.head()
df = fs.ingest(labels_set, data_path + 'labels.parquet')
labels_set.save()
###Output
_____no_output_____
###Markdown
Part 1: Data Ingestion In this notebook we will learn how to **Ingest** different data sources to our **Feature Store**. Specifically, this type of patient data has been used successfully to help treat hospitalized COVID-19 patients before their condition becomes severe or critical. To do this we will use a medical dataset which includes three types of data: - **Healthcare systems**: Batch-updated dataset containing different lab test results (e.g., blood test results).- **Patient Records**: Static dataset containing general patient details.- **Real-time sensors**: Real-time patient metric monitoring sensor. We will walk through the creation of an ingestion pipeline for each data source with all the needed preprocessing and validation. We will run the pipeline locally within the notebook and then launch a real-time function to **ingest live data** or schedule a cron to run the task when needed. Environment Setup Since our work is done within this project's scope, first define the project itself for all our MLRun work in this notebook.
###Code
import mlrun
project, artifact_path = mlrun.set_environment(project='fsdemo', user_project=True)
# location of the output data files
data_path = f"{artifact_path}/data/"
def move_timestamps(df, shift='0s'):
''' Update timestamps to current time so we can see live aggregations '''
now = pd.to_datetime('now')
max_time = df['timestamp'].max()
time_shift = now-max_time
tmp_df = df.copy()
tmp_df['timestamp'] = tmp_df['timestamp'].apply(lambda t: t + time_shift + pd.to_timedelta(shift))
return tmp_df
###Output
_____no_output_____
###Markdown
Create Ingestion Pipeline With MLRun In this section we will ingest the lab measurements data using MLRun and Storey. Storey is the underlying implementation of the feature store which is used by MLRun. It is the engine that allows you to define and execute complex graphs that create the feature engineering pipeline. With Storey, you can define sources, transformations and targets; many actions are available as part of the Storey library, but you can easily define additional actions. We will see these custom actions in later sections. For the execution, it is also possible to use Spark. The main difference between Storey and Spark pipelines is that Storey blocks are built for real-time workloads while Spark is more batch oriented. We will now do the following:- Create the `measurements` FeatureSet- Define Preprocessing graph including aggregations- Ingest the data using the defined pipeline
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fstore
# Import MLRun's Data Sources to set the wanted ingestion pipeline
from mlrun.datastore.sources import CSVSource, ParquetSource, HttpSource
# Import storey so it will be available on our scope
# when testing the pipeline
import storey
# Define the Lab Measurements FeatureSet
measurements_set = fstore.FeatureSet("measurements",
entities=[fstore.Entity("patient_id")],
timestamp_key='timestamp',
description="various patient health measurements")
# Get FeatureSet computation graph
measurements_graph = measurements_set.graph
###Output
_____no_output_____
###Markdown
Define the processing pipeline- Transformation function- Sliding window aggregation- Set targets (NoSQL and Parquet)
###Code
# Import pandas and load the sample CSV and load it as a datasource
# for our ingestion
import pandas as pd
measurements_df = pd.read_csv('https://s3.wasabisys.com/iguazio/data/patients/measurements.csv', index_col=0)
measurements_df['timestamp'] = pd.to_datetime(measurements_df['timestamp'])
measurements_df['timestamp'] = measurements_df['timestamp'].astype("datetime64[ms]")
measurements_df = pd.concat([move_timestamps(measurements_df, '-1h'), move_timestamps(measurements_df)]) # update timestamps
###Output
_____no_output_____
###Markdown
Take a look at the measurements dataset. This dataset includes a single measurement per row. The measurement type is defined by the `source` and `parameter` columns. We would like to transform this data, so each patient has multiple measurement columns. To do that, we will need to create a new column for each `source` and `parameter` combination. For example, if `source` is 3 and `parameter` is 0, then our transformed dataset will have the measurement value in a new feature named `sp_3_0`. Following that, we will create a sliding window aggregation that averages the values across that time window.
###Code
measurements_df.head()
###Output
_____no_output_____
###Markdown
The following code performs the transformation, adds the aggregation and sets the target to store the values to a NoSQL database for online retrieval and parquet files for batch processing.
###Code
# Define transform to create sparse dataset for aggregation
# adding an extra column for the specific source-parameter pair's measurement
# ex: source=3, parameter=4, measurement=100 -> add extra column sp_3_4=100
def transform(event):
event["_".join(['sp', str(event["source"]), str(event["parameter"])])] = event["measurement"]
return event
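# Illustration (hypothetical event, not part of the pipeline): transform() turns
#   {"patient_id": "p1", "source": 3, "parameter": 0, "measurement": 98.6}
# into the same event with an extra sparse column "sp_3_0": 98.6,
# which the aggregations below can then average per patient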
# Define Measurement FeatureSet pipeline
measurements_graph.to(
"storey.Map", _fn="transform"
)
# Get the available source, parameter pairs for our aggregation
sps = list(measurements_df.apply(lambda x: '_'.join(['sp', str(x['source']), str(x['parameter'])]), axis=1).unique())
# Add aggregations on top of the created sparse
# features by the transform function
for col in sps:
measurements_set.add_aggregation(name=f'agg_{col}',
column=col,
operations=['avg'],
windows='1h',
period='30m')
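# Note (assumption about MLRun's aggregate feature naming): each aggregation above is
# expected to surface as a feature such as agg_sp_3_0_avg_1h, i.e. <name>_<operation>_<window>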
# Add default (NoSQL via KV and Parquet) targets to save
# the ingestion results to
measurements_set.set_targets()
###Output
_____no_output_____
###Markdown
You can plot the graph to visualize the pipeline:
###Code
# Plot the ingestion pipeline we defined
measurements_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Run ingestion task using MLRun & Storey In order to ingest the dataframe into the feature set, use the `ingest` function.
###Code
# Use our loaded DF as the datasource and ingest it through
# the defined pipeline
resp = fstore.ingest(measurements_set, measurements_df,
infer_options=fstore.InferOptions.default())
resp.head()
# Save the FeatureSet and pipeline definition
measurements_set.save()
###Output
_____no_output_____
###Markdown
Ingest Patient Details Features In this section we will use MLRun to create our patient details datasource. We will do the following:- Create a `patient_details` FeatureSet- Add preprocessing transformations to the pipeline - Map ages to buckets and One Hot Encode them - Impute missing values- Test the processing pipeline with sample data- Run ingestion pipeline on top of the cluster Create the FeatureSet
###Code
# add feature set without time column (stock ticker metadata)
patients_set = fstore.FeatureSet("patient_details", entities=[fstore.Entity("patient_id")],
description="personal and medical patient details")
# Get FeatureSet computation graph
graph = patients_set.spec.graph
###Output
_____no_output_____
###Markdown
Define the computation pipeline
###Code
# Define age buckets for our age value mapping
personal_details = {'age': {'ranges': [{'range': [0, 3], "value": "toddler"},
{'range': [3, 18], "value": "child"},
{'range': [18, 65], "value": "adult"},
{'range': [65, 120], "value": "elder"}]}}
# Define one hot encoding values map
one_hot_encoder_mapping = {'age_mapped': ['toddler', 'child', 'adult', 'elder']}
# Import MLRun's FeatureStore steps for easy
# use in our pipeline
from mlrun.feature_store.steps import *
# Define the pipeline for our FeatureSet
graph.to(MapValues(mapping=personal_details, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))\
.to(Imputer(method='values', default_value=1, mapping={}))
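# Walking a hypothetical record through this pipeline: age=45 falls in the [18, 65)
# bucket and is mapped to 'adult' (the original `age` column is kept because
# with_original_features=True); the mapped value is then one-hot encoded per the
# 'age_mapped' mapping above, and any remaining missing values are imputed with 1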
# Add default NoSQL & Parquet ingestion targets
patients_set.set_targets()
# Plot the FeatureSet pipeline
patients_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Test the Feature transformation pipeline Creating a transformation pipeline requires some trial and error. Therefore, it is useful to run the pipeline in memory without storing the resultant data. For this purpose, `preview` is used. This function receives as input any sample DataFrame, performs all the graph steps and outputs the transformed DataFrame.
###Code
# Load the sample patient details data
patients_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
# Run local ingestion test
fstore.preview(patients_set, patients_df.head())
###Output
_____no_output_____
###Markdown
Save the FeatureSet and run full ingestion taskOnce you are satisfied with the transformation pipeline, ingest that full DataFrame and store the data.
###Code
# Save the FeatureSet
patients_set.save()
# Run Ingestion task
resp = fstore.ingest(patients_set, patients_df,
infer_options=fstore.InferOptions.default())
###Output
_____no_output_____
###Markdown
Start Immediate or Scheduled Ingestion Job (over Kubernetes) Another useful method to ingest data is by creating a Kubernetes job. This may be necessary to process large amounts of data as well as to process any recurring data. With MLRun it is easy to take the pipeline and run it as a job. This is done by:1. Define a source, specifically here we define a parquet file source2. Define a configuration where `local` is set to `False`3. Mount to the provisioned storage by calling `auto_mount`4. Run `ingest` with the source and run configuration
###Code
source = ParquetSource('pq', 'https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
config = fstore.RunConfig(local=False).apply(mlrun.platforms.auto_mount())
fstore.ingest(patients_set, source, run_config=config)
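# Assumption (not shown in this notebook): newer MLRun versions also support recurring
# ingestion by setting a cron string on the source, roughly:
#   source = ParquetSource('pq', <path>, schedule='0 * * * *')
# before calling fstore.ingest() with the same run_config - check the MLRun docs for
# the exact signature in your version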
###Output
> 2021-05-06 15:27:08,769 [info] starting run patient_details_ingest uid=76f197b8ab3347d1b995a5ea55d0a98a DB=http://mlrun-api:8080
> 2021-05-06 15:27:09,022 [info] Job is running in the background, pod: patient-details-ingest-g9hgn
> 2021-05-06 15:27:15,073 [info] starting ingestion task to store://feature-sets/fsdemo-admin/patient_details:latest
> 2021-05-06 15:27:15,745 [info] ingestion task completed, targets:
> 2021-05-06 15:27:15,746 [info] [{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-admin/fs/parquet/sets/patient_details-latest.parquet', 'status': 'created', 'updated': '2021-05-06T15:27:15.432576+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-admin/fs/nosql/sets/patient_details-latest', 'status': 'created', 'updated': '2021-05-06T15:27:15.432947+00:00'}]
> 2021-05-06 15:27:15,936 [info] run executed, status=completed
final state: completed
###Markdown
Real-time Early-Sense Sensor Ingestion (HTTP or Stream Processing With Nuclio) In this section we will use MLRun to create our Early Sense Sensor datasource. We will do the following:- Create early sense FeatureSet- Add Preprocessing transformations to the Pipeline using custom functions - Drop and Rename columns - Aggregations- Add Feature Validator to detect bad sensor readings- Test the processing pipeline with sample data- Deploy the FeatureSet ingestion service as a live rest endpoint
###Code
early_sense_set = fstore.FeatureSet("early_sense", entities=[fstore.Entity("patient_id")], timestamp_key='timestamp',
description="real time patient bed sensor data")
###Output
_____no_output_____
###Markdown
Define data validation & quality policyWe can define validations on the feature level. For example, define here validation to check if the heart-rate value is between 0 and 220 and respiratory rate is between 0 and 25.
###Code
from mlrun.features import MinMaxValidator
early_sense_set["hr"] = fstore.Feature(validator = MinMaxValidator(min=0, max=220, severity="info"))
early_sense_set["rr"] = fstore.Feature(validator = MinMaxValidator(min=0, max=25, severity="info"))
###Output
_____no_output_____
###Markdown
Define custom processing classes In the previous sections we used transformation steps that are available as part of Storey. Here we show how to create custom transformation classes. We will later run these functions as part of a Nuclio serverless real-time function; therefore, we also use the nuclio `start-code` and `end-code` comments.
###Code
# nuclio: start-code
# We will import storey here too so it will
# be included in our function code (within the nuclio comment block)
import json
import storey
from typing import List, Dict
# The custom functions are based on `storey.MapClass`
# when they are called in the graph the `do(self, event)`
# function will be activated.
# A to_dict(self) function is also required by MLRun
# to allow the class creation on remote functions
class DropColumns(storey.MapClass):
def __init__(self, columns: List[str], **kwargs):
super().__init__(**kwargs)
self.columns = columns
def do(self, event):
for col in self.columns:
if col in event:
del event[col]
return event
def to_dict(self):
return {
"class_name": "DropColumns",
"name": self.name or "DropColumns",
"class_args": {
"columns": self.columns
},
}
class RenameColumns(storey.MapClass):
def __init__(self, mapping: Dict[str, str], **kwargs):
super().__init__(**kwargs)
self.mapping = mapping
def do(self, event):
for old_col, new_col in self.mapping.items():
try:
event[new_col] = event.pop(old_col)
except Exception as e:
print(f'{old_col} doesnt exist')
return event
def to_dict(self):
return {
"class_name": "RenameColumns",
"name": self.name or "RenameColumns",
"class_args": {"mapping": self.mapping},
}
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Define the Real-Time PipelineDefine the transformation pipeline below. This is done just like the previous sections.
###Code
# Configure the list of columns to drop from
# the raw data
drop_columns = ['hr_is_error',
'rr_is_error',
'spo2_is_error',
'movements_is_error',
'turn_count_is_error',
'is_in_bed_is_error']
# Define the computational graph including our custom functions
early_sense_set.graph.to(DropColumns(drop_columns), after='start')\
.to(RenameColumns(mapping={'bad': 'bed'}))
# Add real-time aggregations on top of our sensor readings
for col in ['hr', 'rr', 'spo2', 'movements', 'turn_count']:
early_sense_set.add_aggregation(col + "_h", col, ['avg', 'max', 'min'], "1h")
early_sense_set.add_aggregation(col + "_d", col, ['avg', 'max', 'min'], "1d")
early_sense_set.add_aggregation('in_bed_h', 'is_in_bed', ['avg'], "1h")
early_sense_set.add_aggregation('in_bed_d', 'is_in_bed', ['avg'], "1d")
# Set NoSQL and Parquet default targets
early_sense_set.set_targets()
# Plot the pipeline
early_sense_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Test/debug the real-time pipeline locally in the notebook
###Code
# infer schema + stats, show the final feature set (after the data pipeline)
early_sense_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/early_sense.parquet')
early_sense_df['timestamp'] = pd.to_datetime(early_sense_df['timestamp'])
early_sense_df = move_timestamps(early_sense_df) # update timestamps
fstore.preview(early_sense_set, early_sense_df.head())
# Run ingest pipeline
df=fstore.ingest(early_sense_set, early_sense_df)
# Save the early-sense Featureset
early_sense_set.save()
# print the FeatureSet spec
print(early_sense_set.status.targets.to_dict())
###Output
[{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-admin/fs/parquet/sets/early_sense-latest.parquet', 'status': 'created', 'updated': '2021-05-06T15:27:46.222973+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-admin/fs/nosql/sets/early_sense-latest', 'status': 'created', 'updated': '2021-05-06T15:27:46.223349+00:00'}]
###Markdown
Deploy as Real-Time Stream Processing Function (Nuclio Serverless) Features are not static. For example, it is common that features include different aggregations that need to be updated as data continues to flow. A real-time pipeline requires this data to be up to date. Therefore, we need a convenient way to ingest data, not just in batch, but per specific input. MLRun can convert any code to a real-time serverless function, including the pipeline. This is done by performing the following steps:1. Define a source, in this case it's an HTTP source2. Convert the previously defined code to a serving function3. Create a configuration to run the function4. Deploy an ingestion service with the Featureset, source and the configuration
###Code
# Set a new HTTPSource, this will tell our ingestion service
# to setup a Nuclio function to act as the rest endpoint
# on which we will receive the data
source = HttpSource(key_field='patient_id', time_field='timestamp')
# Take the relevant code parts from this notebook and create
# an MLRun function from them so we can run the pipeline
# as a Nuclio function
func = mlrun.code_to_function("ingest", kind="serving")
nuclio_config = fstore.RunConfig(function=func, local=False).apply(mlrun.platforms.auto_mount())
# Deploy the Online ingestion service using the pipeline definition from before
# with our new HTTP Source and our defined function
server = fstore.deploy_ingestion_service(early_sense_set, source, run_config=nuclio_config)
###Output
> 2021-05-06 15:29:52,032 [info] Starting remote function deploy
2021-05-06 15:29:52 (info) Deploying function
{'level': 'info', 'message': 'Deploying function', 'name': 'fsdemo-admin-ingest', 'time': 1620314992169.7139}
2021-05-06 15:29:52 (info) Building
{'level': 'info', 'message': 'Building', 'name': 'fsdemo-admin-ingest', 'time': 1620314992169.7478, 'versionInfo': 'Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3'}
2021-05-06 15:29:52 (info) Staging files and preparing base images
{'level': 'info', 'message': 'Staging files and preparing base images', 'name': 'deployer', 'time': 1620314992237.7905}
2021-05-06 15:29:52 (info) Building processor image
{'imageName': 'fsdemo-admin-fsdemo-admin-ingest-processor:latest', 'level': 'info', 'message': 'Building processor image', 'name': 'deployer', 'time': 1620314992238.347}
2021-05-06 15:29:55 (info) Build complete
{'level': 'info', 'message': 'Build complete', 'name': 'deployer', 'result': {'Image': 'fsdemo-admin-fsdemo-admin-ingest-processor:latest', 'UpdatedFunctionConfig': {'metadata': {'annotations': {'nuclio.io/generated_by': 'function generated from https://github.com/mlrun/mlrun#004d7b6797e3292525d220bb4389470342ebe752:ingest.ipynb'}, 'labels': {'mlrun/class': 'serving', 'nuclio.io/project-name': 'fsdemo-admin'}, 'name': 'fsdemo-admin-ingest', 'namespace': 'default-tenant'}, 'spec': {'build': {'baseImage': 'mlrun/mlrun:0.6.3-rc9', 'codeEntryType': 'sourceCode', 'functionSourceCode': 'IyBHZW5lcmF0ZWQgYnkgbnVjbGlvLmV4cG9ydC5OdWNsaW9FeHBvcnRlcgoKaW1wb3J0IGpzb24KaW1wb3J0IHN0b3JleQpmcm9tIHR5cGluZyBpbXBvcnQgTGlzdCwgRGljdAoKCmNsYXNzIERyb3BDb2x1bW5zKHN0b3JleS5NYXBDbGFzcyk6CiAgICBkZWYgX19pbml0X18oc2VsZiwgY29sdW1uczogTGlzdFtzdHJdLCAqKmt3YXJncyk6CiAgICAgICAgc3VwZXIoKS5fX2luaXRfXygqKmt3YXJncykKICAgICAgICBzZWxmLmNvbHVtbnMgPSBjb2x1bW5zCgogICAgZGVmIGRvKHNlbGYsIGV2ZW50KToKICAgICAgICBmb3IgY29sIGluIHNlbGYuY29sdW1uczoKICAgICAgICAgICAgaWYgY29sIGluIGV2ZW50OgogICAgICAgICAgICAgICAgZGVsIGV2ZW50W2NvbF0KICAgICAgICByZXR1cm4gZXZlbnQKCiAgICBkZWYgdG9fZGljdChzZWxmKToKICAgICAgICByZXR1cm4gewogICAgICAgICAgICAiY2xhc3NfbmFtZSI6ICJEcm9wQ29sdW1ucyIsCiAgICAgICAgICAgICJuYW1lIjogc2VsZi5uYW1lIG9yICJEcm9wQ29sdW1ucyIsCiAgICAgICAgICAgICJjbGFzc19hcmdzIjogewogICAgICAgICAgICAgICAgImNvbHVtbnMiOiBzZWxmLmNvbHVtbnMKICAgICAgICAgICAgfSwKICAgICAgICB9CgpjbGFzcyBSZW5hbWVDb2x1bW5zKHN0b3JleS5NYXBDbGFzcyk6CiAgICBkZWYgX19pbml0X18oc2VsZiwgbWFwcGluZzogRGljdFtzdHIsIHN0cl0sICoqa3dhcmdzKToKICAgICAgICBzdXBlcigpLl9faW5pdF9fKCoqa3dhcmdzKQogICAgICAgIHNlbGYubWFwcGluZyA9IG1hcHBpbmcKCiAgICBkZWYgZG8oc2VsZiwgZXZlbnQpOgogICAgICAgIGZvciBvbGRfY29sLCBuZXdfY29sIGluIHNlbGYubWFwcGluZy5pdGVtcygpOgogICAgICAgICAgICB0cnk6CiAgICAgICAgICAgICAgICBldmVudFtuZXdfY29sXSA9IGV2ZW50LnBvcChvbGRfY29sKQogICAgICAgICAgICBleGNlcHQgRXhjZXB0aW9uIGFzIGU6CiAgICAgICAgICAgICAgICBwcmludChmJ3tvbGRfY29sfSBkb2VzbnQgZXhpc3QnKQogICAgICAgIHJldHVybiBldmVudAoKICAgIGRlZiB0b19kaWN0KHNlbGYpOgogICAgICAgIHJldHVybiB7CiAgICAgICAgICAgICJjbGFzc19uYW1lIjogIlJlbmFtZUNvbHVtbnMiLAogICAgICAgICAgICAibmFtZSI6IHNlbGYubmFtZSBvciAiUmVuYW1lQ29sdW1ucyIsCiAgICAgICAgICAgICJjbGFzc19hcmdzIjogeyJtYXBwaW5nIjogc2VsZi5tYXBwaW5nfSwKICAgICAgICB9CgoKZnJvbSBtbHJ1bi5ydW50aW1lcyBpbXBvcnQgbnVjbGlvX2luaXRfaG9vawpkZWYgaW5pdF9jb250ZXh0KGNvbnRleHQpOgogICAgbnVjbGlvX2luaXRfaG9vayhjb250ZXh0LCBnbG9iYWxzKCksICdzZXJ2aW5nX3YyJykKCmRlZiBoYW5kbGVyKGNvbnRleHQsIGV2ZW50KToKICAgIHJldHVybiBjb250ZXh0Lm1scnVuX2hhbmRsZXIoY29udGV4dCwgZXZlbnQpCg==', 'noBaseImagesPull': True, 'offline': True, 'registry': 'docker-registry.default-tenant.app.yh30.iguazio-c0.com'}, 'env': [{'name': 'V3IO_API', 'value': 'v3io-webapi.default-tenant.svc:8081'}, {'name': 'V3IO_USERNAME', 'value': 'admin'}, {'name': 'V3IO_ACCESS_KEY', 'value': '142a98fa-bef9-4095-b2d0-cab733f53238'}, {'name': 'MLRUN_LOG_LEVEL', 'value': 'DEBUG'}, {'name': 'MLRUN_DEFAULT_PROJECT', 'value': 'fsdemo-admin'}, {'name': 'MLRUN_DBPATH', 'value': 'http://mlrun-api:8080'}, {'name': 'MLRUN_NAMESPACE', 'value': 'default-tenant'}, {'name': 'SERVING_SPEC_ENV', 'value': '{"function_uri": "fsdemo-admin/ingest", "version": "v2", "parameters": {"infer_options": 0, "featureset": "store://feature-sets/fsdemo-admin/early_sense", "source": {"kind": "http", "path": "None", "key_field": "patient_id", "time_field": "timestamp", "online": true}}, "graph": {"states": {"DropColumns": {"kind": "task", "class_name": "DropColumns", "class_args": {"columns": ["hr_is_error", "rr_is_error", "spo2_is_error", "movements_is_error", 
"turn_count_is_error", "is_in_bed_is_error"]}}, "RenameColumns": {"kind": "task", "class_name": "RenameColumns", "class_args": {"mapping": {"bad": "bed"}}, "after": ["DropColumns"]}, "Aggregates": {"kind": "task", "class_name": "storey.AggregateByKey", "class_args": {"aggregates": [{"name": "hr", "column": "hr", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "rr", "column": "rr", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "spo2", "column": "spo2", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "movements", "column": "movements", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "turn_count", "column": "turn_count", "operations": ["avg", "max", "min"], "windows": ["1h", "1d"]}, {"name": "in_bed", "column": "is_in_bed", "operations": ["avg"], "windows": ["1h", "1d"]}], "table": "."}, "after": ["RenameColumns"]}}}, "load_mode": null, "functions": {}, "graph_initializer": "mlrun.feature_store.ingestion.featureset_initializer", "error_stream": null, "track_models": null}'}], 'eventTimeout': '', 'handler': '01-ingest-datasources:handler', 'maxReplicas': 4, 'minReplicas': 1, 'platform': {}, 'resources': {}, 'runtime': 'python:3.6', 'securityContext': {}, 'serviceType': 'NodePort', 'triggers': {'default-http': {'attributes': {'serviceType': 'NodePort'}, 'class': '', 'kind': 'http', 'maxWorkers': 1, 'name': 'default-http'}}, 'volumes': [{'volume': {'flexVolume': {'driver': 'v3io/fuse', 'options': {'accessKey': '142a98fa-bef9-4095-b2d0-cab733f53238'}}, 'name': 'v3io'}, 'volumeMount': {'mountPath': '/v3io', 'name': 'v3io'}}, {'volume': {'flexVolume': {'driver': 'v3io/fuse', 'options': {'accessKey': '142a98fa-bef9-4095-b2d0-cab733f53238'}}, 'name': 'v3io'}, 'volumeMount': {'mountPath': '/User', 'name': 'v3io', 'subPath': 'users/admin'}}]}}}, 'time': 1620314995613.7964}
> 2021-05-06 15:30:03,749 [info] function deployed, address=default-tenant.app.yh30.iguazio-c0.com:31610
###Markdown
Test the function by sending data to the HTTP endpoint
###Code
test_data = {'patient_id': '838-21-8151',
'bad': 38,
'department': '01e9fe31-76de-45f0-9aed-0f94cc97bca0',
'room': 1,
'hr': 220.0,
'hr_is_error': True,
'rr': 5,
'rr_is_error': True,
'spo2': 85,
'spo2_is_error': True,
'movements': 0.0,
'movements_is_error': True,
'turn_count': 0.0,
'turn_count_is_error': True,
'is_in_bed': 1,
'is_in_bed_is_error': False,
'timestamp': 1606843455.906352
}
import requests
import json
response = requests.post(server, json=test_data)
response.text
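# Optional sanity check (not in the original notebook): a 2xx status code means the
# ingestion service accepted the event
# assert response.status_code == 200, response.text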
###Output
_____no_output_____
###Markdown
Ingest labels Finally, we define the label data; this will be useful in the next notebook, where we train a model. Create Labels Set
###Code
# Define labels metric from the early sense error data
error_columns = [c for c in early_sense_df.columns if 'error' in c]
labels = early_sense_df.loc[:, ['patient_id', 'timestamp'] + error_columns]
labels['label'] = labels.apply(lambda x: sum([x[c] for c in error_columns])>(len(error_columns)*0.7), axis=1)
labels.to_parquet(data_path + 'labels.parquet')
#labels_df = pd.read_parquet('labels.parquet')
labels_set = fstore.FeatureSet("labels", entities=[fstore.Entity("patient_id")], timestamp_key='timestamp',
description="training labels")
labels_set.set_targets()
df = fstore.preview(labels_set, data_path + 'labels.parquet')
df.head()
df = fstore.ingest(labels_set, data_path + 'labels.parquet')
labels_set.save()
###Output
_____no_output_____
###Markdown
Part 1: Data Ingestion In this notebook we will learn how to **Ingest** different data sources to our **Feature Store**. Specifically, this type of patient data has been used successfully to help treat hospitalized COVID-19 patients before their condition becomes severe or critical. To do this we will use a medical dataset which includes three types of data: - **Healthcare systems**: Batch-updated dataset containing different lab test results (e.g., blood test results).- **Patient Records**: Static dataset containing general patient details.- **Real-time sensors**: Real-time patient metric monitoring sensor. We will walk through the creation of an ingestion pipeline for each data source with all the needed preprocessing and validation. We will run the pipeline locally within the notebook and then launch a real-time function to **ingest live data** or schedule a cron to run the task when needed. Environment Setup Since our work is done within this project's scope, first define the project itself for all our MLRun work in this notebook.
###Code
import mlrun
project, _ = mlrun.set_environment(project='fsdemo', user_project=True)
def move_timestamps(df, shift='0s'):
''' Update timestamps to current time so we can see live aggregations '''
now = pd.to_datetime('now')
max_time = df['timestamp'].max()
time_shift = now-max_time
tmp_df = df.copy()
tmp_df['timestamp'] = tmp_df['timestamp'].apply(lambda t: t + time_shift + pd.to_timedelta(shift))
return tmp_df
###Output
_____no_output_____
###Markdown
Create Ingestion Pipeline With MLRun In this section we will ingest the lab measurements data using MLRun and Storey. Storey is the underlying implementation of the feature store which is used by MLRun. It is the engine that allows you to define and execute complex graphs that create the feature engineering pipeline. With Storey, you can define sources, transformations and targets; many actions are available as part of the Storey library, but you can easily define additional actions. We will see these custom actions in later sections. For the execution, it is also possible to use Spark. The main difference between Storey and Spark pipelines is that Storey blocks are built for real-time workloads while Spark is more batch oriented. We will now do the following:- Create the `measurements` FeatureSet- Define Preprocessing graph including aggregations- Ingest the data using the defined pipeline
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fstore
# Import MLRun's Data Sources to set the wanted ingestion pipeline
from mlrun.datastore.sources import CSVSource, ParquetSource, HttpSource
# Import storey so it will be available on our scope
# when testing the pipeline
import storey
# Define the Lab Measurements FeatureSet
measurements_set = fstore.FeatureSet("measurements",
entities=[fstore.Entity("patient_id")],
timestamp_key='timestamp',
description="various patient health measurements")
# Get FeatureSet computation graph
measurements_graph = measurements_set.graph
###Output
_____no_output_____
###Markdown
Define the processing pipeline- Transformation function- Sliding window aggregation- Set targets (NoSQL and Parquet)
###Code
# Import pandas and load the sample CSV and load it as a datasource
# for our ingestion
import pandas as pd
measurements_df = pd.read_csv('https://s3.wasabisys.com/iguazio/data/patients/measurements.csv', index_col=0)
measurements_df['timestamp'] = pd.to_datetime(measurements_df['timestamp'])
measurements_df['timestamp'] = measurements_df['timestamp'].astype("datetime64[ms]")
measurements_df = pd.concat([move_timestamps(measurements_df, '-1h'), move_timestamps(measurements_df)]) # update timestamps
###Output
_____no_output_____
###Markdown
Take a look at the measurements dataset. This dataset includes a single measurement per row. The measurement type is defined by the `source` and `parameter` columns. We would like to transform this data, so each patient has multiple measurement columns. To do that, we will need to create a new column for each `source` and `parameter` combination. For example, if `source` is 3 and `parameter` is 0, then our transformed dataset will have the measurement value in a new feature named `sp_3_0`. Following that, we will create a sliding window aggregation that averages the values across that time window.
###Code
measurements_df.head()
###Output
_____no_output_____
###Markdown
The following code performs the transformation, adds the aggregation and sets the target to store the values to a NoSQL database for online retrieval and parquet files for batch processing.
###Code
# Define transform to create sparse dataset for aggregation
# adding an extra column for the specific source-parameter pair's measurement
# ex: source=3, parameter=4, measurement=100 -> add extra column sp_3_4=100
def transform(event):
event["_".join(['sp', str(event["source"]), str(event["parameter"])])] = event["measurement"]
return event
# Define Measurement FeatureSet pipeline
measurements_graph.to(
"storey.Map", _fn="transform"
)
# Get the available source, parameter pairs for our aggregation
sps = list(measurements_df.apply(lambda x: '_'.join(['sp', str(x['source']), str(x['parameter'])]), axis=1).unique())
# Add aggregations on top of the created sparse
# features by the transform function
for col in sps:
measurements_set.add_aggregation(name=f'agg_{col}',
column=col,
operations=['avg'],
windows='1h',
period='30m')
# Add default (NoSQL via KV and Parquet) targets to save
# the ingestion results to
measurements_set.set_targets()
###Output
_____no_output_____
###Markdown
You can plot the graph to visualize the pipeline:
###Code
# Plot the ingestion pipeline we defined
measurements_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Run ingestion task using MLRun & Storey In order to ingest the dataframe into the feature set, use the `ingest` function.
###Code
# Use our loaded DF as the datasource and ingest it through
# the defined pipeline
resp = fstore.ingest(measurements_set, measurements_df,
infer_options=fstore.InferOptions.default())
resp.head()
# Save the FeatureSet and pipeline definition
measurements_set.save()
###Output
_____no_output_____
###Markdown
Ingest Patient Details Features In this section we will use MLRun to create our patient details datasource. We will do the following:- Create a `patient_details` FeatureSet- Add preprocessing transformations to the pipeline - Map ages to buckets and One Hot Encode them - Impute missing values- Test the processing pipeline with sample data- Run ingestion pipeline on top of the cluster Create the FeatureSet
###Code
# add feature set without time column (stock ticker metadata)
patients_set = fstore.FeatureSet("patient_details", entities=[fstore.Entity("patient_id")],
description="personal and medical patient details")
# Get FeatureSet computation graph
graph = patients_set.spec.graph
###Output
_____no_output_____
###Markdown
Define the computation pipeline
###Code
# Define age buckets for our age value mapping
personal_details = {'age': {'ranges': [{'range': [0, 3], "value": "toddler"},
{'range': [3, 18], "value": "child"},
{'range': [18, 65], "value": "adult"},
{'range': [65, 120], "value": "elder"}]}}
# Define one hot encoding values map
one_hot_encoder_mapping = {'age_mapped': ['toddler', 'child', 'adult', 'elder']}
# Import MLRun's FeatureStore steps for easy
# use in our pipeline
from mlrun.feature_store.steps import *
# Define the pipeline for our FeatureSet
graph.to(MapValues(mapping=personal_details, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))\
.to(Imputer(method='values', default_value=1, mapping={}))
# Add default NoSQL & Parquet ingestion targets
patients_set.set_targets()
# Plot the FeatureSet pipeline
patients_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Test the Feature transformation pipeline Creating a transformation pipeline requires some trial and error. Therefore, it is useful to run the pipeline in memory without storing the resultant data. For this purpose, `preview` is used. This function receives as input any sample DataFrame, performs all the graph steps and outputs the transformed DataFrame.
###Code
# Load the sample patient details data
patients_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
# Run local ingestion test
fstore.preview(patients_set, patients_df.head())
###Output
_____no_output_____
###Markdown
Save the FeatureSet and run full ingestion taskOnce you are satisfied with the transformation pipeline, ingest that full DataFrame and store the data.
###Code
# Save the FeatureSet
patients_set.save()
# Run Ingestion task
resp = fstore.ingest(patients_set, patients_df,
infer_options=fstore.InferOptions.default())
###Output
_____no_output_____
###Markdown
Start Immediate or Scheduled Ingestion Job (over Kubernetes) Another useful method to ingest data is by creating a Kubernetes job. This may be necessary to process large amounts of data as well as to process any recurring data. With MLRun it is easy to take the pipeline and run it as a job. This is done by:1. Define a source, specifically here we define a parquet file source2. Define a configuration where `local` is set to `False`3. Mount to the provisioned storage by calling `auto_mount`4. Run `ingest` with the source and run configuration
###Code
source = ParquetSource('pq', 'https://s3.wasabisys.com/iguazio/data/patients/patient_details.parquet')
config = fstore.RunConfig(local=False).apply(mlrun.platforms.auto_mount())
fstore.ingest(patients_set, source, run_config=config)
###Output
> 2021-07-12 13:51:19,037 [info] starting run patient_details_ingest uid=5da83655a87c492eaa1065eb2b5ca501 DB=http://mlrun-api:8080
> 2021-07-12 13:51:19,134 [info] Job is running in the background, pod: patient-details-ingest-btft8
> 2021-07-12 13:51:24,554 [info] starting ingestion task to store://feature-sets/fsdemo-iguazio/patient_details:latest
> 2021-07-12 13:51:27,785 [info] ingestion task completed, targets:
> 2021-07-12 13:51:27,785 [info] [{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-iguazio/FeatureStore/patient_details/parquet/sets/patient_details-latest', 'status': 'created', 'updated': '2021-07-12T13:51:26.477162+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-iguazio/FeatureStore/patient_details/nosql/sets/patient_details-latest', 'status': 'created', 'updated': '2021-07-12T13:51:26.477760+00:00'}]
> 2021-07-12 13:51:27,811 [info] run executed, status=completed
final state: completed
###Markdown
Real-time Early-Sense Sensor Ingestion (HTTP or Stream Processing With Nuclio) In this section we will use MLRun to create our Early Sense Sensor datasource. We will do the following:- Create early sense FeatureSet- Add Preprocessing transformations to the Pipeline using custom functions - Drop and Rename columns - Aggregations- Add Feature Validator to detect bad sensor readings- Test the processing pipeline with sample data- Deploy the FeatureSet ingestion service as a live rest endpoint
###Code
early_sense_set = fstore.FeatureSet("early_sense", entities=[fstore.Entity("patient_id")], timestamp_key='timestamp',
description="real time patient bed sensor data")
###Output
_____no_output_____
###Markdown
Define data validation & quality policyWe can define validations on the feature level. For example, define here validation to check if the heart-rate value is between 0 and 220 and respiratory rate is between 0 and 25.
###Code
from mlrun.features import MinMaxValidator
early_sense_set["hr"] = fstore.Feature(validator = MinMaxValidator(min=0, max=220, severity="info"))
early_sense_set["rr"] = fstore.Feature(validator = MinMaxValidator(min=0, max=25, severity="info"))
###Output
_____no_output_____
###Markdown
Define custom processing classes In the previous sections we used transformation steps that are available as part of Storey. Here we show how to create custom transformation classes. We will later run these functions as part of a Nuclio serverless real-time function; therefore, we also use the `start-code` and `end-code` comment markers.
###Code
# mlrun: start-code
# We will import storey here too so it will
# be included in our function code (within the nuclio comment block)
import json
import storey
from typing import List, Dict
# The custom functions are based on `storey.MapClass`
# when they are called in the graph the `do(self, event)`
# function will be activated.
# A to_dict(self) function is also required by MLRun
# to allow the class creation on remote functions
class DropColumns(storey.MapClass):
def __init__(self, columns: List[str], **kwargs):
super().__init__(**kwargs)
self.columns = columns
def do(self, event):
for col in self.columns:
if col in event:
del event[col]
return event
def to_dict(self):
return {
"class_name": "DropColumns",
"name": self.name or "DropColumns",
"class_args": {
"columns": self.columns
},
}
class RenameColumns(storey.MapClass):
def __init__(self, mapping: Dict[str, str], **kwargs):
super().__init__(**kwargs)
self.mapping = mapping
def do(self, event):
for old_col, new_col in self.mapping.items():
try:
event[new_col] = event.pop(old_col)
except Exception as e:
print(f'{old_col} doesnt exist')
return event
def to_dict(self):
return {
"class_name": "RenameColumns",
"name": self.name or "RenameColumns",
"class_args": {"mapping": self.mapping},
}
# mlrun: end-code
###Output
_____no_output_____
###Markdown
Define the Real-Time PipelineDefine the transformation pipeline below. This is done just like the previous sections.
###Code
# Configure the list of columns to drop from
# the raw data
drop_columns = ['hr_is_error',
'rr_is_error',
'spo2_is_error',
'movements_is_error',
'turn_count_is_error',
'is_in_bed_is_error']
# Define the computational graph including our custom functions
early_sense_set.graph.to(DropColumns(drop_columns), after='start')\
.to(RenameColumns(mapping={'bad': 'bed'}))
# Add real-time aggregations on top of our sensor readings
for col in ['hr', 'rr', 'spo2', 'movements', 'turn_count']:
early_sense_set.add_aggregation(col + "_h", col, ['avg', 'max', 'min'], "1h")
early_sense_set.add_aggregation(col + "_d", col, ['avg', 'max', 'min'], "1d")
early_sense_set.add_aggregation('in_bed_h', 'is_in_bed', ['avg'], "1h")
early_sense_set.add_aggregation('in_bed_d', 'is_in_bed', ['avg'], "1d")
# Set NoSQL and Parquet default targets
early_sense_set.set_targets()
# Plot the pipeline
early_sense_set.plot(rankdir='LR', with_targets=True)
###Output
_____no_output_____
###Markdown
Test/debug the real-time pipeline locally in the notebook
###Code
# infer schema + stats, show the final feature set (after the data pipeline)
early_sense_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/early_sense_v2.parquet')
early_sense_df['timestamp'] = pd.to_datetime(early_sense_df['timestamp'])
early_sense_df = move_timestamps(early_sense_df) # update timestamps
fstore.preview(early_sense_set, early_sense_df.head())
# Run ingest pipeline
df=fstore.ingest(early_sense_set, early_sense_df)
# Save the early-sense Featureset
early_sense_set.save()
# print the FeatureSet spec
print(early_sense_set.status.targets.to_dict())
###Output
[{'name': 'parquet', 'kind': 'parquet', 'path': 'v3io:///projects/fsdemo-iguazio/FeatureStore/early_sense/parquet/sets/early_sense-latest', 'status': 'created', 'updated': '2021-07-12T13:52:09.109041+00:00'}, {'name': 'nosql', 'kind': 'nosql', 'path': 'v3io:///projects/fsdemo-iguazio/FeatureStore/early_sense/nosql/sets/early_sense-latest', 'status': 'created', 'updated': '2021-07-12T13:52:09.109597+00:00'}]
###Markdown
Deploy as Real-Time Stream Processing Function (Nuclio Serverless) Features are not static. For example, it is common that features include different aggregations that need to be updated as data continues to flow. A real-time pipeline requires this data to be up to date. Therefore, we need a convenient way to ingest data, not just in batch, but per specific input. MLRun can convert any code to a real-time serverless function, including the pipeline. This is done by performing the following steps:1. Define a source, in this case it's an HTTP source2. Convert the previously defined code to a serving function3. Create a configuration to run the function4. Deploy an ingestion service with the Featureset, source and the configuration
###Code
# Set a new HTTPSource, this will tell our ingestion service
# to setup a Nuclio function to act as the rest endpoint
# on which we will receive the data
source = HttpSource(key_field='patient_id', time_field='timestamp')
# Take the relevant code parts from this notebook and create
# an MLRun function from them so we can run the pipeline
# as a Nuclio function
func = mlrun.code_to_function("ingest", kind="serving")
nuclio_config = fstore.RunConfig(function=func, local=False).apply(mlrun.platforms.auto_mount())
# Deploy the Online ingestion service using the pipeline definition from before
# with our new HTTP Source and our defined function
server = fstore.deploy_ingestion_service(early_sense_set, source, run_config=nuclio_config)
###Output
> 2021-07-12 13:53:06,632 [info] Starting remote function deploy
2021-07-12 13:53:06 (info) Deploying function
2021-07-12 13:53:06 (info) Building
2021-07-12 13:53:06 (info) Staging files and preparing base images
2021-07-12 13:53:07 (info) Building processor image
2021-07-12 13:53:08 (info) Build complete
> 2021-07-12 13:53:16,889 [info] function deployed, address=default-tenant.app.dev65.lab.iguazeng.com:31969
###Markdown
Test the function by sending data to the HTTP endpoint
###Code
test_data = {'patient_id': '838-21-8151',
'bad': 38,
'department': '01e9fe31-76de-45f0-9aed-0f94cc97bca0',
'room': 1,
'hr': 220.0,
'hr_is_error': True,
'rr': 5,
'rr_is_error': True,
'spo2': 85,
'spo2_is_error': True,
'movements': 0.0,
'movements_is_error': True,
'turn_count': 0.0,
'turn_count_is_error': True,
'is_in_bed': 1,
'is_in_bed_is_error': False,
'timestamp': 1606843455.906352
}
import requests
import json
response = requests.post(server, json=test_data)
response.text
###Output
_____no_output_____
###Markdown
Ingest labels Finally, we define the label data; this will be useful in the next notebook, where we train a model. Create Labels Set
###Code
labels_df = pd.read_parquet('https://s3.wasabisys.com/iguazio/data/patients/labels.parquet')
labels_df['timestamp'] = pd.to_datetime(labels_df['timestamp'])
labels_df = move_timestamps(labels_df) # update timestamps
labels_set = fstore.FeatureSet("labels", entities=[fstore.Entity("patient_id")], timestamp_key='timestamp',
description="training labels")
labels_set.set_targets()
df = fstore.preview(labels_set, labels_df)
df.head()
df = fstore.ingest(labels_set, labels_df)
labels_set.save()
###Output
_____no_output_____
###Markdown
Part 1: Data Ingestion This demo showcases financial fraud prevention and using the MLRun feature store to define complex features that help identify fraud. Fraud prevention specifically is a challenge as it requires processing raw transactions and events in real time and being able to quickly respond and block transactions before they occur. To address this, we create a development pipeline and a production pipeline. Both pipelines share the same feature engineering and model code, but serve data very differently. Furthermore, we automate the data and model monitoring process, identify drift and trigger retraining in a CI/CD pipeline. This process is described in the diagram below. The raw data is described as follows:

| TRANSACTIONS | | ║ | USER EVENTS | |
|-----------------|----------------------------------------------------------------|----------|-----------------|----------------------------------------------------------------|
| **age** | age group value 0-6. Some values are marked as U for unknown | ║ | **source** | The party/entity related to the event |
| **gender** | A character to define the gender | ║ | **event** | event, such as login or password change |
| **zipcodeOri** | ZIP code of the person originating the transaction | ║ | **timestamp** | The date and time of the event |
| **zipMerchant** | ZIP code of the merchant receiving the transaction | ║ | | |
| **category** | category of the transaction (e.g., transportation, food, etc.) | ║ | | |
| **amount** | the total amount of the transaction | ║ | | |
| **fraud** | whether the transaction is fraudulent | ║ | | |
| **timestamp** | the date and time in which the transaction took place | ║ | | |
| **source** | the ID of the party/entity performing the transaction | ║ | | |
| **target** | the ID of the party/entity receiving the transaction | ║ | | |
| **device** | the device ID used to perform the transaction | ║ | | |

This notebook introduces how to **Ingest** different data sources to the **Feature Store**. The following FeatureSets will be created:
- **Transactions**: Monetary transactions between a source and a target.
- **Events**: Account events such as account login or a password change.
- **Label**: Fraud label for the data.

By the end of this tutorial you’ll learn how to:
- Create an ingestion pipeline for each data source.
- Define preprocessing, aggregation and validation of the pipeline.
- Run the pipeline locally within the notebook.
- Launch a real-time function to ingest live data.
- Schedule a cron to run the task when needed.
###Code
project_name = 'fraud-demo'
import mlrun
# Initialize the MLRun project object
project = mlrun.get_or_create_project(project_name, context="./", user_project=True)
###Output
> 2022-01-10 21:27:08,455 [warning] Failed resolving version info. Ignoring and using defaults
> 2022-01-10 21:27:11,778 [warning] Server or client version is unstable. Assuming compatible: {'server_version': '0.9.1', 'client_version': '0.0.0+unstable'}
> 2022-01-10 21:27:11,813 [info] created and saved project fraud-demo
###Markdown
Step 1 - Fetch, Process and Ingest our datasets 1.1 - Transactions Transactions
###Code
# Helper functions to adjust the timestamps of our data
# while keeping the order of the selected events and
# the relative distance from one event to the other
def date_adjustment(sample, data_max, new_max, old_data_period, new_data_period):
'''
Adjust a specific sample's date according to the original and new time periods
'''
sample_dates_scale = ((data_max - sample) / old_data_period)
sample_delta = new_data_period * sample_dates_scale
new_sample_ts = new_max - sample_delta
return new_sample_ts
def adjust_data_timespan(dataframe, timestamp_col='timestamp', new_period='2d', new_max_date_str='now'):
'''
Adjust the dataframe timestamps to the new time period
'''
# Calculate old time period
data_min = dataframe.timestamp.min()
data_max = dataframe.timestamp.max()
old_data_period = data_max-data_min
# Set new time period
new_time_period = pd.Timedelta(new_period)
new_max = pd.Timestamp(new_max_date_str)
new_min = new_max-new_time_period
new_data_period = new_max-new_min
# Apply the timestamp change
df = dataframe.copy()
df[timestamp_col] = df[timestamp_col].apply(lambda x: date_adjustment(x, data_max, new_max, old_data_period, new_data_period))
return df
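# How the rescaling works (illustrative): a sample equal to the oldest timestamp gets
# sample_dates_scale=1 and lands on new_max - new_data_period (= new_min), a sample equal
# to the newest timestamp gets sample_dates_scale=0 and lands on new_max, and everything
# in between keeps its relative position within the new, shorter time period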
import pandas as pd
# Fetch the transactions dataset from the server
transactions_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/data.csv', parse_dates=['timestamp'])
# Adjust the samples timestamp for the past 2 days
transactions_data = adjust_data_timespan(transactions_data.sample(50000), new_period='2d')
# Preview
transactions_data.head(3)
###Output
_____no_output_____
###Markdown
Transactions - Create a FeatureSet and Preprocessing PipelineCreate the FeatureSet (data pipeline) definition for the **credit transaction processing** which describes the offline/online data transformations and aggregations.The feature store will automatically add an offline `parquet` target and an online `NoSQL` target by using `set_targets()`.The data pipeline consists of:* **Extracting** the data components (hour, day of week)* **Mapping** the age values* **One hot encoding** for the transaction category and the gender* **Aggregating** the amount (avg, sum, count, max over 2/12/24 hour time windows)* **Aggregating** the transactions per category (over 14 days time windows)* **Writing** the results to **offline** (Parquet) and **online** (NoSQL) targets
###Code
# Import MLRun's Feature Store
import mlrun.feature_store as fstore
from mlrun.feature_store.steps import OneHotEncoder, MapValues, DateExtractor
# Define the transactions FeatureSet
transaction_set = fstore.FeatureSet("transactions",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="transactions feature set")
# Define and add value mapping
main_categories = ["es_transportation", "es_health", "es_otherservices",
"es_food", "es_hotelservices", "es_barsandrestaurants",
"es_tech", "es_sportsandtoys", "es_wellnessandbeauty",
"es_hyper", "es_fashion", "es_home", "es_contents",
"es_travel", "es_leisure"]
# One Hot Encode the newly defined mappings
one_hot_encoder_mapping = {'category': main_categories,
'gender': list(transactions_data.gender.unique())}
# Define the graph steps
transaction_set.graph\
.to(DateExtractor(parts = ['hour', 'day_of_week'], timestamp_col = 'timestamp'))\
.to(MapValues(mapping={'age': {'U': '0'}}, with_original_features=True))\
.to(OneHotEncoder(mapping=one_hot_encoder_mapping))
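# In effect, each incoming transaction gets:
#   - hour and day_of_week features extracted from its timestamp,
#   - age 'U' (unknown) mapped to '0' while keeping the original column,
#   - category and gender expanded into indicator columns (e.g. category_es_transportation),
#     which the per-category aggregations below count over a 14 day window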
# Add aggregations for 2, 12, and 24 hour time windows
transaction_set.add_aggregation(name='amount',
column='amount',
operations=['avg','sum', 'count','max'],
windows=['2h', '12h', '24h'],
period='1h')
# Add the category aggregations over a 14 day window
for category in main_categories:
transaction_set.add_aggregation(name=category,column=f'category_{category}',
operations=['count'], windows=['14d'], period='1d')
# Add default (offline-parquet & online-nosql) targets
transaction_set.set_targets()
# Plot the pipeline so we can see the different steps
transaction_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
###Markdown
Transactions - Ingestion
###Code
# Ingest our transactions dataset through our defined pipeline
transactions_df = fstore.ingest(transaction_set, transactions_data,
infer_options=fstore.InferOptions.default())
transactions_df.head(3)
###Output
_____no_output_____
###Markdown
1.2 - User Events User Events - Fetching
###Code
# Fetch our user_events dataset from the server
user_events_data = pd.read_csv('https://s3.wasabisys.com/iguazio/data/fraud-demo-mlrun-fs-docs/events.csv',
index_col=0, quotechar="\'", parse_dates=['timestamp'])
# Adjust to the last 2 days to see the latest aggregations in our online feature vectors
user_events_data = adjust_data_timespan(user_events_data, new_period='2d')
# Preview
user_events_data.head(3)
###Output
_____no_output_____
###Markdown
User Events - Create a FeatureSet and Preprocessing Pipeline Now we will define the events feature set. This is a fairly straightforward pipeline in which we only one-hot encode the event categories and save the data to the default targets.
###Code
user_events_set = fstore.FeatureSet("events",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="user events feature set")
# Define and add value mapping
events_mapping = {'event': list(user_events_data.event.unique())}
# One Hot Encode
user_events_set.graph.to(OneHotEncoder(mapping=events_mapping))
# Add default (offline-parquet & online-nosql) targets
user_events_set.set_targets()
# Plot the pipeline so we can see the different steps
user_events_set.plot(rankdir="LR", with_targets=True)
###Output
_____no_output_____
###Markdown
User Events - Ingestion
###Code
# Ingestion of our newly created events feature set
events_df = fstore.ingest(user_events_set, user_events_data)
events_df.head(3)
###Output
_____no_output_____
###Markdown
Step 2 - Create a labels dataset for model training Label Set - Create a FeatureSet This feature set contains the label for the fraud demo; it will be ingested directly into the default targets without any changes.
###Code
def create_labels(df):
labels = df[['fraud','source','timestamp']].copy()
labels = labels.rename(columns={"fraud": "label"})
labels['timestamp'] = labels['timestamp'].astype("datetime64[ms]")
labels['label'] = labels['label'].astype(int)
labels.set_index('source', inplace=True)
return labels
# Define the "labels" feature set
labels_set = fstore.FeatureSet("labels",
entities=[fstore.Entity("source")],
timestamp_key='timestamp',
description="training labels",
engine="pandas")
labels_set.graph.to(name="create_labels", handler=create_labels)
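# With engine="pandas" the create_labels handler receives the ingested data as a whole
# pandas DataFrame (rather than event by event as with the default storey engine)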
# specify only Parquet (offline) target since its not used for real-time
labels_set.set_targets(['parquet'], with_defaults=False)
labels_set.plot(with_targets=True)
###Output
_____no_output_____
###Markdown
Label Set - Ingestion
###Code
# Ingest the labels feature set
labels_df = fstore.ingest(labels_set, transactions_data)
labels_df.head(3)
###Output
_____no_output_____
###Markdown
Step 3 - Deploy a real-time pipeline When dealing with real-time aggregation, it's important to be able to update these aggregations in real time. For this purpose, we will create live serving functions that will update the online feature store of the `transactions` FeatureSet and `Events` FeatureSet. Using MLRun's `serving` runtime, we create a nuclio function loaded with our feature set's computational graph definition and an `HttpSource` to define the HTTP trigger. Notice that the implementation below does not require any rewrite of the pipeline logic.
###Code
# Create iguazio v3io stream and transactions push API endpoint
transaction_stream = f'v3io:///projects/{project.name}/streams/transaction'
transaction_pusher = mlrun.datastore.get_stream_pusher(transaction_stream)
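# (Not used below; API assumption) the pusher can also be used to push records straight
# into the v3io stream, e.g. transaction_pusher.push(record_dict)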
# Define the source stream trigger (use v3io streams)
# we will define the `key` and `time` fields (extracted from the Json message).
source = mlrun.datastore.sources.StreamSource(path=transaction_stream , key_field='source', time_field='timestamp')
# Deploy the transactions feature set's ingestion service over a real-time (Nuclio) serverless function
# you can use the run_config parameter to pass function/service specific configuration
transaction_set_endpoint = fstore.deploy_ingestion_service(featureset=transaction_set, source=source)
###Output
> 2022-01-10 22:08:17,147 [info] Starting remote function deploy
2022-01-10 22:08:17 (info) Deploying function
2022-01-10 22:08:17 (info) Building
2022-01-10 22:08:18 (info) Staging files and preparing base images
2022-01-10 22:08:18 (info) Building processor image
2022-01-10 22:08:20 (info) Build complete
2022-01-10 22:08:25 (info) Function deploy complete
> 2022-01-10 22:08:26,697 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-fraud-demo-admin-transactions-ingest.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['fraud-demo-admin-transactions-ingest-fraud-demo-admin.default-tenant.app.yh41.iguazio-cd1.com/']}
###Markdown
Transactions - Test the feature set HTTP endpoint By defining our `transactions` feature set we can now use MLRun and Storey to deploy it as a live endpoint, ready to ingest new data! Using MLRun's `serving` runtime, we created a nuclio function loaded with our feature set's computational graph definition and an `HttpSource` to define the HTTP trigger.
###Code
import requests
import json
# Select a sample from the dataset and serialize it to JSON
transaction_sample = json.loads(transactions_data.sample(1).to_json(orient='records'))[0]
transaction_sample['timestamp'] = str(pd.Timestamp.now())
transaction_sample
# Post the sample to the ingestion endpoint
requests.post(transaction_set_endpoint, json=transaction_sample).text
###Output
_____no_output_____
###Markdown
3.2 - User Events User Events - Deploy our FeatureSet live endpointDeploy the events feature set's ingestion service using the feature set and all the previously defined resources.
###Code
# Create iguazio v3io stream and transactions push API endpoint
events_stream = f'v3io:///projects/{project.name}/streams/events'
events_pusher = mlrun.datastore.get_stream_pusher(events_stream)
# Define the source stream trigger (use v3io streams)
# we will define the `key` and `time` fields (extracted from the Json message).
source = mlrun.datastore.sources.StreamSource(path=events_stream , key_field='source', time_field='timestamp')
# Deploy the transactions feature set's ingestion service over a real-time (Nuclio) serverless function
# you can use the run_config parameter to pass function/service specific configuration
events_set_endpoint = fstore.deploy_ingestion_service(featureset=user_events_set, source=source)
###Output
> 2022-01-10 22:10:02,576 [info] Starting remote function deploy
2022-01-10 22:10:03 (info) Deploying function
2022-01-10 22:10:03 (info) Building
2022-01-10 22:10:04 (info) Staging files and preparing base images
2022-01-10 22:10:04 (info) Building processor image
2022-01-10 22:10:06 (info) Build complete
2022-01-10 22:10:11 (info) Function deploy complete
> 2022-01-10 22:10:12,856 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-fraud-demo-admin-events-ingest.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['fraud-demo-admin-events-ingest-fraud-demo-admin.default-tenant.app.yh41.iguazio-cd1.com/']}
###Markdown
User Events - Test the feature set HTTP endpoint
###Code
# Select a sample from the events dataset and serialize it to JSON
user_events_sample = json.loads(user_events_data.sample(1).to_json(orient='records'))[0]
user_events_sample['timestamp'] = str(pd.Timestamp.now())
user_events_sample
# Post the sample to the ingestion endpoint
requests.post(events_set_endpoint, json=user_events_sample).text
###Output
_____no_output_____ |
Ensemble.ipynb | ###Markdown
Grab external data
###Code
img_ids = []
labels = []
for img in os.listdir('data/Abnormal/'):
img_ids.append(os.path.join('Abnormal', img))
labels.append([0, 1, 0, 0, 0])
for img in os.listdir('data/ETIS-LaribPolypDB/'):
img_ids.append(os.path.join('ETIS-LaribPolypDB/', img))
labels.append([0, 0, 0, 0, 1])
for img in os.listdir('data/Kvasir-SEG/images/'):
img_ids.append(os.path.join('Kvasir-SEG/images/', img))
labels.append([0, 0, 0, 0, 1])
ext_df = pd.concat([pd.Series(img_ids), pd.DataFrame(labels)], axis=1)
ext_df.columns = ['img', 'BE', 'suspicious', 'HGD', 'cancer', 'polyp']
ext_df.to_csv('./data/external_data.csv')
###Output
_____no_output_____
###Markdown
Split original data
###Code
imgs_dir = "data/originalImages/"
masks_dir = "data/masks/"
classes = [
"BE",
"suspicious",
"HGD",
"cancer",
"polyp",
]
img_labels = []
img_ids = []
for img in os.listdir(imgs_dir):
img_ids.append(img)
img_path = os.path.join(imgs_dir, img)
img_label = []
for cls in classes:
mask_path = os.path.join(masks_dir, img.replace(".jpg", f"_{cls}.tif"))
if os.path.exists(mask_path):
img_label.append(1)
else:
img_label.append(0)
img_labels.append(img_label)
df = pd.concat([pd.Series(img_ids),
pd.DataFrame(img_labels)], axis=1)
df.columns = ["img"] + classes
for cls in classes:
print(f"Class {cls} - num. samples {df[cls].value_counts()[0]}")
NUM_FOLDS = 5
SEED = 2709
iterkfold = IterativeStratification(n_splits=5, random_state=SEED)
x, y = df.iloc[:, 0].values, df.iloc[:, 1:].values
for i, (train, test) in enumerate(iterkfold.split(x, y)):
print(x[train].shape, x[test].shape)
df.loc[train].to_csv(f"data/train_fold{i}.csv", index=False)
df.loc[test].to_csv(f"data/valid_fold{i}.csv", index=False)
###Output
_____no_output_____
###Markdown
Search thresholds
###Code
seg_threshold = [0.5] * 5
_grid_thresholds = np.linspace(0.1, 0.6, 100)
classes = [
"BE",
"suspicious",
"HGD",
"cancer",
"polyp"
]
models = ['b4_unet', 'b3_unet', 'resnet50_fpn']
def search_threshold(inputs, targets,
grid_thresholds=np.linspace(0.1, 0.6, 100),
metric_func=f1_score):
num_classes = inputs.shape[1]
best_cls_thresholds = []
for i in range(num_classes):
class_inp = inputs[:, i]
class_tar = targets[:, i]
grid_scores = []
        for thresh in grid_thresholds:  # use the function argument, not the global _grid_thresholds
grid_scores.append(metric_func(class_tar, class_inp > thresh))
best_t = grid_thresholds[np.argmax(grid_scores)]
best_score = np.max(grid_scores)
best_cls_thresholds.append(best_t)
return best_cls_thresholds
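# Usage sketch (added): `search_threshold` expects (n_samples, n_classes) arrays of predicted
# probabilities and binary targets and returns one threshold per class. It is not called in the
# code below, so this illustrative call uses hypothetical arrays and assumes f1_score is imported:
# image_probs   = np.random.rand(100, 5)               # hypothetical predicted probabilities
# image_targets = (np.random.rand(100, 5) > 0.5) * 1   # hypothetical binary labels
# per_class_thresholds = search_threshold(image_probs, image_targets, metric_func=f1_score)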
ens_mask_output_list = []
valid_mask_list = []
for seg_weights in [
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.],
# [0.5, 0.4, 0.1],
# [0.6, 0.3, 0.1],
# [0.7, 0.2, 0.1],
[1./3, 1./3, 1./3]
]:
for i in range(5):
valid_mask = torch.load(f'thresholds_tuning/mask_{i}.pth')
ens_mask_output = 0
for model, w in zip(models, seg_weights):
if model == 'b4_unet':
model_output = torch.load(f'thresholds_tuning/{model}_fold{i}.pth')
model_output = F.interpolate(model_output, 384,
mode='bilinear', align_corners=False)
else:
model_output = torch.load(f'thresholds_tuning/{model}_fold{i}.pth')
ens_mask_output += model_output * w
ens_mask_output_list.append(ens_mask_output)
valid_mask_list.append(valid_mask)
ens_mask_output = torch.cat(ens_mask_output_list, 0)
valid_mask = torch.cat(valid_mask_list, 0)
dice_score = binary_dice_metric(ens_mask_output, valid_mask, seg_threshold).mean().item()
iou = binary_iou_metric(ens_mask_output, valid_mask, seg_threshold).mean().item()
print(f'\nWeights: {seg_weights} Dice score: {dice_score} - IoU: {iou}')
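# Note (added): `binary_dice_metric` / `binary_iou_metric` are assumed to be defined elsewhere in
# this project. A minimal sketch of a per-class dice metric over sigmoid logits could look like:
def _binary_dice_sketch(logits, targets, threshold=0.5, eps=1e-7):
    probs = torch.sigmoid(logits)
    preds = (probs > threshold).float()
    dims = tuple(range(2, preds.dim()))            # reduce over the spatial dimensions only
    inter = (preds * targets).sum(dim=dims)
    denom = preds.sum(dim=dims) + targets.sum(dim=dims)
    return (2 * inter + eps) / (denom + eps)       # shape: (batch, n_classes)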
seg_weights = [0.5, 0.4, 0.1]
num_folds = 5
ens_mask_pred = 0
for m, seg_w in zip(models, seg_weights):
single_mask_pred = 0
for f in range(num_folds): # folds
if m == "b4_unet":
single_mask_pred += F.interpolate(
torch.load(f'./thresholds_tuning/{m}_fold{f}_test.pth'),
384, align_corners=False, mode='bilinear'
) / num_folds
else:
single_mask_pred += torch.load(f'./thresholds_tuning/{m}_fold{f}_test.pth') / num_folds
ens_mask_pred += single_mask_pred * seg_w
ens_mask_pred = torch.where(ens_mask_pred!=0,
torch.sigmoid(ens_mask_pred), ens_mask_pred)
ens_mask_pred = torch.stack([
ens_mask_pred[:, i, ...] > th
for i, th in enumerate(best_seg_thresholds)], 1)
ens_mask_pred = ens_mask_pred.float()
for out, i, o_sz in zip(ens_mask_pred, img_id, orig_size):
out = F.interpolate(out.unsqueeze(0), o_sz,
mode="bilinear", align_corners=False)
out = out.squeeze(0)
out = out.cpu().numpy().astype(np.uint8) * 255
save_path = os.path.join(mask_pred_dir, i.replace(".jpg", ".tif"))
tiff.imwrite(save_path, out)
###Output
_____no_output_____
###Markdown
TRANQUANGDAT ATOMIC BOMB
###Code
best_seg_thresholds = [0.5] * 5
# models = ['rx101-x448', 'rx50-x384-iter-focal']
# seg_weights = [.7, .3]
# models = ['rx101-x448', 'rx50-x384-iter-focal', 'rx101-fpn']
# seg_weights = [.5, .3, .2]
models = ['rx101-x448', 'rx50-x384-iter-focal', 'rx101-fpn', 'b4-fpn']
seg_weights = [.45, .3, .2, .05]
out_dir = 'dattran2346_kfold/'
img_id = torch.load(f'{out_dir}test_img_ids.pth')
orig_size = torch.load(f'{out_dir}test_sizes.pth')
mask_pred_dir = out_dir
num_folds = 3
ens_mask_pred = 0
for m, seg_w in zip(models, seg_weights):
single_mask_pred = 0
for f in range(num_folds): # folds
if m == "rx101-x448" or m == "rx101-fpn" or m == "b4-fpn":
single_mask_pred += F.interpolate(
torch.load(f'{out_dir}{m}_test_{f}.pth'),
384, align_corners=False, mode='bilinear'
) / num_folds
else:
single_mask_pred += torch.load(f'{out_dir}{m}_test_{f}.pth') / num_folds
ens_mask_pred += single_mask_pred * seg_w
ens_mask_pred = torch.where(ens_mask_pred!=0,
torch.sigmoid(ens_mask_pred), ens_mask_pred)
ens_mask_pred = torch.stack([
ens_mask_pred[:, i, ...] > th
for i, th in enumerate(best_seg_thresholds)], 1)
ens_mask_pred = ens_mask_pred.float()
min_ins_ratio = 0.000927
min_art_ratio = 0.000293
min_sat_ratio = 0.000380
for out, i, o_sz in zip(ens_mask_pred, img_id, orig_size):
out = F.interpolate(out.unsqueeze(0), o_sz,
mode="bilinear", align_corners=False)
out = out.squeeze(0)
area = np.prod(out.shape[1:])
instrument_area = out[0].sum()
artefact_area = out[2].sum()
saturation_area = out[-1].sum()
if instrument_area > 0:
ins_ratio = instrument_area / area
if ins_ratio < min_ins_ratio: # less than min area in training set
print('Instrument ', i)
out[0] = 0
if artefact_area > 0:
art_ratio = artefact_area / area
if art_ratio < min_art_ratio:
print('Artefact ', i)
out[2] = 0
if saturation_area > 0:
sat_ratio = saturation_area / area
if sat_ratio < min_sat_ratio:
print('Saturation ', i)
out[-1] = 0
out = out.cpu().numpy().astype(np.uint8) * 255
save_path = os.path.join(mask_pred_dir, i+'.tif')
tiff.imwrite(save_path, out)
###Output
_____no_output_____
###Markdown
Search Segmentation threshold
###Code
# best_seg_thresholds = []
# for i in range(5): # 5 classes
# cls_out = ens_mask_output[:, i, ...].unsqueeze(1)
# cls_mask = valid_mask[:, i, ...].unsqueeze(1)
# _grid_dice_scores = []
# _grid_ious = []
# for thresh in _grid_thresholds:
# _grid_dice_scores.append(binary_dice_metric(cls_out, cls_mask, thresh).mean().item())
# _grid_ious.append(binary_iou_metric(cls_out, cls_mask, thresh).mean().item())
# best_t = _grid_thresholds[np.argmax(_grid_dice_scores)]
# # best_t = _grid_thresholds[np.argmax(_grid_ious)]
# best_dice = np.max(_grid_dice_scores)
# best_iou = np.max(_grid_ious)
# best_seg_thresholds.append(best_t)
# # for i in range(5):
# for i in range(1):
# valid_dice = binary_dice_metric(
# ens_mask_output, valid_mask, best_seg_thresholds)
# valid_iou = binary_iou_metric(
# ens_mask_output, valid_mask, best_seg_thresholds)
# print(f'Dice Score - Fold {i}: ', valid_dice.mean(0).mean(0).item())
# print(f'IoU - Fold {i}: ', valid_iou.mean(0).mean(0).item())
###Output
_____no_output_____
###Markdown
Results by seed combination — FL: seeds 20+300+321 → 0.83193277; seeds 50+300+321 → 0.834033. SL: seeds 20+300+321 → 0.731092; seeds 50+300+321 → 0.733193; seeds 20+50+300 → 0.73949.
###Code
def get_wrong_records(pred, levelFlag):
test = pd.read_csv("data/feedback/test_set.csv")
test['predLabel']=pred
test = test[test[levelFlag] != test['predLabel']]
return test
get_wrong_records(pred_ensemble_fl, fl)
get_wrong_records(pred_ensemble_sl, sl)
###Output
_____no_output_____
###Markdown
Ensemble This notebook ensembles three submissions based on three XGBoost models and three neural networks.
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
# Any results you write to the current directory are saved as output.
print("Reading the data...\n")
df1 = pd.read_csv('./submission/y_pred1.csv')
df2 = pd.read_csv('./submission/y_pred2.csv')
df3 = pd.read_csv('./submission/y_pred3.csv')
df4 = pd.read_csv('./submission/model_0.csv')
df1.head()
df3.head()
# df3.iloc[:, 1:2]
# df4.iloc[:,1:2]
#
models = { 'df1' :{ 'name':'dnn1',
'score':81.9738,
'df':df1 },
# 'df2' :{ 'name':'dnn2',
# 'score':81.9694,
# 'df':df2 },
# 'df3' :{ 'name':'dnn3',
# 'score':82.0703,
# 'df':df3 },
'df4' :{ 'name':'xgboost',
'score':93.5107,
'df':df4 }
}
df1.head()
isa_lg = 0
isa_hm = 0
isa_am = 0
isa_gm = 1  # geometric-mean accumulator must start at 1, not 0
n_models = len(models)
print("Blending...\n")
for df in models.keys():
    # the xgboost submission stores its probabilities in `pred_prob_0`, the DNNs in column '0'
    probs = models[df]['df'].pred_prob_0 if df == 'df4' else models[df]['df'][u'0']
    isa_lg += np.log(probs)   # log-average (geometric-style) accumulator
    isa_hm += 1 / probs       # harmonic-mean accumulator
    isa_am += probs           # arithmetic-mean accumulator
    isa_gm *= probs           # geometric-mean accumulator
isa_lg = np.exp(isa_lg / n_models)
isa_hm = n_models / isa_hm
isa_am = isa_am / n_models
isa_gm = isa_gm ** (1 / n_models)
print("Isa log\n")
print(isa_lg[:5])
print()
print("Isa harmo\n")
print(isa_hm[:5])
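# Toy illustration (added): how the different means behave when models disagree.
# For probabilities [0.2, 0.8]: arithmetic = 0.50, geometric ~= 0.40, harmonic = 0.32,
# so the geometric/harmonic blends are the more conservative combiners.
_example = np.array([0.2, 0.8])
print("arithmetic:", _example.mean(),
      "| geometric:", np.exp(np.log(_example).mean()),
      "| harmonic:", len(_example) / (1 / _example).sum())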
# sub_log = pd.DataFrame()
# sub_log['click_id'] = df1['click_id']
# sub_log['is_attributed'] = isa_lg
# sub_log.head()
# sub_hm = pd.DataFrame()
# sub_hm['click_id'] = df1['click_id']
# sub_hm['is_attributed'] = isa_hm
# sub_hm.head()
sub_fin=pd.DataFrame()
#sub_fin['click_id']=df1['click_id']
sub_fin['is_attributed']= (5*isa_lg+3*isa_hm+2*isa_am)/10
print("Writing...")
# sub_log.to_csv('submission_log2.csv', index=False, float_format='%.9f')
# sub_hm.to_csv('submission_hm2.csv', index=False, float_format='%.9f')
sub_fin.to_csv('submission_esb_1x_1n.csv', index=False, float_format='%.9f')
#sub_fin.to_csv('submission_esb_3n.csv', index=False, float_format='%.9f')
print("DONE!")
sub_fin.head()
###Output
_____no_output_____
###Markdown
Ensemble
###Code
from __future__ import division
from IPython.display import display
from matplotlib import pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import random, sys, os, re
###Output
_____no_output_____
###Markdown
The test set has duplicates so we get the list of IDs in the sample file in order
###Code
id_list = []
with open('../submissions/Submission_Format.csv', 'r') as f:
lines = f.read().splitlines()
for line in lines:
ID,prob = line.split(',')
if ID == '': continue
id_list.append(ID)
def get_filepaths(directory):
"""
This function will generate the file names in a directory
tree by walking the tree either top-down or bottom-up. For each
directory in the tree rooted at directory top (including top itself),
it yields a 3-tuple list (dirpath, dirnames, filenames).
"""
import os
file_paths = [] # List which will store all of the full filepaths.
# Walk the tree.
for root, directories, files in os.walk(directory):
for filename in files:
# Join the two strings in order to form the full filepath.
filepath = os.path.join(root, filename)
file_paths.append(filepath) # Add it to the list.
return file_paths
###Output
_____no_output_____
###Markdown
Get the list of submission files * remove the example file * and all ensembles BEFORE
###Code
file_list = get_filepaths('../submissions')
file_list
###Output
_____no_output_____
###Markdown
AFTER
###Code
# Why run this more than once? Removing items from a list while iterating over it skips
# the element that follows each removal, so a single pass can leave matches behind.
# ======================================================================================
for i in range(3):
for file_name in file_list:
if 'Format' in file_name: file_list.remove(file_name)
if 'Ensemble' in file_name: file_list.remove(file_name)
if 'ensemble' in file_name: file_list.remove(file_name)
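# Single-pass alternative (added): building a new list avoids mutating `file_list` while
# iterating over it, which is what makes the repeated passes above necessary.
# file_list = [f for f in file_list
#              if not any(k in f for k in ('Format', 'Ensemble', 'ensemble'))]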
file_list.sort(key=lambda x: x[26:32])
from copy import copy
file_list_all = copy(file_list)
file_list
###Output
_____no_output_____
###Markdown
--------------------------------------------- Ensemble ALL the submissions --------------------------------------------- Find the average probability for all IDs
###Code
from collections import defaultdict
aggregates = defaultdict(list)
averages = defaultdict(list)
# 1. collect the probabilities for each ID from all the submission files
# ======================================================================
for file_name in file_list:
with open(file_name, 'r') as f:
lines = f.read().splitlines()
for line in lines:
ID,prob = line.split(',')
if ID == '': continue
aggregates[ID].append(prob)
# 2. find the average of all the probabilities for each ID
# ========================================================
averages.update((ID, np.mean([float(p) for p in probs])) for ID, probs in aggregates.items())
aggregates['1'],averages['1']
len(aggregates),len(averages)
###Output
_____no_output_____
###Markdown
Create a submission file of the ensemble of averages
###Code
# f = open("../submissions/submission_EnsembleOfAveragesALL.csv", "w")
# f.write(",Made Donation in March 2007\n")
# for ID in id_list:
# f.write("{},{}\n".format(ID, averages[ID]))
# f.close()
###Output
_____no_output_____
###Markdown
--------------------------------------------------------------- Ensemble the submissions with high scores --------------------------------------------------------------- BEFORE
###Code
file_list
###Output
_____no_output_____
###Markdown
AFTER
###Code
# Why run this more than once? Removing items from a list while iterating over it skips
# the element that follows each removal, so a single pass can leave matches behind.
# ======================================================================================
for _ in range(2):
for _ in range(4):
for file_name in file_list:
if 'Format' in file_name: file_list.remove(file_name)
if 'Ensemble' in file_name: file_list.remove(file_name)
# scores of 0.4... or 0.3... are good
# files with SEED... are good-scoring models that were re-run with different random seeds
if ('bagged_nolearn' not in file_name):
file_list.remove(file_name)
file_list
from collections import defaultdict
aggregates = defaultdict(list)
averages = defaultdict(list)
# 1. collect the probabilities for each ID from all the submission files
# ======================================================================
for file_name in file_list:
with open(file_name, 'r') as f:
lines = f.read().splitlines()
for line in lines:
ID,prob = line.split(',')
if ID == '': continue
aggregates[ID].append(prob)
# 2. find the average of all the probabilities for each ID
# ========================================================
averages.update((ID, np.mean([float(p) for p in probs])) for ID, probs in aggregates.items())
aggregates['1'],averages['1']
len(aggregates),len(averages)
f = open("../submissions/submission_EnsembleOfAveragesBEST_SEED.csv", "w")
f.write(",Made Donation in March 2007\n")
for ID in id_list:
f.write("{},{}\n".format(ID, averages[ID]))
f.close()
###Output
_____no_output_____
###Markdown
--------------------------------------------------------------- Ensemble the least-correlated submissions --------------------------------------------------------------- Create a dataframe with one column per submission
###Code
from os.path import split
corr_table = pd.read_csv(file_list_all[0],names=['id',split(file_list_all[0])[1][11:-4]],header=0,index_col=0)
corr_table.head()
for file_path in file_list_all[1:]:
temp = pd.read_csv(file_path,names=['id',split(file_path)[1][11:-4]],header=0,index_col=0)
corr_table[temp.columns[0]] = temp[[temp.columns[0]]]
corr_table.head()
###Output
_____no_output_____
###Markdown
Display the correlations among the submissions
###Code
import seaborn as sns
# Compute the correlation matrix
corr_matrix = corr_table.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr_matrix, dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr_matrix, mask=mask, cmap=cmap, vmax=.9,
square=True, xticklabels=4, yticklabels=3,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
Find the least-correlated pairs of submissions
###Code
corr_threshold = 0.20
indices = np.where(corr_matrix < corr_threshold)
indices = [(corr_matrix.index[x], corr_matrix.columns[y], corr_matrix.iloc[x, y]) for x, y in zip(*indices)
if x != y and x < y]
from operator import itemgetter
indices.sort(key=itemgetter(2))
len(indices),indices
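# Alternative sketch (added): the same low-correlation pairs can be listed by stacking the
# upper triangle of the correlation matrix instead of using np.where:
# pairs = corr_matrix.where(np.triu(np.ones(corr_matrix.shape, dtype=bool), k=1)).stack()
# low_corr_pairs = pairs[pairs < corr_threshold].sort_values()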
least_corr = set(set(['../submissions/submission_'+a+'.csv' for a,b,c in indices]).\
union(set(['../submissions/submission_'+b+'.csv' for a,b,c in indices])))
len(least_corr), least_corr
from collections import defaultdict
aggregates = defaultdict(list)
averages = defaultdict(list)
# 1. collect the probabilities for each ID from all the submission files
# ======================================================================
for file_name in least_corr:
with open(file_name, 'r') as f:
lines = f.read().splitlines()
for line in lines:
ID,prob = line.split(',')
if ID == '': continue
aggregates[ID].append(prob)
# 2. find the average of all the probabilities for each ID
# ========================================================
averages.update((ID, np.mean([float(p) for p in probs])) for ID, probs in aggregates.items())
aggregates['1'],averages['1']
# f = open("../submissions/submission_EnsembleOfAveragesLeastCorr.csv", "w")
# f.write(",Made Donation in March 2007\n")
# for ID in id_list:
# f.write("{},{}\n".format(ID, averages[ID]))
# f.close()
###Output
_____no_output_____
###Markdown
Machine Learning Models and Ensemble Method --- 1. Split X-features and y-labels; 2. 80-20 train-validation split; 3. Fit models (on train), evaluate (on validation): A. DNN, B. SVM, C. RF, D. XGB, E. LogReg; 4. __Manual__ ensemble method: evaluate the ensemble on validation data; 5. Evaluate the ensemble on TEST data
###Code
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1" # disable GPU
from tqdm import tqdm # progress bar
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# processing / validation
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# keras/tf
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation,Dropout
# models
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
#from sklearn.model_selection import GridSearchCV # hp-tuning
# metrics
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.metrics import roc_auc_score
# constant seed for reproducibility
SEED = 111
os.environ['PYTHONHASHSEED'] = str(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
###Output
/Users/rezanaghshineh/opt/anaconda3/lib/python3.7/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
1 & 2 - X/y + Train/Test Split:
###Code
df = pd.read_csv("data/UFC_TRAIN.csv")
# tackling imbalance issue
#theMin = df["Winner"].value_counts().min()
#minority = df[df["Winner"]==1].iloc[0:theMin]
#undersampleMaj = df[df["Winner"]==0].iloc[0:theMin]
#df = pd.concat([minority, undersampleMaj], axis=0)
#df["Winner"].value_counts()
# feature/label and train/test split
X = df.drop(["date","Winner","B_fighter","R_fighter"], axis=1).values
y = df["Winner"].values
X_TRAIN, X_VAL, y_TRAIN, y_VAL = train_test_split(X,y, test_size=0.20, random_state=SEED)
###Output
_____no_output_____
###Markdown
Baseline: Always predict red (i.e: 0)
###Code
metrics.accuracy_score(np.zeros(len(df.index)),df["Winner"])
###Output
_____no_output_____
###Markdown
Baseline accuracy is 67.96% on the unbalanced dataset 3 - ML Models A: DNN - Using a deep neural network with early stopping to prevent divergence of loss & val_loss:
###Code
# scaling
scaler = MinMaxScaler()
scaler.fit(X_TRAIN)
X_train_scaled = scaler.transform(X_TRAIN)
X_val_scaled = scaler.transform(X_VAL)
print(f"X_train_scaled shape: {X_train_scaled.shape} | X_val_scaled shape: {X_val_scaled.shape} | y_train shape: {y_TRAIN.shape} | y_val shape: {y_VAL.shape}")
# model
dnnClf = Sequential()
# first hiden layer
dnnClf.add(Dense(units=20, input_dim=42,activation='relu'))
#dnnClf.add(Dropout(0.5)) # deactivates 50% of nodes
dnnClf.add(Dense(units=10, activation='relu'))
dnnClf.add(Dropout(0.5)) # deactivates 50% of nodes
# output layer
dnnClf.add(Dense(units=1, activation='sigmoid'))
dnnClf.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
from tensorflow.keras.callbacks import EarlyStopping # prevent divergence of loss & val_loss
early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=16)
dnnClf.fit(x=X_train_scaled,
y=y_TRAIN,
epochs=400,
validation_data=(X_val_scaled, y_VAL), verbose=1,
callbacks=[early_stop]
)
model_loss = pd.DataFrame(dnnClf.history.history)
model_loss.plot()
dnnPreds = dnnClf.predict(scaler.transform(X_VAL))
dnnPreds = [round(i[0]) for i in dnnPreds]
target_names = ['class 0', 'class 1']
print("DNN Validation Performance on UNBALANCED(!):\n------------------\n",classification_report(y_VAL, dnnPreds , target_names=target_names))
print("AUC: ",roc_auc_score(y_VAL, dnnPreds))
###Output
DNN Validation Performance on UNBALANCED(!):
------------------
precision recall f1-score support
class 0 0.71 0.91 0.80 590
class 1 0.58 0.25 0.35 291
accuracy 0.69 881
macro avg 0.64 0.58 0.57 881
weighted avg 0.67 0.69 0.65 881
AUC: 0.5787960859688974
###Markdown
B: SVM - Support Vector Machine:
###Code
svmClf = SVC(kernel="linear")
svmClf.fit(X_TRAIN,y_TRAIN)
svmPreds = svmClf.predict(X_VAL)
print("SVM Validation Performance on UNBALANCED(!):\n------------------\n",classification_report(y_VAL, svmPreds , target_names=target_names))
print("AUC: ",roc_auc_score(y_VAL, svmPreds))
###Output
SVM Validation Performance on UNBALANCED(!):
------------------
precision recall f1-score support
class 0 0.68 0.99 0.81 590
class 1 0.71 0.05 0.10 291
accuracy 0.68 881
macro avg 0.70 0.52 0.45 881
weighted avg 0.69 0.68 0.57 881
AUC: 0.5206884501135768
###Markdown
C: RF - RandomForest:
###Code
rfClf = RandomForestRegressor(n_estimators = 1000)
rfClf.fit(X_TRAIN, y_TRAIN)
rfPreds = rfClf.predict(X_VAL)
rfPreds = [round(i) for i in rfPreds]
print("RF Validation Performance on UNBALANCED(!):\n------------------\n",classification_report(y_VAL, rfPreds , target_names=target_names))
print("AUC: ",roc_auc_score(y_VAL, rfPreds))
###Output
RF Validation Performance on UNBALANCED(!):
------------------
precision recall f1-score support
class 0 0.71 0.89 0.79 590
class 1 0.55 0.26 0.36 291
accuracy 0.68 881
macro avg 0.63 0.58 0.57 881
weighted avg 0.66 0.68 0.65 881
AUC: 0.5780651173626886
###Markdown
D: XGB - Gradient Boost:
###Code
xgbClf = XGBClassifier(n_estimators=200)
xgbClf.fit(X_TRAIN, y_TRAIN)
xgbPreds = xgbClf.predict(X_VAL)
print("XGB Validation Performance on UNBALANCED(!):\n------------------\n",classification_report(y_VAL, xgbPreds , target_names=target_names))
print("AUC: ",roc_auc_score(y_VAL, xgbPreds))
###Output
XGB Validation Performance on UNBALANCED(!):
------------------
precision recall f1-score support
class 0 0.71 0.91 0.79 590
class 1 0.56 0.23 0.33 291
accuracy 0.69 881
macro avg 0.63 0.57 0.56 881
weighted avg 0.66 0.69 0.64 881
AUC: 0.5702050206768011
###Markdown
E: LR - Logistic Regression:
###Code
lrClf = LogisticRegression(solver="newton-cg")
lrClf.fit(X_TRAIN, y_TRAIN)
lrPreds = lrClf.predict(X_VAL)
print("LogReg Validation Performance on UNBALANCED(!):\n------------------\n",classification_report(y_VAL, lrPreds , target_names=target_names))
print("AUC: ",roc_auc_score(y_VAL, lrPreds))
###Output
LogReg Validation Performance on UNBALANCED(!):
------------------
precision recall f1-score support
class 0 0.70 0.92 0.80 590
class 1 0.56 0.20 0.29 291
accuracy 0.68 881
macro avg 0.63 0.56 0.54 881
weighted avg 0.65 0.68 0.63 881
AUC: 0.5606733065408586
###Markdown
4 - Ensemble Method with Validation Performance: The ensemble method aggregates the votes of each model and outputs the most frequent vote.
###Code
def predictEnsemble(sample, models=0):
"""predicts the label of a given sample by aggregating votes of number of models.
by default, models = 0, takes into account all models. Otherwise, for a given list of codes,
it involves the corresponsing model. codes:
1: dnn | 2: svm | 3: rf | 4: xgb | 5: lr
"""
modelsDict = {
# models predictions dictionary
1:dnnClf.predict(scaler.transform(sample.reshape(1,-1))).tolist()[0][0],
2:svmClf.predict(sample.reshape(1,-1)).tolist()[0],
3:rfClf.predict(sample.reshape(1,-1)).tolist()[0],
4:xgbClf.predict(sample.reshape(1,-1)).tolist()[0],
5:lrClf.predict(sample.reshape(1,-1)).tolist()[0]
}
preds = []
if models == 0: # use all models
[preds.append(model) for model in modelsDict.values()]
else: # use only specified models
for model_code in models:
preds.append(modelsDict[model_code])
#print(preds)
preds = [round(i) for i in preds] # transform probability to label (threshold 0.5)
#print(preds)
#print(max(set(preds), key=preds.count))
return(max(set(preds), key=preds.count))
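# Added sketch: a vectorized majority vote over the same three models used below
# (rf, xgb, logistic regression). It avoids the slow per-sample predict() calls in
# predictEnsemble(); this is an illustration, not a drop-in replacement for the
# general `models` argument above.
def predict_ensemble_vectorized(X):
    votes = np.column_stack([
        np.round(rfClf.predict(X)),   # RF regressor output rounded to 0/1 labels
        xgbClf.predict(X),            # XGB returns class labels directly
        lrClf.predict(X),             # logistic regression class labels
    ]).astype(int)
    return (votes.sum(axis=1) >= 2).astype(int)  # majority of the 3 binary votes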
ensPreds = []
[ensPreds.append(predictEnsemble(sample, models=[3,4,5])) for sample in tqdm(X_VAL)]
print("Ensemble Validation Performance on UNBALANCED(!):\n------------------\n",classification_report(y_VAL, ensPreds , target_names=target_names))
print("AUC: ",roc_auc_score(y_VAL, ensPreds))
###Output
100%|██████████| 881/881 [02:23<00:00, 6.13it/s]
###Markdown
5- Performance Evaluation on TEST (unseen data)
###Code
TEST = pd.read_csv("data/UFC_TEST.csv")
X_TEST = TEST.drop(["date","B_fighter","R_fighter","Winner"],axis=1).values
y_TEST = TEST["Winner"].values
ensPreds_TEST = []
[ensPreds_TEST.append(predictEnsemble(test_sample, models=[3,4,5])) for test_sample in tqdm(X_TEST)]
print("Ensemble TEST Performance on UNBALANCED(!):\n------------------\n",classification_report(y_TEST, ensPreds_TEST , target_names=target_names))
print("AUC: ",roc_auc_score(y_TEST, ensPreds_TEST))
# save models to disk
#import pickle
#dnnClf.save('resources/dnn_model.h5')
#pickle.dump(svmClf, open('resources/svm_model.sav', 'wb'))
#pickle.dump(rfClf, open('resources/rf_model.sav', 'wb'))
#pickle.dump(xgbClf, open('resources/xgb_model.sav', 'wb'))
#pickle.dump(lrClf, open('resources/lr_model.sav', 'wb'))
#pickle.dump(scaler, open('resources/scaler.pkl', 'wb'))
# notes:
'''
dnnPreds2 = dnnClf.predict(scaler.transform(X_TEST))
dnnPreds2 = [round(i[0]) for i in dnnPreds2]
dnnAcc2 = metrics.accuracy_score(dnnPreds2, y_TEST)
print("DNN Accuracy:",round(dnnAcc2,3))
svmPreds2 = svmClf.predict(X_TEST)
svmAcc2 = metrics.accuracy_score(svmPreds2, y_TEST)
print("SVM Accuracy:",round(svmAcc2,3))
rfPreds2 = rfClf.predict(X_TEST)
rfPreds2 = [round(i) for i in rfPreds2]
rfAcc2 = metrics.accuracy_score(rfPreds2, y_TEST)
print("RF Accuracy:",round(rfAcc2,3))
xgbPreds2 = xgbClf.predict(X_TEST)
xgbAcc2 = metrics.accuracy_score(xgbPreds2, y_TEST)
print("XGB Accuracy:",round(xgbAcc2,3))
lrPreds2 = lrClf.predict(X_TEST)
lrAcc2 = metrics.accuracy_score(lrPreds2, y_TEST)
print("LogReg Accuracy:",round(lrAcc2,3))
accTable = pd.DataFrame({"Model":["DNN", "SVM", "RF", "XGB", "LogReg", "Ensemble"],
"Val_Accuracy":[dnnAcc, svmAcc, rfAcc, xgbAcc, lrAcc, ensAcc],
"Test_Accuracy":[dnnAcc2, svmAcc2, rfAcc2, xgbAcc2, lrAcc2, ensAcc2]})
accTable.plot(kind="bar",ylim=(0.5,0.8),x="Model",title="Models Performance on Validation and Test Data")
# grid-search hyper-parameter tuning
# svm hp-tuning with gridSearch
#svm_param = {"kernel":("linear","poly","rbf", "sigmoid"),
# "C":[1,52,10],
# "degree":[3,8],
# "gamma":("auto","scale"),
# "coef0":[0.001,10,0.5]}
#svmClf = SVC()
#svmGrid = GridSearchCV(svmClf, svm_param,cv=2)
#svmGrid.fit(X_TRAIN, y_TRAIN)
'''
###Output
_____no_output_____
###Markdown
> This notebook aims to push the public LB under 0.50. Certainly, the competition is not yet at its peak and there clearly remains room for improvement. Credits and comments on changes This notebook is based on [m5-first-public-notebook-under-0-50](https://www.kaggle.com/kneroma/m5-first-public-notebook-under-0-50) v.6 by @kkiller. Presently its sole purpose is to test an accelerated prediction stage (vs. the original notebook) where I generate lag features only for the days that need sales forecasts. Everything else is unchanged vs. the original _kkiller's_ notebook (as in version 6).
###Code
CAL_DTYPES={"event_name_1": "category", "event_name_2": "category", "event_type_1": "category",
"event_type_2": "category", "weekday": "category", 'wm_yr_wk': 'int16', "wday": "int16",
"month": "int16", "year": "int16", "snap_CA": "float32", 'snap_TX': 'float32', 'snap_WI': 'float32' }
PRICE_DTYPES = {"store_id": "category", "item_id": "category", "wm_yr_wk": "int16","sell_price":"float32" }
pd.options.display.max_columns = 50
h = 28
max_lags = 57
tr_last = 1913
fday = datetime(2016,4, 25)
fday
def create_dt(is_train = True, nrows = None, first_day = 1200):
prices = pd.read_csv("../input/m5-forecasting-accuracy/sell_prices.csv", dtype = PRICE_DTYPES)
for col, col_dtype in PRICE_DTYPES.items():
if col_dtype == "category":
prices[col] = prices[col].cat.codes.astype("int16")
prices[col] -= prices[col].min()
cal = pd.read_csv("../input/m5-forecasting-accuracy/calendar.csv", dtype = CAL_DTYPES)
cal["date"] = pd.to_datetime(cal["date"])
for col, col_dtype in CAL_DTYPES.items():
if col_dtype == "category":
cal[col] = cal[col].cat.codes.astype("int16")
cal[col] -= cal[col].min()
start_day = max(1 if is_train else tr_last-max_lags, first_day)
numcols = [f"d_{day}" for day in range(start_day,tr_last+1)]
catcols = ['id', 'item_id', 'dept_id','store_id', 'cat_id', 'state_id']
dtype = {numcol:"float32" for numcol in numcols}
dtype.update({col: "category" for col in catcols if col != "id"})
dt = pd.read_csv("../input/m5-forecasting-accuracy/sales_train_validation.csv",
nrows = nrows, usecols = catcols + numcols, dtype = dtype)
for col in catcols:
if col != "id":
dt[col] = dt[col].cat.codes.astype("int16")
dt[col] -= dt[col].min()
if not is_train:
for day in range(tr_last+1, tr_last+ 28 +1):
dt[f"d_{day}"] = np.nan
dt = pd.melt(dt,
id_vars = catcols,
value_vars = [col for col in dt.columns if col.startswith("d_")],
var_name = "d",
value_name = "sales")
dt = dt.merge(cal, on= "d", copy = False)
dt = dt.merge(prices, on = ["store_id", "item_id", "wm_yr_wk"], copy = False)
return dt
def create_fea(dt):
lags = [7, 28]
lag_cols = [f"lag_{lag}" for lag in lags ]
for lag, lag_col in zip(lags, lag_cols):
dt[lag_col] = dt[["id","sales"]].groupby("id")["sales"].shift(lag)
wins = [7, 28]
for win in wins :
for lag,lag_col in zip(lags, lag_cols):
dt[f"rmean_{lag}_{win}"] = dt[["id", lag_col]].groupby("id")[lag_col].transform(lambda x : x.rolling(win).mean())
date_features = {
"wday": "weekday",
"week": "weekofyear",
"month": "month",
"quarter": "quarter",
"year": "year",
"mday": "day",
# "ime": "is_month_end",
# "ims": "is_month_start",
}
# dt.drop(["d", "wm_yr_wk", "weekday"], axis=1, inplace = True)
for date_feat_name, date_feat_func in date_features.items():
if date_feat_name in dt.columns:
dt[date_feat_name] = dt[date_feat_name].astype("int16")
else:
dt[date_feat_name] = getattr(dt["date"].dt, date_feat_func).astype("int16")
FIRST_DAY = 350 # If you want to load all the data set it to '1' --> Great memory overflow risk !
%%time
df = create_dt(is_train=True, first_day= FIRST_DAY)
df.shape
df.head()
df.info()
%%time
create_fea(df)
df.shape
df.info()
df.head()
df.dropna(inplace = True)
df.shape
cat_feats = ['item_id', 'dept_id','store_id', 'cat_id', 'state_id'] + ["event_name_1", "event_name_2", "event_type_1", "event_type_2"]
useless_cols = ["id", "date", "sales","d", "wm_yr_wk", "weekday"]
train_cols = df.columns[~df.columns.isin(useless_cols)]
X_train = df[train_cols]
y_train = df["sales"]
%%time
np.random.seed(777)
fake_valid_inds = np.random.choice(X_train.index.values, 2_000_000, replace = False)
train_inds = np.setdiff1d(X_train.index.values, fake_valid_inds)
X_train=X_train.loc[fake_valid_inds]
y_train = y_train.loc[fake_valid_inds]
#del df, X_train, y_train, fake_valid_inds,train_inds ; gc.collect()
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import LabelEncoder
# ML algorithms
from sklearn.linear_model import ElasticNetCV, LassoCV, RidgeCV
import sklearn.linear_model as linear_model
from sklearn.svm import SVR
from lightgbm import LGBMRegressor
from sklearn.ensemble import GradientBoostingRegressor,RandomForestRegressor
from xgboost import XGBRegressor
from mlxtend.regressor import StackingCVRegressor
kf = KFold(n_splits=12, random_state=42, shuffle=True)
# Define error metrics
def cv_rmse(model, X=X_train):
rmse = np.sqrt(-cross_val_score(model, X, y_train, scoring="neg_mean_squared_error", cv=kf))
return (rmse)
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from mlxtend.regressor import StackingCVRegressor
ridge_alphas = [1e-15, 1e-10, 1e-8, 9e-4, 7e-4, 5e-4, 3e-4, 1e-4, 1e-3, 5e-2, 1e-2, 0.1, 0.3, 1, 3, 5, 10, 15, 18, 20, 30, 50, 75, 100]
ridge = make_pipeline(RobustScaler(), RidgeCV(alphas=ridge_alphas, cv=kf))
# Support Vector Regressor
#svr = make_pipeline(RobustScaler(), SVR(C= 5, epsilon= 0.008, gamma=0.0003))
# Gradient Boosting Regressor
gbr = GradientBoostingRegressor(n_estimators=100,
learning_rate=0.075)
rf=RandomForestRegressor(n_estimators=10)
lightgbm1 = LGBMRegressor(objective='poisson',
metric ='rmse',
learning_rate = 0.075,
sub_row = 0.75,
bagging_freq = 1,
lambda_l2 = 0.1,
verbosity= 1,
n_estimators = 200,
num_leaves= 128,
min_data_in_leaf= 100)
lightgbm2 = LGBMRegressor(objective='tweedie',
metric ='rmse',
learning_rate = 0.075,
sub_row = 0.75,
bagging_freq = 1,
lambda_l2 = 0.1,
verbosity= 1,
n_estimators = 200,
num_leaves= 128,
min_data_in_leaf= 100)
xgboost = XGBRegressor(objective='count:poisson',
learning_rate=0.075,
n_estimators=100,
min_child_weight=50)
stackReg = StackingCVRegressor(regressors=(lightgbm1,lightgbm2),
meta_regressor=(xgboost),
use_features_in_secondary=True,
random_state=42)
model_score = {}
score = cv_rmse(lightgbm1)
lgb_model1_full_data = lightgbm1.fit(X_train, y_train)
print("lightgbm1: {:.4f}".format(score.mean()))
model_score['lgb1'] = score.mean()
score = cv_rmse(lightgbm2)
lgb_model2_full_data = lightgbm2.fit(X_train, y_train)
print("lightgbm2: {:.4f}".format(score.mean()))
model_score['lgb2'] = score.mean()
score = cv_rmse(xgboost)
xgboost_full_data = xgboost.fit(X_train, y_train)
print("xgboost: {:.4f}".format(score.mean()))
model_score['xgb'] = score.mean()
score = cv_rmse(ridge)
ridge_full_data = ridge.fit(X_train, y_train)
print("ridge: {:.4f}".format(score.mean()))
model_score['ridge'] = score.mean()
# score = cv_rmse(svr)
# svr_full_data = svr.fit(X_train, y_train)
# print("svr: {:.4f}".format(score.mean()))
# model_score['svr'] = score.mean()
score = cv_rmse(gbr)
gbr_full_data = gbr.fit(X_train, y_train)
print("gbr: {:.4f}".format(score.mean()))
model_score['gbr'] = score.mean()
score = cv_rmse(rf)
rf_full_data = rf.fit(X_train, y_train)
print("rf: {:.4f}".format(score.mean()))
model_score['rf'] = score.mean()
score = cv_rmse(stackReg)
stackReg_full_data = stackReg.fit(X_train, y_train)
print("stackReg: {:.4f}".format(score.mean()))
model_score['stackReg'] = score.mean()
def rmsle(y, y_pred):
return np.sqrt(mean_squared_error(y, y_pred))
def blended_predictions(X_train,weight):
return ((weight[0] * ridge_full_data.predict(X_train)) + \
(weight[1] * rf_full_data.predict(X_train)) + \
(weight[2] * gbr_full_data.predict(X_train)) + \
(weight[3] * xgboost_full_data.predict(X_train)) + \
(weight[4] * lgb_model1_full_data.predict(X_train)) + \
(weight[5] * stackReg_full_data.predict(np.array(X_train))))
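# Sanity check (added): the blend weights used below are expected to sum to 1 so the
# blended prediction stays on the same scale as the individual model predictions.
assert abs(sum([0.15, 0.2, 0.18, 0.1, 0.27, 0.1]) - 1.0) < 1e-9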
# Blended model predictions
blended_score = rmsle(y_train, blended_predictions(X_train,[0.15,0.2,0.18,0.1,0.27,0.1]))
print("blended score: {:.4f}".format(blended_score))
model_score['blended_model'] = blended_score
model_score
#my_model = stacked_ensemble(X_train,y_train)
import warnings
warnings.filterwarnings("default")
# %%time
# blend= blended_predictions(X_train,[0.15,0.2,0.1,0.18,0.1,0.27])
###Output
_____no_output_____
###Markdown
Prediction stage (updated vs. original)
###Code
def create_lag_features_for_test(dt, day):
# create lag feaures just for single day (faster)
lags = [7, 28]
lag_cols = [f"lag_{lag}" for lag in lags]
for lag, lag_col in zip(lags, lag_cols):
dt.loc[dt.date == day, lag_col] = \
dt.loc[dt.date ==day-timedelta(days=lag), 'sales'].values # !!! main
windows = [7, 28]
for window in windows:
for lag in lags:
df_window = dt[(dt.date <= day-timedelta(days=lag)) & (dt.date > day-timedelta(days=lag+window))]
df_window_grouped = df_window.groupby("id").agg({'sales':'mean'}).reindex(dt.loc[dt.date==day,'id'])
dt.loc[dt.date == day,f"rmean_{lag}_{window}"] = \
df_window_grouped.sales.values
def create_date_features_for_test(dt):
# copy of the code from `create_dt()` above
date_features = {
"wday": "weekday",
"week": "weekofyear",
"month": "month",
"quarter": "quarter",
"year": "year",
"mday": "day",
}
for date_feat_name, date_feat_func in date_features.items():
if date_feat_name in dt.columns:
dt[date_feat_name] = dt[date_feat_name].astype("int16")
else:
dt[date_feat_name] = getattr(
dt["date"].dt, date_feat_func).astype("int16")
%%time
alphas = [1.028, 1.023, 1.018]
weights = [1/len(alphas)]*len(alphas) # equal weights
te0 = create_dt(False) # create master copy of `te`
create_date_features_for_test (te0)
for icount, (alpha, weight) in enumerate(zip(alphas, weights)):
te = te0.copy() # just copy
# te1 = te0.copy()
cols = [f"F{i}" for i in range(1, 29)]
for tdelta in range(0, 28):
day = fday + timedelta(days=tdelta)
print(tdelta, day.date())
tst = te[(te.date >= day - timedelta(days=max_lags))
& (te.date <= day)].copy()
# tst1 = te1[(te1.date >= day - timedelta(days=max_lags))
# & (te1.date <= day)].copy()
# create_fea(tst) # correct, but takes much time
create_lag_features_for_test(tst, day) # faster
tst = tst.loc[tst.date == day, train_cols]
te.loc[te.date == day, "sales"] = \
alpha * blended_predictions(tst,[0.15,0.2,0.18,0.1,0.27,0.1]) # magic multiplier by kyakovlev
# create_lag_features_for_test(tst1, day) # faster
# tst1 = tst1.loc[tst1.date == day, train_cols]
# te1.loc[te1.date == day, "sales"] = \
# alpha * m_lgb1.predict(tst1) # magic multiplier by kyakovlev
te_sub = te.loc[te.date >= fday, ["id", "sales"]].copy()
# te_sub1 = te1.loc[te1.date >= fday, ["id", "sales"]].copy()
te_sub["F"] = [f"F{rank}" for rank in te_sub.groupby("id")[
"id"].cumcount()+1]
# te_sub1["F"] = [f"F{rank}" for rank in te_sub1.groupby("id")[
# "id"].cumcount()+1]
te_sub = te_sub.set_index(["id", "F"]).unstack()[
"sales"][cols].reset_index()
# te_sub1 = te_sub1.set_index(["id", "F"]).unstack()[
# "sales"][cols].reset_index()
te_sub.fillna(0., inplace=True)
# te_sub1.fillna(0., inplace=True)
te_sub.sort_values("id", inplace=True)
# te_sub1.sort_values("id", inplace=True)
te_sub.reset_index(drop=True, inplace=True)
# te_sub1.reset_index(drop=True, inplace=True)
te_sub.to_csv(f"submission_{icount}.csv", index=False)
# te_sub1.to_csv(f"submission1_{icount}.csv", index=False)
if icount == 0:
sub = te_sub
sub[cols] *= weight
# sub1 = te_sub1
# sub1[cols] *= weight
else:
sub[cols] += te_sub[cols]*weight
# sub1[cols] += te_sub1[cols]*weight
print(icount, alpha, weight)
sub.head(10)
sub.id.nunique(), sub["id"].str.contains("validation$").sum()
# sub1.id.nunique(), sub1["id"].str.contains("validation$").sum()
sub.shape
# sub1.shape
sub2 = sub.copy()
sub2["id"] = sub2["id"].str.replace("validation$", "evaluation")
sub = pd.concat([sub, sub2], axis=0, sort=False)
sub.to_csv("submissionp.csv",index=False)
# sub3 = sub1.copy()
# sub3["id"] = sub3["id"].str.replace("validation$", "evaluation")
# sub1 = pd.concat([sub1, sub3], axis=0, sort=False)
# sub.to_csv("submissiont.csv",index=False)
# poisson = sub.sort_values(by = 'id').reset_index(drop = True)
# tweedie = sub1.sort_values(by = 'id').reset_index(drop = True)
# sub5 = poisson.copy()
# for i in sub5.columns :
# if i != 'id' :
# sub5[i] = 0.5*poisson[i] + 0.5*tweedie[i]
# sub5.to_csv('submissionavg.csv', index = False)
###Output
_____no_output_____
###Markdown
Weighting
###Code
w_soft = 0.275 #0.58
w_sig = 0.5 #0.609
w_crop = 0.05 #0.55
w_pred2 = 0.1 #0.562
w_pred3 = 0.075 #0.5
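# Sanity note (added): the five weights sum to 1.0, so the blended columns below stay on a
# probability scale before the 0.95 decision threshold is applied.
assert abs(w_soft + w_sig + w_crop + w_pred2 + w_pred3 - 1.0) < 1e-9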
softmax['0'] = w_soft*softmax['0'] + w_crop*soft_crop['0'] + w_sig*sigmoid['0'] + w_pred2*pred_2['0'] + w_pred3*pred_3['0']
softmax['1'] = w_soft*softmax['1'] + w_crop*soft_crop['1'] + w_sig*sigmoid['1'] + w_pred2*pred_2['1'] + w_pred3*pred_3['1']
softmax.head()
softmax['Predicted'] = np.where(softmax['1']>0.95, 1, 0)
softmax.head(10)
pred = softmax[['Id', 'Predicted']]
pred.head(10)
pred.to_csv('Ensemble.csv', index=False)
###Output
_____no_output_____
###Markdown
Classification Using Ensembles Dataset: Telco Customer Churn Context: "Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs." [IBM Sample Data Sets] Content: Each row represents a customer; each column contains a customer attribute described in the column Metadata. The data set includes information about: customers who left within the last month (the column is called Churn); services that each customer has signed up for (phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies); customer account information (how long they've been a customer, contract, payment method, paperless billing, monthly charges, and total charges); demographic info about customers (gender, age range, and whether they have partners and dependents). Inspiration: To explore this type of model and learn more about the subject. New version from IBM: https://community.ibm.com/community/user/businessanalytics/blogs/steven-macko/2019/07/11/telco-customer-churn-1113 https://www.kaggle.com/blastchar/telco-customer-churn Obtaining the Dataset
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
data = pd.read_csv("/content/sample_data/Telco-Churn.csv")
data.head()
data = data.replace({'Yes': 1, 'No': 0, "No internet service": 0})
data.head()
data.info()
def preprocessamento(x):
colunasParaRemover = ["customerID", "OnlineSecurity", "MultipleLines", "InternetService", "gender", "Contract", "PaymentMethod", "TotalCharges","Churn"]
onehot = pd.get_dummies(x["gender"], prefix='gender',prefix_sep='_')
x = pd.concat([x, onehot],axis=1)
onehot = pd.get_dummies(x["Contract"], prefix='Contract',prefix_sep='_')
x = pd.concat([x, onehot],axis=1)
onehot = pd.get_dummies(x["PaymentMethod"], prefix='PaymentMethod',prefix_sep='_')
x = pd.concat([x, onehot],axis=1)
y = x["Churn"]
x = x.drop(colunasParaRemover, axis=1)
return x, y
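# Equivalent sketch (added): pandas can one-hot encode several categorical columns in a single
# get_dummies call, which avoids the repeated concat/drop pattern above.
def preprocess_onehot(x):
    y = x["Churn"]
    x = x.drop(["customerID", "OnlineSecurity", "MultipleLines",
                "InternetService", "TotalCharges", "Churn"], axis=1)
    x = pd.get_dummies(x, columns=["gender", "Contract", "PaymentMethod"])
    return x, y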
print(data["TotalCharges"])
X, y = preprocessamento(data)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42, train_size=0.7)
X_tr.shape, X_te.shape, y_tr.shape, y_te.shape
X_tr.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 4930 entries, 1695 to 860
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 SeniorCitizen 4930 non-null int64
1 Partner 4930 non-null int64
2 Dependents 4930 non-null int64
3 tenure 4930 non-null int64
4 PhoneService 4930 non-null int64
5 OnlineBackup 4930 non-null int64
6 DeviceProtection 4930 non-null int64
7 TechSupport 4930 non-null int64
8 StreamingTV 4930 non-null int64
9 StreamingMovies 4930 non-null int64
10 PaperlessBilling 4930 non-null int64
11 MonthlyCharges 4930 non-null float64
12 gender_Female 4930 non-null uint8
13 gender_Male 4930 non-null uint8
14 Contract_Month-to-month 4930 non-null uint8
15 Contract_One year 4930 non-null uint8
16 Contract_Two year 4930 non-null uint8
17 PaymentMethod_Bank transfer (automatic) 4930 non-null uint8
18 PaymentMethod_Credit card (automatic) 4930 non-null uint8
19 PaymentMethod_Electronic check 4930 non-null uint8
20 PaymentMethod_Mailed check 4930 non-null uint8
dtypes: float64(1), int64(11), uint8(9)
memory usage: 544.0 KB
###Markdown
Simple Classifiers KNeighborsClassifier
###Code
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_tr,y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
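# Equivalent check (added): the manual accuracy above matches scikit-learn's accuracy_score;
# the same manual helper is reused for every model below.
from sklearn.metrics import accuracy_score
assert abs(accuracy_score(y_te, ypred) - score) < 1e-12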
###Output
_____no_output_____
###Markdown
DecisionTreeClassifier
###Code
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit(X_tr,y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
Perceptron
###Code
from sklearn.linear_model import Perceptron
model = Perceptron()
model.fit(X_tr,y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
Bagging BaggingClassifier
###Code
# Diversification by sampling: draws samples with replacement while keeping the dataset size unchanged
# Bootstrap aggregation (bagging) uses sampling randomness to create diversity; the generated trees vary (this algorithm builds 10 trees by default)
# By default, BaggingClassifier uses a decision tree as the base estimator
# The setup below is essentially a Random Forest (a Random Forest is bagging of decision trees)
from sklearn.ensemble import BaggingClassifier
model = BaggingClassifier(DecisionTreeClassifier(splitter='random'), n_estimators=100, max_features=0.5, random_state=42)
model.fit(X_tr, y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
RandomForestClassifier
###Code
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(random_state=42)
model.fit(X_tr, y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
ExtraTreesClassifier
###Code
# Extremely randomized trees (Extra-Trees)
from sklearn.ensemble import ExtraTreesClassifier
model = ExtraTreesClassifier(random_state=42)
model.fit(X_tr, y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
VotingClassifier
###Code
from sklearn.ensemble import VotingClassifier
model = VotingClassifier([
('knn', KNeighborsClassifier(n_neighbors=1)),
('arvore', DecisionTreeClassifier()),
('perceptron', Perceptron())])
model.fit(X_tr,y_tr)
vo_pred = model.predict(X_te)
score = sum(vo_pred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
Boosting AdaBoostClassifier
###Code
from sklearn.ensemble import AdaBoostClassifier
model = AdaBoostClassifier(random_state=42)
model.fit(X_tr, y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
XGBClassifier
###Code
from xgboost import XGBClassifier
model = XGBClassifier(use_label_encoder=False,random_state=42)
model.fit(X_tr, y_tr)
ypred = model.predict(X_te)
score = sum(ypred == y_te)/len(y_te)
score
###Output
_____no_output_____
###Markdown
Stacking
###Code
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import GridSearchCV
voting = VotingClassifier([
('knn', KNeighborsClassifier()),
('arvore', DecisionTreeClassifier()),
('perceptron', Perceptron())])
model = StackingClassifier([
('voting', voting),
('xgbclassifier', XGBClassifier(use_label_encoder=False,random_state=42)),
('randomforest', RandomForestClassifier(random_state=42))])
params = [
{
"voting__knn__n_neighbors" : [1,3,5],
"voting__arvore__criterion" : ['gini', 'entropy'],
"voting__arvore__max_depth" : [15,20,50],
"randomforest__bootstrap" : [True, False],
"randomforest__min_samples_leaf": [1,2,3]
}
]
grid_search_model = GridSearchCV(model, params, cv=3, verbose=3, return_train_score=True, n_jobs=1)
grid_search_model.fit(X_tr, y_tr)
grid_pred = grid_search_model.predict(X_te)
score = sum(grid_pred == y_te)/len(y_te)
score
grid_search_model.best_estimator_
model.fit(X_tr,y_tr)
stk_pred = model.predict(X_te)
score = sum(stk_pred == y_te)/len(y_te)
score
from sklearn.metrics import confusion_matrix
confusion_matrix(stk_pred, y_te)
from sklearn import metrics
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots()
x = np.linspace(*ax.get_xlim())
ax.plot(x, x, color='green', linestyle='dashed',
linewidth=1, markersize=1)
metrics.plot_roc_curve(grid_search_model, X_te, y_te, ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
#split dataset into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle = True, random_state=1, stratify=y)
#create the logistic regression model
log_reg = LogisticRegression(penalty = 'l2',dual = False,tol= 0.0001, C = 1.0, fit_intercept = True,intercept_scaling = 1,
class_weight = None, random_state = 666, solver = 'liblinear', max_iter = 100, multi_class = 'ovr',
verbose = 0, warm_start = False, n_jobs = None)
log_reg.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Random Forest
###Code
# train the classifier
rf = RandomForestClassifier()
# dictionary of parameters to test
params_rf = {
'bootstrap': [True],
'max_depth': [80, 90, 100, 110],
'max_features': [2, 3],
'min_samples_leaf': [3, 4, 5],
'min_samples_split': [8, 10, 12],
'n_estimators': [100, 200, 300, 1000]
}
#use gridsearch to test all values for n_estimators
rf_gs = GridSearchCV(rf, params_rf, cv=5)
#fit model to training data
rf_gs.fit(X_train, y_train)
rf_best = rf_gs.best_estimator_
#check best n_estimators value
print(rf_gs.best_params_)
###Output
{'bootstrap': True, 'max_depth': 100, 'max_features': 3, 'min_samples_leaf': 3, 'min_samples_split': 10, 'n_estimators': 100}
###Markdown
XGB
###Code
xgb = XGBClassifier()
xgb.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
SVM
###Code
svclassifier = SVC(kernel='linear')
Cs = [0.001, 0.01, 0.1, 1, 10]
gammas = [0.001, 0.01, 0.1, 1]
param_grid = {'C': Cs, 'gamma' : gammas}
sv_gs = GridSearchCV(svclassifier, param_grid, cv=5)
sv_gs.fit(X_train, y_train)
sv_best = sv_gs.best_estimator_
print(sv_gs.best_params_)
print('rf: {}'.format(rf_best.score(X_test, y_test)))
print('log_reg: {}'.format(log_reg.score(X_test, y_test)))
print('xgb: {}'.format(xgb.score(X_test, y_test)))
print('SVM: {}'.format(sv_best.score(X_test, y_test)))
###Output
rf: 0.63
log_reg: 0.67
xgb: 0.725
SVM: 0.68
###Markdown
Ensemble Classifier
###Code
from sklearn.ensemble import VotingClassifier
#create a dictionary of our models
estimators=[('xgb', xgb), ('rf', rf_best), ('log_reg', log_reg), ('svclassifier', svclassifier)]
#create our voting classifier, inputting our models
ensemble = VotingClassifier(estimators, voting='hard')
#fit model to training data
ensemble.fit(X_train, y_train)
# make predictions for test data
y_pred = ensemble.predict(X_test)
#test our model on the test data
ensemble.score(X_test, y_test)
# Confusion Matrix
import seaborn as sn
genres = df["genre"].unique()
cmx = confusion_matrix(y_test,y_pred)
df_cm = pd.DataFrame(cmx,genres,genres)
sn.set(font_scale = 1.4)
sn.heatmap(df_cm, annot = True, annot_kws = {"size": 16})
plt.ylabel('True Label')
plt.xlabel('Predicted Label')
###Output
_____no_output_____ |
Chapter09/a_b_deployment_with_production_variants.ipynb | ###Markdown
Chapter 10 : SageMaker Endpoint Production Variants and Deployment StrategiesThis notebook demonstrates how to update a deployed model using SageMaker Endpoint Production variants. Specifically it demonstrates the A/B deployment strategy. You can use this notebook as a starting point to implement other strategies discussed in Chapter 10, since the APIs used to either deploy a new endpoint or update an existing endpoint remain the same. Overview1. Set up2. Prepare (Reuse or Train) models to deploy and update3. Create an endpoint (with single production variant)4. Invoke the endpoint5. Update endpoint (with two production variants)6. CloudWatch Analysis7. Update endpoint8. Clean up 1. Set up 1.1 Imports
###Code
##Imports
import sagemaker
import boto3
import time
from datetime import datetime, timedelta
from sagemaker import image_uris
from sagemaker.session import Session
from sagemaker.inputs import TrainingInput
from sagemaker.session import production_variant
from botocore.response import StreamingBody
###Output
_____no_output_____
###Markdown
1.2 Setup variables
###Code
s3_bucket = 'datascience-environment-notebookinstance--06dc7a0224df'
s3_prefix = 'prepared'
m_prefix = 'xgboost-sample'
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
###Output
_____no_output_____
###Markdown
1.3 Setup service clients
###Code
sm = boto3.Session().client("sagemaker")
smrt = boto3.Session().client("sagemaker-runtime")
s3 = boto3.client("s3")
### Define variable to toggle between using trained models from previous chapters and training the models in this notebook
### Set use_trained_models to True, if you have XGBoost models trained in previous chapters, use those models to save training time and costs.
### To train models in this notebook set use_trained_model to False.
#use_trained_models = 'False'
use_trained_models = 'True'
if use_trained_models == 'True':
print("Using models trained before")
else:
print("Train the model")
###Output
_____no_output_____
###Markdown
Section 2 - Prepare (Reuse or Train) models to deploy and update
###Code
### Use the XGBoost models previously trained
### Note: Update to use the models available in your datascience account
if use_trained_models == 'True':
model_name_1='sagemaker-xgboost-2021-06-24-02-34-20-510'
model_name_2='sagemaker-xgboost-2021-06-24-02-47-08-912'
if use_trained_models == 'False':
# set an output path where the trained model will be saved
output_path = 's3://{}/{}/{}/output'.format(s3_bucket, m_prefix, 'xgboost')
# this line automatically looks for the XGBoost image URI and builds an XGBoost container.
# specify the repo_version depending on your preference.
xgboost_container = sagemaker.image_uris.retrieve("xgboost", region, "1.2-1")
# define the data type and paths to the training and validation datasets
content_type = "csv"
train_input = TrainingInput("s3://{}/{}/{}/".format(s3_bucket, s3_prefix, 'train'), content_type=content_type)
validation_input = TrainingInput("s3://{}/{}/{}/".format(s3_bucket, s3_prefix, 'validation'), content_type=content_type)
#### Train and get the name of the first model
# initialize hyperparameters
hyperparameters_1 = {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"objective":"reg:squarederror",
"num_round":"5"}
# construct a SageMaker estimator that calls the xgboost-container
estimator_1 = sagemaker.estimator.Estimator(image_uri=xgboost_container,
hyperparameters=hyperparameters_1,
role=sagemaker.get_execution_role(),
instance_count=1,
instance_type='ml.m5.12xlarge',
volume_size=200, # 200 GB EBS volume
output_path=output_path)
# execute the XGBoost training job
estimator_1.fit({'train': train_input, 'validation': validation_input})
training_job_name_1 = estimator_1.latest_training_job.name
model_name_1 = sagemaker_session.create_model_from_job(training_job_name_1)
#### Train and get the name of the second model
# initialize hyperparameters
hyperparameters_2 = {
"max_depth":"10", ##Different value of the hyperparameter
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"objective":"reg:squarederror",
"num_round":"5"}
# construct a SageMaker estimator that calls the xgboost-container
estimator_2 = sagemaker.estimator.Estimator(image_uri=xgboost_container,
hyperparameters=hyperparameters_2,
role=sagemaker.get_execution_role(),
instance_count=1,
instance_type='ml.m5.12xlarge',
volume_size=200, # 200 GB EBS volume
output_path=output_path)
# execute the XGBoost training job
estimator_2.fit({'train': train_input, 'validation': validation_input})
training_job_name_2 = estimator_2.latest_training_job.name
model_name_2 = sagemaker_session.create_model_from_job(training_job_name_2)
print("Model 1 : " , model_name_1)
print("Model 2 : " , model_name_2)
###Output
_____no_output_____
###Markdown
3 Create an endpoint (with single production variant)
###Code
#Create production variant A
variantA = production_variant(model_name=model_name_1,
instance_type="ml.m5.xlarge",
initial_instance_count=1,
variant_name='VariantA',
initial_weight=1)
#Variable for endpoint name
endpoint_name=f"abtest-{datetime.now():%Y-%m-%d-%H-%M-%S}"
##First create an endpoint with single variant
##Note: this step automatically creates an endpoint config with the same name as the endpoint, which you can update later
#Create an endpoint with a single production variant
sagemaker_session.endpoint_from_production_variants(
name=endpoint_name,
production_variants=[variantA]
)
###Output
_____no_output_____
###Markdown
4. Invoke the endpoint
###Code
##Get the file name at index from the 'prefix' folder
def get_file_in_bucket(prefix,index):
response = s3.list_objects(
Bucket=s3_bucket,
Prefix=s3_prefix + "/" + prefix
)
## At index '0' you will find the SUCCESS/FAILURE marker of the file uploads to S3. The first data file is at index 1
file_name = response['Contents'][index]['Key']
print("Returing file name : " + file_name)
return file_name
##Download the test files to execute inferences
s3.download_file(s3_bucket, get_file_in_bucket('test',1), 't_file.csv')
with open('t_file.csv', 'r') as TF:
t_lines = TF.readlines()
### Define a method to run inferences against the endpoint
def get_predictions():
#Skip the first line since it has column headers
for tl in t_lines[1:50]:
#Remove the first column since it is the label
test_list = tl.split(",")
test_list.pop(0)
test_string = ','.join([str(elem) for elem in test_list])
result = smrt.invoke_endpoint(EndpointName=endpoint_name,
ContentType="text/csv",
Body=test_string)
#print(result)
rbody = StreamingBody(raw_stream=result['Body'],content_length=int(result['ResponseMetadata']['HTTPHeaders']['content-length']))
print(f"Result from {result['InvokedProductionVariant']} = {rbody.read().decode('utf-8')}")
#Get predictions
get_predictions()
###Output
_____no_output_____
###Markdown
5. Update endpoint with two production variants
###Code
#Create production variant B
variantB = production_variant(model_name=model_name_2,
instance_type="ml.m5.xlarge",
initial_instance_count=1,
variant_name='VariantB',
initial_weight=1)
##Next update the endpoint to include both production variants
endpoint_config_new =f"abtest-new-config-{datetime.now():%Y-%m-%d-%H-%M-%S}"
sagemaker_session.create_endpoint_config_from_existing (
existing_config_name=endpoint_name,
new_config_name=endpoint_config_new,
new_production_variants=[variantA,variantB] ## Two production variants
)
##Update the endpoint
sagemaker_session.update_endpoint(endpoint_name=endpoint_name, endpoint_config_name=endpoint_config_new, wait=False)
#Show that you can still get inferences while the endpoint is being updated
#Get predictions
get_predictions()
###Output
_____no_output_____
###Markdown
6. CloudWatch Analysis Observe the CloudWatch metrics generated for the two variants to understand the endpoint behavior. Here we are plotting the number of invocations of each variant. You can use the same pattern to plot other metrics; an illustrative sketch for a latency metric is included in the code cell below.
###Code
##Define utility methods to retrieve and plot cloudwatch metrics
import pandas as pd
cw = boto3.Session().client("cloudwatch")
def get_invocation_metrics_for_endpoint_variant(endpoint_name, variant_name, start_time, end_time):
metrics = cw.get_metric_statistics(
Namespace="AWS/SageMaker",
MetricName="Invocations",
StartTime=start_time,
EndTime=end_time,
Period=60,
Statistics=["Sum"],
Dimensions=[
{"Name": "EndpointName", "Value": endpoint_name},
{"Name": "VariantName", "Value": variant_name},
],
)
return (
pd.DataFrame(metrics["Datapoints"])
.sort_values("Timestamp")
.set_index("Timestamp")
.drop("Unit", axis=1)
.rename(columns={"Sum": variant_name})
)
def plot_endpoint_metrics(start_time=None):
start_time = start_time or datetime.now() - timedelta(minutes=60)
end_time = datetime.now()
metrics_variant1 = get_invocation_metrics_for_endpoint_variant(
endpoint_name, variantA["VariantName"], start_time, end_time
)
metrics_variant2 = get_invocation_metrics_for_endpoint_variant(
endpoint_name, variantB["VariantName"], start_time, end_time
)
metrics_variants = metrics_variant1.join(metrics_variant2, how="outer")
metrics_variants.plot()
return metrics_variants
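# The same pattern works for other per-variant CloudWatch metrics; the sketch below (illustrative,
# not part of the original notebook) pulls "ModelLatency", which SageMaker reports per variant in
# microseconds, using Statistics=["Average"] instead of ["Sum"].
def get_latency_metrics_for_endpoint_variant(endpoint_name, variant_name, start_time, end_time):
    metrics = cw.get_metric_statistics(
        Namespace="AWS/SageMaker",
        MetricName="ModelLatency",
        StartTime=start_time,
        EndTime=end_time,
        Period=60,
        Statistics=["Average"],
        Dimensions=[
            {"Name": "EndpointName", "Value": endpoint_name},
            {"Name": "VariantName", "Value": variant_name},
        ],
    )
    return (
        pd.DataFrame(metrics["Datapoints"])
        .sort_values("Timestamp")
        .set_index("Timestamp")
        .drop("Unit", axis=1)
        .rename(columns={"Average": variant_name})
    )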
##Send traffic to endpoint for about 2 minutes.
##You should see both the variants serving traffic, after the endpoint is updated.
print(f"Sending test traffic to the endpoint {endpoint_name}. \nPlease wait...")
#Skip the first line since it has column headers
for tl in t_lines[1:200]:
#print(".", end="", flush=True)
#Remove the first column since it is the label
test_list = tl.split(",")
test_list.pop(0)
test_string = ','.join([str(elem) for elem in test_list])
result = smrt.invoke_endpoint(EndpointName=endpoint_name,
ContentType="text/csv",
Body=test_string)
#print(result)
rbody = StreamingBody(raw_stream=result['Body'],content_length=int(result['ResponseMetadata']['HTTPHeaders']['content-length']))
print(f"Result from {result['InvokedProductionVariant']} = {rbody.read().decode('utf-8')}")
time.sleep(0.5)
print("Done!")
print("Waiting a minute for initial metric creation...")
time.sleep(60)
plot_endpoint_metrics()
###Output
_____no_output_____
###Markdown
7. Update endpoint to contain just VariantB 7.1 - Gradually update the weights of each production variant. A commented sketch of a stepwise (canary-style) rollout is included at the end of the code cell below.
###Code
#Update the production variant weights to route 60% of traffic to VariantB
sm.update_endpoint_weights_and_capacities(
EndpointName=endpoint_name,
DesiredWeightsAndCapacities=[
{"DesiredWeight": 4, "VariantName": variantA["VariantName"]},
{"DesiredWeight": 6, "VariantName": variantB["VariantName"]},
],
)
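# Illustrative sketch (commented out, not executed here): a gradual, canary-style rollout could
# repeat the call above in small steps, waiting for the endpoint to return to InService before
# each update.
# for b_weight in [2, 4, 6, 8, 10]:
#     sm.get_waiter("endpoint_in_service").wait(EndpointName=endpoint_name)
#     sm.update_endpoint_weights_and_capacities(
#         EndpointName=endpoint_name,
#         DesiredWeightsAndCapacities=[
#             {"DesiredWeight": 10 - b_weight, "VariantName": variantA["VariantName"]},
#             {"DesiredWeight": b_weight, "VariantName": variantB["VariantName"]},
#         ],
#     )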
###Output
_____no_output_____
###Markdown
7.2 - Alternatively, update the endpoint to route all live traffic to VariantB in a single step
###Code
##Update the endpoint to point to VariantB
endpoint_config_new =f"abtest-b-config-{datetime.now():%Y-%m-%d-%H-%M-%S}"
sagemaker_session.create_endpoint_config_from_existing (
existing_config_name=endpoint_name,
new_config_name=endpoint_config_new,
new_production_variants=[variantB]
)
##Update the endpoint
##Note : This step will fail if the endpoint is still updating
sagemaker_session.update_endpoint(endpoint_name=endpoint_name, endpoint_config_name=endpoint_config_new, wait=False)
###Output
_____no_output_____
###Markdown
8. Cleanup
###Code
# If you do not plan to use this endpoint further, you should delete the endpoint to avoid incurring additional charges.
sagemaker_session.delete_endpoint(endpoint_name)
###Output
_____no_output_____ |
Homework_03.ipynb | ###Markdown
CPSC 4300/6300: Applied Data Science Homework 2: k-NN Regression**Clemson University****Fall 2021****Instructor(s):** Nina Hubig ---
###Code
""" RUN THIS CELL TO GET THE RIGHT FORMATTING """
import requests
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/bsethwalker/clemson-cs4300/main/css/cpsc6300.css'
styles = requests.get(css_file).text
HTML(styles)
###Output
_____no_output_____
###Markdown
INSTRUCTIONS- To submit your assignment follow the instructions given in Canvas.- Restart the kernel and run the whole notebook again before you submit. - If you submit individually and you have worked with someone, please include the name of your [one] partner below. - As much as possible, try and stick to the hints and functions we import at the top of the homework, as those are the ideas and tools the class supports and is aiming to teach. And if a problem specifies a particular library you're required to use that library, and possibly others from the import list.- Please use .head() when viewing data. Do not submit a notebook that is excessively long because output was not suppressed or otherwise limited. ---In this homework, we will explore regression methods for predicting a quantitative variable. Specifically, we will build regression models that can predict the number of taxi pickups in New York City at any given time of the day. These prediction models will be useful, for example, in monitoring traffic in the city.The data set for this problem is given in the file `nyc_taxi.csv`. You will need to separate it into training and test sets. The first column contains the time of a day in minutes, and the second column contains the number of pickups observed at that time. The data set covers taxi pickups recorded in NYC during Jan 2015.We will fit models that use the time of the day (in minutes) as a predictor and predict the average number of taxi pickups at that time. The models will be fitted to the training set and evaluated on the test set. The performance of the models will be evaluated using the $R^2$ metric.
###Code
import numpy as np
import pandas as pd
from sklearn.metrics import r2_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import statsmodels.api as sm
from statsmodels.api import OLS
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
###Output
_____no_output_____
###Markdown
Question 1 We next consider simple linear regression, which we know from lecture is a parametric approach for regression that assumes that the response variable has a linear relationship with the predictor. Use the `statsmodels` module for Linear Regression. This module has built-in functions to summarize the results of regression and to compute confidence intervals for estimated regression parameters. Question 1.1 Use pandas to load the dataset from the csv file `nyc_taxi.csv` into a pandas data frame. Use the `train_test_split` method from `sklearn` with a `random_state` of 42 and a `test_size` of 0.2 to split the dataset into training and test sets. Store your train set data frame as `train_data` and your test set data frame as `test_data`.
###Code
# Your code here
nyc_taxi = pd.read_csv("nyc_taxi.csv")
train_data, test_data = train_test_split(nyc_taxi, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Question 1.2 Again choose `TimeMin` as your predictor and `PickupCount` as your response variable. Create an `OLS` class instance and use it to fit a Linear Regression model on the training set (`train_data`). Store your fitted model in the variable `OLSModel`.
###Code
# Your code here
x_train = train_data['TimeMin']
y_train = train_data['PickupCount']
x_test = test_data['TimeMin']
y_test = test_data['PickupCount']
X_train = sm.add_constant(x_train)
X_test = sm.add_constant(x_test)
OLS = sm.OLS(y_train, X_train)
OLSModel = OLS.fit()
print(OLSModel.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: PickupCount R-squared: 0.243
Model: OLS Adj. R-squared: 0.242
Method: Least Squares F-statistic: 320.4
Date: Fri, 17 Sep 2021 Prob (F-statistic): 2.34e-62
Time: 02:27:17 Log-Likelihood: -4232.9
No. Observations: 1000 AIC: 8470.
Df Residuals: 998 BIC: 8480.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 16.7506 1.058 15.838 0.000 14.675 18.826
TimeMin 0.0233 0.001 17.900 0.000 0.021 0.026
==============================================================================
Omnibus: 203.688 Durbin-Watson: 1.910
Prob(Omnibus): 0.000 Jarque-Bera (JB): 462.910
Skew: 1.111 Prob(JB): 3.02e-101
Kurtosis: 5.485 Cond. No. 1.63e+03
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.63e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
Question 1.3 Create a plot just like you did in question 2.2 from Homework 2 (but with fewer subplots): plot both the observed values and the predictions from `OLSModel` on the training and test set. You should have one figure with two subplots, one subplot for the training set and one for the test set.**Hints**:1. Each subplot should use different color and/or markers to distinguish Linear Regression prediction values from that of the actual data values.2. Each subplot must have appropriate axis labels, title, and legend.3. The overall figure should have a title.
###Code
# Your code here
ytrain_pred = OLSModel.predict(X_train)
ytest_pred = OLSModel.predict(X_test)
fig, (ax1, ax2) = plt.subplots(1,2, figsize = (15, 5))
fig.suptitle('Predictions vs Actuals', fontsize=14)
ax1.scatter(x_train, y_train, color='b',label='Actual')
ax1.scatter(x_train, ytrain_pred, color='tab:orange',label ='Predicted')
ax1.set_title('Training Set')
ax1.set_xlabel('Time of Day in Minutes')
ax1.set_ylabel('Pickup Count')
ax1.legend()
ax2.scatter(x_test, y_test, label='Actual', color='b')
ax2.scatter(x_test, ytest_pred, color='tab:orange',label ='Predicted')
ax2.set_title('Test Set')
ax2.set_xlabel('Time of Day in Minutes')
ax2.set_ylabel('Pickup Count')
ax2.legend()
###Output
_____no_output_____
###Markdown
Question 1.4 Report the $R^2$ score for the fitted model on both the training and test sets.
###Code
# Your code here
r2_train = r2_score(y_train, ytrain_pred)
r2_test = r2_score(y_test, ytest_pred)
print('r2_training is', r2_train)
print('r2_test is', r2_test)
###Output
r2_training is 0.24302603531893352
r2_test is 0.240661535615741
###Markdown
Question 1.5 Report the estimates for the slope and intercept for the fitted linear model.
###Code
# Your code here
beta0_sm = OLSModel.params[0]
beta1_sm = OLSModel.params[1]
print(f'The regression coef from statsmodels are: beta_0 = {beta0_sm:8.6f} and beta_1 = {beta1_sm:8.6f}')
###Output
The regression coef from statsmodels are: beta_0 = 16.750601 and beta_1 = 0.023335
###Markdown
Question 1.6 Report the $95\%$ confidence intervals (CIs) for the slope and intercept.
###Code
# Your code here
OLSModel.conf_int(alpha=0.05, cols =None)
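# Illustrative extra (not required by the question): 99% CIs use alpha=0.01; they share the same
# midpoint (the point estimates) as the 95% CIs but are wider.
print(OLSModel.conf_int(alpha=0.01))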
###Output
_____no_output_____
###Markdown
Question 1.7 Discuss the results:1. How does the test $R^2$ score compare with the best test $R^2$ value obtained with k-NN regression? Describe why this is not surprising for these data.2. What does the sign of the slope of the fitted linear model convey about the data? 3. Interpret the $95\%$ confidence intervals from 1.6. Based on these CIs is there evidence to suggest that the number of taxi pickups has a significant linear relationship with time of day? How do you know? 4. How would $99\%$ confidence intervals for the slope and intercept compare to the $95\%$ confidence intervals (in terms of midpoint and width)? Briefly explain your answer. 5. Based on the data structure, what restriction on the model would you put at the endpoints (at $x\approx0$ and $x\approx1440$)? What does this say about the appropriateness of a linear model? *your answer here* 1. The test $R^2$ score is smaller than the best test $R^2$ value (at K=75) obtained with k-NN regression. This is not surprising because these data show two trends, upward and downward, while linear regression can capture only a single trend determined by the sign of its slope, so it fits these data more poorly than k-NN. 2. The sign of the slope conveys the direction of the relationship between the predictor and the response; the slope itself is the rate of change in the response variable when the predictor increases by 1. 3. A $95\%$ confidence interval is a range of values within which we can be $95\%$ confident that the true slope (or intercept) lies. Because the $95\%$ confidence interval for the slope is very narrow and does not contain zero, there is evidence that the number of taxi pickups has a significant linear relationship with time of day. 4. The $99\%$ confidence intervals for the slope and intercept have the same midpoint as the $95\%$ intervals but are wider. The midpoint is unchanged because it is the point estimate from the fitted model, while a higher confidence level requires a wider range of values (for comparison, an illustrative call with `alpha=0.01` was added to the code cell of Question 1.6). 5. At the endpoints ($x\approx0$ and $x\approx1440$) the predictions should be nearly equal, since minute 0 and minute 1440 represent the same time of day, yet the fitted linear model produces two very different values there. This shows that a linear model is only appropriate when the data follow a single clear upward or downward trend, which is not the case here. Question 2 You may recall from lectures that OLS Linear Regression can be susceptible to outliers in the data. We're going to look at a dataset that includes some outliers and get a sense for how that affects modeling data with Linear Regression. **Note, this is an open-ended question, there is not one correct solution (or one correct definition of an outlier).** Question 2.1 We've provided you with two files `outliers_train.csv` and `outliers_test.csv` corresponding to training set and test set data. What does a visual inspection of the training set tell you about the existence of outliers in the data?
###Code
# Your code here
trainingset = pd.read_csv("outliers_train.csv")
x_otrain = trainingset['X']
y_otrain = trainingset['Y']
fig, ax = plt.subplots(1, 1, figsize=(15,6))
ax.scatter(x_otrain, y_otrain, label='Training set', alpha=0.5)
ax.set_title('Training set')
ax.set_xlabel('x_otrain')
ax.set_ylabel('y_otrain')
ax.legend();
###Output
_____no_output_____
###Markdown
*Your answer here* As is clearly visible in the plot, three points fall well outside the trend of the training set: two in the upper-left corner and one in the lower-right corner. Question 2.2 Choose `X` as your feature variable and `Y` as your response variable. Use `statsmodels` to create a Linear Regression model on the training set data. Store your model in the variable `OutlierOLSModel` and display the model summary.
###Code
# Your code here
X_otrain = sm.add_constant(x_otrain)
OutlierOLS = sm.OLS(y_otrain, X_otrain)
OutlierOLSModel = OutlierOLS.fit()
print(OutlierOLSModel.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Y R-squared: 0.084
Model: OLS Adj. R-squared: 0.066
Method: Least Squares F-statistic: 4.689
Date: Fri, 17 Sep 2021 Prob (F-statistic): 0.0351
Time: 02:27:44 Log-Likelihood: -343.59
No. Observations: 53 AIC: 691.2
Df Residuals: 51 BIC: 695.1
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -9.5063 22.192 -0.428 0.670 -54.059 35.046
X 47.3554 21.869 2.165 0.035 3.452 91.259
==============================================================================
Omnibus: 2.102 Durbin-Watson: 1.758
Prob(Omnibus): 0.350 Jarque-Bera (JB): 1.251
Skew: 0.215 Prob(JB): 0.535
Kurtosis: 3.617 Cond. No. 1.06
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Question 2.3 You're given the knowledge ahead of time that there are 3 outliers in the training set data. The test set data doesn't have any outliers. You want to remove the 3 outliers in order to get the optimal intercept and slope. In the case that you're sure of the existence and number (3) of outliers ahead of time, one potential brute force method to outlier detection might be to find the best Linear Regression model on all possible subsets of the training set data with 3 points removed. Using this method, how many times will you have to calculate the Linear Regression coefficients on the training data? *Your answer here.* We have to choose which 50 of the 53 points to keep, i.e. which 3 points to remove. The number of times we would have to calculate the Linear Regression coefficients on the training data is therefore the number of combinations without repetition, $C(53,3) = 53!/(3!\,(53-3)!) = 23426$ (a quick sanity check of this count is included at the top of the code cell below). Question 2.4 In CPSC 4300 we're strong believers that creating heuristic models is a great way to build intuition. In that spirit, construct an approximate algorithm to find the 3 outlier candidates in the training data by taking advantage of the Linear Regression residuals. Place your algorithm in the function `find_outliers_simple`. It should take the parameters `dataset_x` and `dataset_y`, and `num_outliers` representing your features, response variable values (make sure your response variable is stored as a numpy column vector), and the number of outliers to remove. The return value should be a list `outlier_indices` representing the indices of the `num_outliers` outliers in the original datasets you passed in. Run your algorithm and remove the outliers that your algorithm identified, use `statsmodels` to create a Linear Regression model on the remaining training set data, store your model in the variable `OutlierFreeSimpleModel`, and display the summary of this model.
###Code
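# Quick sanity check (illustrative) of the count discussed above: choosing which 3 of the 53
# training points to remove is "53 choose 3" (math.comb requires Python 3.8+).
from math import comb
print(comb(53, 3))  # 23426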
def find_outliers_simple(dataset_x, dataset_y, num_outliers):
df = pd.concat([dataset_x, dataset_y], axis=1)
    # get predictions from the OLS model fitted above on the full training data (OutlierOLSModel)
x = sm.add_constant(dataset_x)
df['y_pred'] = OutlierOLSModel.predict(x)
# get residuals
df['y_residual'] = abs(df['Y'] - df['y_pred'])
# sort by residual
df = df.sort_values(by=['y_residual'], ascending=False)
# identify top n residuals as outliers
outliers = df[0:num_outliers]
return outliers.index.tolist();
find_outliers_simple(x_otrain, y_otrain, 3)
XTrain_out = x_otrain.drop([50, 51, 52])
YTrain_out = y_otrain.drop([50, 51, 52])
# Your code here
# Note: no constant column is added here, so this OLS fit has no intercept term
OutlierFreeSimpleModel = sm.OLS(YTrain_out, XTrain_out)
OutlierFreeSimpleModel = OutlierFreeSimpleModel.fit()
# printing the summary table
print(OutlierFreeSimpleModel.summary())
# Your code here
X_otrain = sm.add_constant(x_otrain)
OutlierOLS = sm.OLS(y_otrain, X_otrain)
OutlierOLSModel = OutlierOLS.fit()
OutlierFreeSimpleModel = sm.OLS(YTrain_out, XTrain_out)
OutlierFreeSimpleModel = OutlierFreeSimpleModel.fit()
y_otrainpred = OutlierOLSModel.predict(X_otrain)
y_oftrainpred = OutlierFreeSimpleModel.predict(XTrain_out)
###Output
_____no_output_____
###Markdown
Question 2.5 Create a figure with two subplots. The first is a scatterplot where the color of the points denotes the outliers from the non-outliers in the training set, and include two regression lines on this scatterplot: one fitted with the outliers included and one fitted with the outlier removed (all on the training set). The second plot should include a scatterplot of points from the test set with the same two regression lines fitted on the training set: with and without outliers. Visually which model fits the test set data more closely?
###Code
# Your code here
testset = pd.read_csv("outliers_test.csv")
x_otest = testset['X']
y_otest = testset['Y']
fig, (ax1, ax2) = plt.subplots(1,2, figsize = (15, 5))
ax1.scatter(x_otrain, y_otrain, color='k',label='Outliers')
ax1.scatter(XTrain_out, YTrain_out, color='tab:orange',label ='Non-outliers')
ax1.scatter(x_otrain, y_otrainpred, color='g',label='Regression line with outliers')
ax1.scatter(XTrain_out, y_oftrainpred, color='m',label='Regression line without outliers')
ax1.set_title('Training Set')
ax1.set_xlabel('X')
ax1.set_ylabel('Y')
ax1.legend(loc=3)
ax2.scatter(x_otest, y_otest, label='Test set', color='r')
ax2.scatter(x_otrain, y_otrainpred, color='g',label='Regression line with outliers')
ax2.scatter(XTrain_out, y_oftrainpred, color='m',label='Regression line without outliers')
ax2.set_title('Test Set')
ax2.set_xlabel('X')
ax2.set_ylabel('Y')
ax2.legend(loc=3)
###Output
_____no_output_____
###Markdown
*Your answer here* It can be seen that the model fitted without the outliers follows the test data more closely than the model fitted with the outliers. Question 2.6 Calculate the $R^2$ score for the `OutlierOLSModel` and the `OutlierFreeSimpleModel` on the test set data. Which model produces a better $R^2$ score?
###Code
# Your code here
X_otest = sm.add_constant(x_otest)
y_otestpred = OutlierOLSModel.predict(X_otest)
y_oftestpred = OutlierFreeSimpleModel.predict(x_otest)
r2_outliers = r2_score(y_otest, y_otestpred)
r2_woutliers = r2_score(y_otest, y_oftestpred)
print('r2_outliers is', r2_outliers)
print('r2_outlierfree is', r2_woutliers)
print('The model without outliers produces a better R^2 score')
###Output
r2_outliers is 0.34085656043405654
r2_outlierfree is 0.4579491642913984
The model without outliers produces a better R^2 score
###Markdown
1. Try to train a neural network with TensorFlow 2 on the imdb_reviews dataset. Describe in the lesson comments what result you achieved with the network and what helped you improve its accuracy.2. Work with the TensorFlow 2 documentation and find useful commands that were not covered in the lesson (one such utility is sketched at the end of the training code below).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing import sequence
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
from keras.datasets import imdb
NUM_WORDS = 10000
(train_x, train_y), (test_x, test_y) = imdb.load_data()
train_x.shape
print(train_x[0])
train_y.shape
train_y[0]
print("Количество классов: ")
print(np.unique(train_y))
print("Количество слов: ")
print(len(np.unique(np.hstack(train_x))))
from matplotlib import pyplot
print("Длина обзора: ")
result = [len(x) for x in train_x]
print("Средняя длина %.2f слов со стандартным отклонением (%.2f)" % (np.mean(result), np.std(result)))
print("95 персентиль длины обзора: ", np.percentile(result, 95))
pyplot.boxplot(result)
pyplot.show()
test_x.shape
test_y.shape
index = imdb.get_word_index()
reverse_index = dict([(value, key) for (key, value) in index.items()])
decoded = " ".join( [reverse_index.get(i - 3, "#") for i in train_x[0]] )
print(decoded)
(train_x, train_y), (test_x, test_y) = imdb.load_data(num_words=NUM_WORDS)
MAX_WORDS = 610
train_x = sequence.pad_sequences(train_x, maxlen=MAX_WORDS)
test_x = sequence.pad_sequences(test_x, maxlen=MAX_WORDS)
# train_y = np.array(train_y).astype("float32")
# test_y = np.array(test_y).astype("float32")
model = keras.Sequential([
keras.layers.Embedding(NUM_WORDS, 32, input_length=MAX_WORDS),
keras.layers.Flatten(),
keras.layers.Dense(512, activation='sigmoid'),
keras.layers.Dropout(0.5),
keras.layers.Dense(256, activation='sigmoid'),
keras.layers.Dropout(0.5),
keras.layers.Dense(256, activation='sigmoid'),
keras.layers.Dropout(0.5),
keras.layers.Dense(128, activation='sigmoid'),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid')
])
print(model.summary())
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=['accuracy'])
model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=3, batch_size=32)
test_loss, test_acc = model.evaluate(test_x, test_y, verbose=1)
print('\nTest accuracy:', test_acc)
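# One useful utility not covered above (illustrative sketch): EarlyStopping stops training once
# the monitored validation metric stops improving and can restore the best weights seen so far.
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
# It would be passed to fit(), e.g.:
# model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=10, batch_size=32,
#           callbacks=[early_stopping])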
###Output
25000/25000 [==============================] - 3s 124us/sample - loss: 0.4030 - accuracy: 0.8748
Test accuracy: 0.87476
|
notebooks/.ipynb_checkpoints/2.0_clean_data-checkpoint.ipynb | ###Markdown
Clean Data This notebook intends to clean the raw DataFrame, producing one or more interim DataFrames that are ready to have their features engineered (next step). The cleaning steps that might be followed to clean the raw DataFrame are: 1. NaN 2. Features that have the same value in all rows 3. Duplicated features (identical to another existing column) 4. High-correlation features 5. Window selection (0-2) Table of Contents * [Library imports](import) * [Reading the data](leitura) * [Missing data (NaN)](nan) * [Repeated values](repet) * [Duplicated features](dup_feat) * [Blood test feature names](rename) * [Time window selection](win_select) * [Save clean data](clean) * [Conclusion](conclusion) Library Imports External Libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Internal Libraries
###Code
import sys
sys.path.insert(1, "../src/")
from clean import print_nan_count_by_feature, neighborhood_missing_data
from clean import drop_features_with_same_value_for_all_observations, plot_features_with_same_value_for_all_observations
from clean import drop_duplicated_features, plot_duplicated_features
from clean import rename_portion_of_columns
from clean import drop_patient_moved_to_icu_on_first_window, plot_patient_moved_to_icu_on_first_window, prepare_window
###Output
_____no_output_____
###Markdown
-----------------Back to the [Table of Contents](sumario) Reading the data Reads the data that will be used for the cleaning and for the future modeling. **```df : pd.DataFrame```** is the DataFrame that will receive the *raw* values downloaded from Kaggle.
###Code
# Read the raw data for this project from GitHub
df = pd.read_excel('https://github.com/fdrigui/covid19_icu_admission_prediction/raw/main/data/raw/Kaggle_Sirio_Libanes_ICU_Prediction.xlsx' )
# Display all columns instead of only the first 10 and last 10
pd.set_option('max_columns', df.shape[1])
df.head()
###Output
_____no_output_____
###Markdown
-----------------Back to the [Table of Contents](sumario) Missing data (NaN) For more details on the strategy used to eliminate the missing data (NaN), see the document [0.0_understanding_the_data.md](0.0_understanding_the_data.md), under the topic **Missing data (NaN)**. Are there missing values in the DataFrame? This question is important because many prediction models cannot work with NaN values. The line below counts every occurrence of ```NaN``` 'values' in **df**, and we see that the count is **223863** NaN.
###Code
print(f'Existem no DataFrame df {df.isna().sum().sum()} "valores" NaN')
###Output
Existem no DataFrame df 223863 "valores" NaN
###Markdown
Now that we know there are missing values (NaN), let's start handling them. Cleaning the NaN with the 'neighborhood_missing_data' function Using the ```neighborhood_missing_data``` function to eliminate the NaN. **```df_1_without_nan : pd.DataFrame```** is the DataFrame after removing the NaN with the ```neighborhood_missing_data``` function. After that, a count is performed to check whether missing data still remain (a hedged sketch of what this helper may do internally is included at the end of the code cell below).
###Code
# Using the neighborhood_missing_data function to eliminate the NaN
df_1_without_nan = neighborhood_missing_data(df, 'PATIENT_VISIT_IDENTIFIER')
# Checking how many values are still NaN
print(f'O total de NaN existentes no DataFrame df_1_without_nan é:{df_1_without_nan.isna().sum().sum()}')
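# Hedged sketch (an assumption, not the actual implementation in src/clean.py) of what
# neighborhood_missing_data may do: within each patient visit, propagate the nearest observed
# value backwards and forwards in time, so NaN only remains when a visit has no measurement at all.
def neighborhood_missing_data_sketch(df, id_column):
    filled = df.groupby(id_column, as_index=False).apply(
        lambda rows: rows.fillna(method='bfill').fillna(method='ffill'))
    return filled.reset_index(drop=True)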
###Output
O total de NaN existentes no DataFrame df_1_without_nan é:2025
###Markdown
A considerable reduction in the number of ```NaN``` can be observed, from **223863** down to **2025**; even so, we need to understand and eliminate the remaining missing data. The ```print_nan_count_by_feature``` function prints every column and shows how many ```NaN``` values each one has. Note that some columns have 5 NaN and others have 10. Since each visit has 5 time windows ```(0-2, 2-4, 4-6, 6-12 and above 12)```, the columns with 5 missing values correspond to 1 patient and the columns with 10 missing values correspond to 2 patients.
###Code
print_nan_count_by_feature(df_1_without_nan)
###Output
Count - Feature Name
--------------------
0000 - PATIENT_VISIT_IDENTIFIER
0000 - AGE_ABOVE65
0000 - AGE_PERCENTIL
0000 - GENDER
0005 - DISEASE GROUPING 1
0005 - DISEASE GROUPING 2
0005 - DISEASE GROUPING 3
0005 - DISEASE GROUPING 4
0005 - DISEASE GROUPING 5
0005 - DISEASE GROUPING 6
0005 - HTN
0005 - IMMUNOCOMPROMISED
0005 - OTHER
0010 - ALBUMIN_MEDIAN
0010 - ALBUMIN_MEAN
0010 - ALBUMIN_MIN
0010 - ALBUMIN_MAX
0010 - ALBUMIN_DIFF
0010 - BE_ARTERIAL_MEDIAN
0010 - BE_ARTERIAL_MEAN
0010 - BE_ARTERIAL_MIN
0010 - BE_ARTERIAL_MAX
0010 - BE_ARTERIAL_DIFF
0010 - BE_VENOUS_MEDIAN
0010 - BE_VENOUS_MEAN
0010 - BE_VENOUS_MIN
0010 - BE_VENOUS_MAX
0010 - BE_VENOUS_DIFF
0010 - BIC_ARTERIAL_MEDIAN
0010 - BIC_ARTERIAL_MEAN
0010 - BIC_ARTERIAL_MIN
0010 - BIC_ARTERIAL_MAX
0010 - BIC_ARTERIAL_DIFF
0010 - BIC_VENOUS_MEDIAN
0010 - BIC_VENOUS_MEAN
0010 - BIC_VENOUS_MIN
0010 - BIC_VENOUS_MAX
0010 - BIC_VENOUS_DIFF
0010 - BILLIRUBIN_MEDIAN
0010 - BILLIRUBIN_MEAN
0010 - BILLIRUBIN_MIN
0010 - BILLIRUBIN_MAX
0010 - BILLIRUBIN_DIFF
0010 - BLAST_MEDIAN
0010 - BLAST_MEAN
0010 - BLAST_MIN
0010 - BLAST_MAX
0010 - BLAST_DIFF
0010 - CALCIUM_MEDIAN
0010 - CALCIUM_MEAN
0010 - CALCIUM_MIN
0010 - CALCIUM_MAX
0010 - CALCIUM_DIFF
0010 - CREATININ_MEDIAN
0010 - CREATININ_MEAN
0010 - CREATININ_MIN
0010 - CREATININ_MAX
0010 - CREATININ_DIFF
0010 - FFA_MEDIAN
0010 - FFA_MEAN
0010 - FFA_MIN
0010 - FFA_MAX
0010 - FFA_DIFF
0010 - GGT_MEDIAN
0010 - GGT_MEAN
0010 - GGT_MIN
0010 - GGT_MAX
0010 - GGT_DIFF
0010 - GLUCOSE_MEDIAN
0010 - GLUCOSE_MEAN
0010 - GLUCOSE_MIN
0010 - GLUCOSE_MAX
0010 - GLUCOSE_DIFF
0010 - HEMATOCRITE_MEDIAN
0010 - HEMATOCRITE_MEAN
0010 - HEMATOCRITE_MIN
0010 - HEMATOCRITE_MAX
0010 - HEMATOCRITE_DIFF
0010 - HEMOGLOBIN_MEDIAN
0010 - HEMOGLOBIN_MEAN
0010 - HEMOGLOBIN_MIN
0010 - HEMOGLOBIN_MAX
0010 - HEMOGLOBIN_DIFF
0010 - INR_MEDIAN
0010 - INR_MEAN
0010 - INR_MIN
0010 - INR_MAX
0010 - INR_DIFF
0010 - LACTATE_MEDIAN
0010 - LACTATE_MEAN
0010 - LACTATE_MIN
0010 - LACTATE_MAX
0010 - LACTATE_DIFF
0010 - LEUKOCYTES_MEDIAN
0010 - LEUKOCYTES_MEAN
0010 - LEUKOCYTES_MIN
0010 - LEUKOCYTES_MAX
0010 - LEUKOCYTES_DIFF
0010 - LINFOCITOS_MEDIAN
0010 - LINFOCITOS_MEAN
0010 - LINFOCITOS_MIN
0010 - LINFOCITOS_MAX
0010 - LINFOCITOS_DIFF
0010 - NEUTROPHILES_MEDIAN
0010 - NEUTROPHILES_MEAN
0010 - NEUTROPHILES_MIN
0010 - NEUTROPHILES_MAX
0010 - NEUTROPHILES_DIFF
0010 - P02_ARTERIAL_MEDIAN
0010 - P02_ARTERIAL_MEAN
0010 - P02_ARTERIAL_MIN
0010 - P02_ARTERIAL_MAX
0010 - P02_ARTERIAL_DIFF
0010 - P02_VENOUS_MEDIAN
0010 - P02_VENOUS_MEAN
0010 - P02_VENOUS_MIN
0010 - P02_VENOUS_MAX
0010 - P02_VENOUS_DIFF
0010 - PC02_ARTERIAL_MEDIAN
0010 - PC02_ARTERIAL_MEAN
0010 - PC02_ARTERIAL_MIN
0010 - PC02_ARTERIAL_MAX
0010 - PC02_ARTERIAL_DIFF
0010 - PC02_VENOUS_MEDIAN
0010 - PC02_VENOUS_MEAN
0010 - PC02_VENOUS_MIN
0010 - PC02_VENOUS_MAX
0010 - PC02_VENOUS_DIFF
0010 - PCR_MEDIAN
0010 - PCR_MEAN
0010 - PCR_MIN
0010 - PCR_MAX
0010 - PCR_DIFF
0010 - PH_ARTERIAL_MEDIAN
0010 - PH_ARTERIAL_MEAN
0010 - PH_ARTERIAL_MIN
0010 - PH_ARTERIAL_MAX
0010 - PH_ARTERIAL_DIFF
0010 - PH_VENOUS_MEDIAN
0010 - PH_VENOUS_MEAN
0010 - PH_VENOUS_MIN
0010 - PH_VENOUS_MAX
0010 - PH_VENOUS_DIFF
0010 - PLATELETS_MEDIAN
0010 - PLATELETS_MEAN
0010 - PLATELETS_MIN
0010 - PLATELETS_MAX
0010 - PLATELETS_DIFF
0010 - POTASSIUM_MEDIAN
0010 - POTASSIUM_MEAN
0010 - POTASSIUM_MIN
0010 - POTASSIUM_MAX
0010 - POTASSIUM_DIFF
0010 - SAT02_ARTERIAL_MEDIAN
0010 - SAT02_ARTERIAL_MEAN
0010 - SAT02_ARTERIAL_MIN
0010 - SAT02_ARTERIAL_MAX
0010 - SAT02_ARTERIAL_DIFF
0010 - SAT02_VENOUS_MEDIAN
0010 - SAT02_VENOUS_MEAN
0010 - SAT02_VENOUS_MIN
0010 - SAT02_VENOUS_MAX
0010 - SAT02_VENOUS_DIFF
0010 - SODIUM_MEDIAN
0010 - SODIUM_MEAN
0010 - SODIUM_MIN
0010 - SODIUM_MAX
0010 - SODIUM_DIFF
0010 - TGO_MEDIAN
0010 - TGO_MEAN
0010 - TGO_MIN
0010 - TGO_MAX
0010 - TGO_DIFF
0010 - TGP_MEDIAN
0010 - TGP_MEAN
0010 - TGP_MIN
0010 - TGP_MAX
0010 - TGP_DIFF
0010 - TTPA_MEDIAN
0010 - TTPA_MEAN
0010 - TTPA_MIN
0010 - TTPA_MAX
0010 - TTPA_DIFF
0010 - UREA_MEDIAN
0010 - UREA_MEAN
0010 - UREA_MIN
0010 - UREA_MAX
0010 - UREA_DIFF
0010 - DIMER_MEDIAN
0010 - DIMER_MEAN
0010 - DIMER_MIN
0010 - DIMER_MAX
0010 - DIMER_DIFF
0005 - BLOODPRESSURE_DIASTOLIC_MEAN
0005 - BLOODPRESSURE_SISTOLIC_MEAN
0005 - HEART_RATE_MEAN
0005 - RESPIRATORY_RATE_MEAN
0005 - TEMPERATURE_MEAN
0005 - OXYGEN_SATURATION_MEAN
0005 - BLOODPRESSURE_DIASTOLIC_MEDIAN
0005 - BLOODPRESSURE_SISTOLIC_MEDIAN
0005 - HEART_RATE_MEDIAN
0005 - RESPIRATORY_RATE_MEDIAN
0005 - TEMPERATURE_MEDIAN
0005 - OXYGEN_SATURATION_MEDIAN
0005 - BLOODPRESSURE_DIASTOLIC_MIN
0005 - BLOODPRESSURE_SISTOLIC_MIN
0005 - HEART_RATE_MIN
0005 - RESPIRATORY_RATE_MIN
0005 - TEMPERATURE_MIN
0005 - OXYGEN_SATURATION_MIN
0005 - BLOODPRESSURE_DIASTOLIC_MAX
0005 - BLOODPRESSURE_SISTOLIC_MAX
0005 - HEART_RATE_MAX
0005 - RESPIRATORY_RATE_MAX
0005 - TEMPERATURE_MAX
0005 - OXYGEN_SATURATION_MAX
0005 - BLOODPRESSURE_DIASTOLIC_DIFF
0005 - BLOODPRESSURE_SISTOLIC_DIFF
0005 - HEART_RATE_DIFF
0005 - RESPIRATORY_RATE_DIFF
0005 - TEMPERATURE_DIFF
0005 - OXYGEN_SATURATION_DIFF
0005 - BLOODPRESSURE_DIASTOLIC_DIFF_REL
0005 - BLOODPRESSURE_SISTOLIC_DIFF_REL
0005 - HEART_RATE_DIFF_REL
0005 - RESPIRATORY_RATE_DIFF_REL
0005 - TEMPERATURE_DIFF_REL
0005 - OXYGEN_SATURATION_DIFF_REL
0000 - WINDOW
0000 - ICU
###Markdown
A feature with 10 NaN values was picked at random, in this case ```UREA_MEDIAN```, and a query was run to select the NaN rows of that column. We can see that there are 2 visits with NaN, ID **199** and ID **287**, which confirms the hypothesis that the counts are multiples of the 5 windows.
###Code
df_1_without_nan.query('UREA_MEDIAN.isnull()', engine='python')
###Output
_____no_output_____
###Markdown
**```df_2_without_nan : pd.DataFrame```** is the ```df_1_without_nan``` variable after dropping visit IDs 199 and 287, which contained NaN values
###Code
df_2_without_nan = df_1_without_nan.drop(df_1_without_nan.query('UREA_MEDIAN.isnull()', engine='python').index)
print(f'A quantidade de valores NaN após a remocção dos dois IDs de visita é: {df_2_without_nan.isna().sum().sum()}')
###Output
A quantidade de valores NaN após a remocção dos dois IDs de visita é: 0
###Markdown
-----------------Back to the [Table of Contents](sumario) Repeated values This step looks for *features* (columns) with a single value repeated across all rows (observations). A feature with a single repeated value in every row is unnecessary for the model, since there is no variation; if it cannot help the prediction model, it should be removed from the DataFrame. The ```plot_features_with_same_value_for_all_observations``` function searches for columns in this condition and prints their names. There are 36 results, all related to **blood tests**, and note that the total number of repeated columns is exactly 36. This is because blood is collected only once a day, so there is no second test from which to compute the **difference** between the previous and the current measurement; in other words, the blood test results with the **_DIFF** suffix can be removed from the DataFrame with no negative impact on the modeling (an illustrative check for constant columns is included at the end of the code cell below).
###Code
plot_features_with_same_value_for_all_observations(df_2_without_nan)
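# Illustrative check (not part of src/clean): a column carries no information when it has a single
# distinct value across all rows, which nunique() makes easy to detect.
constant_columns = [col for col in df_2_without_nan.columns
                    if df_2_without_nan[col].nunique(dropna=False) <= 1]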
###Output
Nome das Colunas:
--------------------
ALBUMIN_DIFF
BE_ARTERIAL_DIFF
BE_VENOUS_DIFF
BIC_ARTERIAL_DIFF
BIC_VENOUS_DIFF
BILLIRUBIN_DIFF
BLAST_DIFF
CALCIUM_DIFF
CREATININ_DIFF
FFA_DIFF
GGT_DIFF
GLUCOSE_DIFF
HEMATOCRITE_DIFF
HEMOGLOBIN_DIFF
INR_DIFF
LACTATE_DIFF
LEUKOCYTES_DIFF
LINFOCITOS_DIFF
NEUTROPHILES_DIFF
P02_ARTERIAL_DIFF
P02_VENOUS_DIFF
PC02_ARTERIAL_DIFF
PC02_VENOUS_DIFF
PCR_DIFF
PH_ARTERIAL_DIFF
PH_VENOUS_DIFF
PLATELETS_DIFF
POTASSIUM_DIFF
SAT02_ARTERIAL_DIFF
SAT02_VENOUS_DIFF
SODIUM_DIFF
TGO_DIFF
TGP_DIFF
TTPA_DIFF
UREA_DIFF
DIMER_DIFF
--------------------
Total: 36
###Markdown
The ```drop_features_with_same_value_for_all_observations``` function removes from the DataFrame the columns flagged by the previous cell. **```df_3_without_same_value_col : pd.DataFrame```** is the DataFrame after removing the columns with repeated values.
###Code
df_3_without_same_value_col = drop_features_with_same_value_for_all_observations(df_2_without_nan, False)
###Output
Total of dropped columns: 36
###Markdown
Confirmation that all the columns with repeated values were successfully removed.
###Code
plot_features_with_same_value_for_all_observations(df_3_without_same_value_col)
###Output
Nome das Colunas:
--------------------
--------------------
Total: 0
###Markdown
-----------------Back to the [Table of Contents](sumario) Duplicated features This step looks for *features* (columns) whose values are identical to one or more other columns. Duplicated features are unnecessary for the model, since they bring no new information, and should be removed from the DataFrame. The ```plot_duplicated_features``` function shows which *features* are duplicated; when two or more *features* are identical, the function keeps the first and returns all the others, indicating which ones should be removed (an illustrative check for duplicated columns is included at the end of the code cell below).
###Code
plot_duplicated_features(df_3_without_same_value_col)
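# Illustrative check (not part of src/clean): transposing and calling duplicated() flags columns
# whose values are identical to an earlier column (the first occurrence is kept).
duplicated_columns = df_3_without_same_value_col.columns[
    df_3_without_same_value_col.T.duplicated()].tolist()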
###Output
Nome das Colunas:
--------------------
ALBUMIN_MEAN
ALBUMIN_MIN
ALBUMIN_MAX
BE_ARTERIAL_MEAN
BE_ARTERIAL_MIN
BE_ARTERIAL_MAX
BE_VENOUS_MEAN
BE_VENOUS_MIN
BE_VENOUS_MAX
BIC_ARTERIAL_MEAN
BIC_ARTERIAL_MIN
BIC_ARTERIAL_MAX
BIC_VENOUS_MEAN
BIC_VENOUS_MIN
BIC_VENOUS_MAX
BILLIRUBIN_MEAN
BILLIRUBIN_MIN
BILLIRUBIN_MAX
BLAST_MEAN
BLAST_MIN
BLAST_MAX
CALCIUM_MEAN
CALCIUM_MIN
CALCIUM_MAX
CREATININ_MEAN
CREATININ_MIN
CREATININ_MAX
FFA_MEAN
FFA_MIN
FFA_MAX
GGT_MEAN
GGT_MIN
GGT_MAX
GLUCOSE_MEAN
GLUCOSE_MIN
GLUCOSE_MAX
HEMATOCRITE_MEAN
HEMATOCRITE_MIN
HEMATOCRITE_MAX
HEMOGLOBIN_MEAN
HEMOGLOBIN_MIN
HEMOGLOBIN_MAX
INR_MEAN
INR_MIN
INR_MAX
LACTATE_MEAN
LACTATE_MIN
LACTATE_MAX
LEUKOCYTES_MEAN
LEUKOCYTES_MIN
LEUKOCYTES_MAX
LINFOCITOS_MEAN
LINFOCITOS_MIN
LINFOCITOS_MAX
NEUTROPHILES_MEAN
NEUTROPHILES_MIN
NEUTROPHILES_MAX
P02_ARTERIAL_MEAN
P02_ARTERIAL_MIN
P02_ARTERIAL_MAX
P02_VENOUS_MEAN
P02_VENOUS_MIN
P02_VENOUS_MAX
PC02_ARTERIAL_MEAN
PC02_ARTERIAL_MIN
PC02_ARTERIAL_MAX
PC02_VENOUS_MEAN
PC02_VENOUS_MIN
PC02_VENOUS_MAX
PCR_MEAN
PCR_MIN
PCR_MAX
PH_ARTERIAL_MEAN
PH_ARTERIAL_MIN
PH_ARTERIAL_MAX
PH_VENOUS_MEAN
PH_VENOUS_MIN
PH_VENOUS_MAX
PLATELETS_MEAN
PLATELETS_MIN
PLATELETS_MAX
POTASSIUM_MEAN
POTASSIUM_MIN
POTASSIUM_MAX
SAT02_ARTERIAL_MEAN
SAT02_ARTERIAL_MIN
SAT02_ARTERIAL_MAX
SAT02_VENOUS_MEAN
SAT02_VENOUS_MIN
SAT02_VENOUS_MAX
SODIUM_MEAN
SODIUM_MIN
SODIUM_MAX
TGO_MEAN
TGO_MIN
TGO_MAX
TGP_MEAN
TGP_MIN
TGP_MAX
TTPA_MEAN
TTPA_MIN
TTPA_MAX
UREA_MEAN
UREA_MIN
UREA_MAX
DIMER_MEAN
DIMER_MIN
DIMER_MAX
--------------------
Total: 108
###Markdown
There are therefore **108** duplicated *features* in this DataFrame. Note that **108** is a multiple of **36**: because the blood tests have a single value per patient visit (a single blood measurement per patient), statistics such as ```MIN```, ```MAX```, ```MEAN``` and ```MEDIAN``` end up identical.
###Code
single_value_array = [5]
print(f'Média: {np.mean(single_value_array)}\nMediana: {np.median(single_value_array)}\nMínimo: {np.min(single_value_array)}\nMáximo: {np.max(single_value_array)}')
###Output
Média: 5.0
Mediana: 5.0
Mínimo: 5
Máximo: 5
###Markdown
The duplicated columns therefore need to be removed; for that, the ```drop_duplicated_features``` function was created, which eliminates the duplicated features from the DataFrame. **```df_4_without_duplicated_features : pd.DataFrame```** is the DataFrame containing the values of ```df_3_without_same_value_col``` with the duplicated features removed.
###Code
df_4_without_duplicated_features = drop_duplicated_features(df_3_without_same_value_col, False)
###Output
Total dropped columns: 108
###Markdown
Confirming that these columns were removed:
###Code
plot_duplicated_features(df_4_without_duplicated_features)
###Output
Nome das Colunas:
--------------------
--------------------
Total: 0
###Markdown
-----------------Back to the [Table of Contents](sumario) Blood test feature names The naming issue The *raw data* contained 36 blood tests, such as ```[ALBUMIN, BE_ARTERIAL, BE_VENOUS, BIC_ARTERIAL]```. Each test had 5 associated features: ```[MEDIAN, MEAN, MIN, MAX and DIFF]```. Since the blood test is performed only once, the ```DIFF``` value is irrelevant, as are the ```MAX```, ```MIN``` and ```MEAN``` values, which are equal to ```MEDIAN```. After all the processing done so far, 36 blood test features remain in the DataFrame, but still carrying the ```_MEDIAN``` suffix. The problem is that it is inappropriate to refer to the median of a set containing a single value; this could confuse someone evaluating the model, so the ```_MEDIAN``` suffix will be removed from the names of the blood test features (a hedged sketch of this renaming helper is included at the end of the code cell below).
###Code
df_5_renamed = rename_portion_of_columns(df_4_without_duplicated_features, 13, (13+36), '_MEDIAN', '')
df_5_renamed.columns[13: (13+36)]
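# Hedged sketch (an assumption, not the actual implementation in src/clean.py) of what
# rename_portion_of_columns may do: replace a substring in the names of a slice of columns.
def rename_portion_of_columns_sketch(df, start, stop, old, new):
    mapping = {col: col.replace(old, new) for col in df.columns[start:stop]}
    return df.rename(columns=mapping)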
###Output
_____no_output_____
###Markdown
-----------------Back to the [Table of Contents](sumario) Time window selection For more details on the time window selection strategy, see the document [0.0_understanding_the_data.md](0.0_understanding_the_data.md), under the topic **The sooner, the better**. Removing direct ICU admissions Patients who entered the hospital and were sent directly to the ICU (i.e., with ICU == 1 in the 0 - 2h time window) will be removed from the DataFrame. **```df_6_icu_on_first_window: pd.DataFrame```** is the variable that will hold the data after removing the records with ICU == 1 in the first time window. For this, the ```drop_patient_moved_to_icu_on_first_window``` function will be used.
###Code
plot_patient_moved_to_icu_on_first_window(df_5_renamed)
df_6_icu_on_first_window = drop_patient_moved_to_icu_on_first_window(df_5_renamed)
###Output
Total dropped visit id: 32
###Markdown
Confirming that the cases of patients admitted directly to the ICU were removed
###Code
plot_patient_moved_to_icu_on_first_window(df_6_icu_on_first_window)
###Output
PATIENT_VISIT_IDENTIFIER:
--------------------
--------------------
Total: 0
###Markdown
Selecting the window with future ICU information Filter from ```df_6_icu_on_first_window``` the records related to the first time window of each visit (0 - 2h). The ```ICU``` label also needs to be adjusted so that it indicates 1 whenever the patient was sent to the ICU in any of the later time windows. **```df_7_cleaned: pd.DataFrame```** is the variable that will hold the clean data after the window selection. The window selection is done by grouping the DataFrame by ```PATIENT_VISIT_IDENTIFIER``` and applying the ```prepare_window``` function, whose purpose is precisely to select the first window and set ICU to 1 in that window whenever it is 1 in any other window of the same patient (a hedged sketch of this helper is included at the end of the code cell below).
###Code
df_7_cleaned = df_6_icu_on_first_window.groupby("PATIENT_VISIT_IDENTIFIER", as_index=False).apply(prepare_window)\
.reset_index().drop(['level_0', 'level_1'], axis=1)
df_7_cleaned.head()
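# Hedged sketch (an assumption, not the actual implementation in src/clean.py) of what
# prepare_window may do for each visit: keep only the 0-2h window and label it with ICU=1
# whenever the patient reached the ICU in any later window.
def prepare_window_sketch(rows):
    if np.any(rows["ICU"]):
        rows.loc[rows["WINDOW"] == "0-2", "ICU"] = 1
    return rows.loc[rows["WINDOW"] == "0-2"]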
###Output
_____no_output_____
###Markdown
-----------------Back to the [Table of Contents](sumario) Save clean data After cleaning the data, it is important to save this file in the project so that it is not necessary to go through all these steps again to produce a clean DataFrame. The ```df_7_cleaned``` DataFrame will be saved as csv in the ```../data/interim/``` folder, because the data may still change during the feature analysis step.
###Code
df_7_cleaned.to_csv('../data/interim/df_7_cleaned.csv')
###Output
_____no_output_____
###Markdown
-----------------Back to the [Table of Contents](sumario) Conclusion -----------------Back to the [Table of Contents](sumario)
###Code
df_7_cleaned.shape
###Output
_____no_output_____ |
exercises/.ipynb_checkpoints/feature_sets-checkpoint.ipynb | ###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Feature Sets **Learning Objective:** Create a minimal set of features that performs just as well as a more complex feature set So far, we've thrown all of our features into the model. Models with fewer features use fewer resources and are easier to maintain. Let's see if we can build a model on a minimal set of housing features that will perform equally as well as one that uses all the features in the data set. SetupAs before, let's load and prepare the California housing data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Task 1: Develop a Good Feature Set**What's the best performance you can get with just 2 or 3 features?**A **correlation matrix** shows pairwise correlations, both for each feature compared to the target and for each feature compared to other features.Here, correlation is defined as the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient). You don't have to understand the mathematical details for this exercise.Correlation values have the following meanings: * `-1.0`: perfect negative correlation * `0.0`: no correlation * `1.0`: perfect positive correlation
###Code
correlation_dataframe = training_examples.copy()
correlation_dataframe["target"] = training_targets["median_house_value"]
correlation_dataframe.corr()
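# One possible starting point suggested by the matrix above (an illustration, not the official
# solution): "median_income" correlates most strongly with the target, and "latitude" adds
# location information that is largely independent of income.
minimal_features = ["median_income", "latitude"]
minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]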
###Output
_____no_output_____
###Markdown
Features that have strong positive or negative correlations with the target will add information to our model. We can use the correlation matrix to find such strongly correlated features.We'd also like to have features that aren't so strongly correlated with each other, so that they add independent information.Use this information to try removing features. You can also try developing additional synthetic features, such as ratios of two raw features.For convenience, we've included the training code from the previous exercise.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=construct_feature_columns(training_examples),
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
return linear_regressor
###Output
_____no_output_____
###Markdown
Spend 5 minutes searching for a good set of features and training parameters. Then check the solution to see what we chose. Don't forget that different features may require different learning parameters.
###Code
#
# Your code here: add your features of choice as a list of quoted strings.
#
minimal_features = [
]
assert minimal_features, "You must select at least one feature!"
minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]
#
# Don't forget to adjust these parameters.
#
train_model(
learning_rate=0.001,
steps=500,
batch_size=5,
training_examples=minimal_training_examples,
training_targets=training_targets,
validation_examples=minimal_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 228.83
period 01 : 221.17
period 02 : 213.61
period 03 : 206.16
period 04 : 198.84
period 05 : 191.66
period 06 : 184.63
period 07 : 177.78
period 08 : 171.13
period 09 : 164.70
Model training finished.
###Markdown
SolutionClick below for a solution.
###Code
minimal_features = [
"median_income",
"latitude",
]
minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]
_ = train_model(
learning_rate=0.01,
steps=500,
batch_size=5,
training_examples=minimal_training_examples,
training_targets=training_targets,
validation_examples=minimal_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 165.75
period 01 : 126.77
period 02 : 116.79
period 03 : 115.96
period 04 : 115.84
period 05 : 114.99
period 06 : 114.37
period 07 : 115.16
period 08 : 113.80
period 09 : 112.85
Model training finished.
###Markdown
Task 2: Make Better Use of LatitudePlotting `latitude` vs. `median_house_value` shows that there really isn't a linear relationship there.Instead, there are a couple of peaks, which roughly correspond to Los Angeles and San Francisco.
###Code
plt.scatter(training_examples["latitude"], training_targets["median_house_value"])
###Output
_____no_output_____
###Markdown
**Try creating some synthetic features that do a better job with latitude.**For example, you could have a feature that maps `latitude` to a value of `|latitude - 38|`, and call this `distance_from_san_francisco`.Or you could break the space into 10 different buckets. `latitude_32_to_33`, `latitude_33_to_34`, etc., each showing a value of `1.0` if `latitude` is within that bucket range and a value of `0.0` otherwise.Use the correlation matrix to help guide development, and then add them to your model if you find something that looks good.What's the best validation performance you can get?
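As a minimal sketch of the first idea (the feature name is illustrative, not prescribed by the exercise), the distance feature can be checked against the target before training on it:
###Code
# Sketch only: absolute distance from latitude 38 as a single synthetic feature,
# compared with the target via the correlation matrix. Reuses the frames defined above.
dist_check = training_examples.copy()
dist_check["distance_from_san_francisco"] = (dist_check["latitude"] - 38).abs()
dist_check["target"] = training_targets["median_house_value"]
dist_check.corr()["target"]
###Output
_____no_output_____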
###Code
#
# YOUR CODE HERE: Train on a new data set that includes synthetic features based on latitude.
#
###Output
_____no_output_____
###Markdown
SolutionClick below for a solution. Aside from `latitude`, we'll also keep `median_income`, to compare with the previous results.We decided to bucketize the latitude. This is fairly straightforward in Pandas using `Series.apply`.
###Code
def select_and_transform_features(source_df):
LATITUDE_RANGES = zip(range(32, 44), range(33, 45))
selected_examples = pd.DataFrame()
selected_examples["median_income"] = source_df["median_income"]
for r in LATITUDE_RANGES:
selected_examples["latitude_%d_to_%d" % r] = source_df["latitude"].apply(
lambda l: 1.0 if l >= r[0] and l < r[1] else 0.0)
return selected_examples
selected_training_examples = select_and_transform_features(training_examples)
selected_validation_examples = select_and_transform_features(validation_examples)
_ = train_model(
learning_rate=0.01,
steps=500,
batch_size=5,
training_examples=selected_training_examples,
training_targets=training_targets,
validation_examples=selected_validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 227.63
period 01 : 217.43
period 02 : 207.31
period 03 : 197.31
period 04 : 187.41
period 05 : 177.65
period 06 : 168.05
period 07 : 158.65
period 08 : 149.45
period 09 : 140.52
Model training finished.
|
test_m1_madg.ipynb | ###Markdown
Dependencies
###Code
!nvidia-smi
!jupyter notebook list
%env CUDA_VISIBLE_DEVICES=1
%matplotlib inline
%load_ext autoreload
%autoreload 2
import time
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from models import tiramisu
from models import tiramisu_bilinear
from models import tiramisu_m1
from models import unet
from datasets import deepglobe
from datasets import madg
from datasets import joint_transforms
import utils.imgs
import utils.training as train_utils
# tensorboard
from torch.utils.tensorboard import SummaryWriter
###Output
_____no_output_____
###Markdown
Dataset- Download the DeepGlobe dataset from https://competitions.codalab.org/competitions/18467. Place it in datasets/deepglobe/dataset/train,test,valid- Download the Massachusetts Road Dataset from https://www.cs.toronto.edu/~vmnih/data/. Combine the training, validation, and test sets, process with `crop_dataset.ipynb` and place the output in datasets/maroads/dataset/map,sat- Run `combine_datasets.ipynb` to combine the two and output to datasets/madg
###Code
run = "expM.1.madg.4"
DEEPGLOBE_PATH = Path('datasets/', 'deepglobe/dataset')
MADG_PATH = Path('datasets/', 'madg/dataset')
RESULTS_PATH = Path('.results/')
WEIGHTS_PATH = Path('.weights/') / run
RUNS_PATH = Path('.runs/')
RESULTS_PATH.mkdir(exist_ok=True)
WEIGHTS_PATH.mkdir(exist_ok=True)
RUNS_PATH.mkdir(exist_ok=True)
batch_size = 1 # TODO: Should be `MAX_BATCH_PER_CARD * torch.cuda.device_count()` (which in this case is 1 assuming max of 1 batch per card)
# resize = joint_transforms.JointRandomCrop((300, 300))
# normalize = transforms.Normalize(mean=deepglobe.mean, std=deepglobe.std)
# normalize = transforms.Normalize(mean=madg.mean, std=madg.std)
train_joint_transformer = transforms.Compose([
# resize,
joint_transforms.JointRandomHorizontalFlip(),
joint_transforms.JointRandomVerticalFlip(),
joint_transforms.JointRandomRotate()
])
train_dset = madg.Madg(MADG_PATH, 'train',
joint_transform=train_joint_transformer,
transform=transforms.Compose([
# transforms.ColorJitter(brightness=.4,contrast=.4,saturation=.4),
transforms.ToTensor(),
# normalize,
]))
train_loader = torch.utils.data.DataLoader(
train_dset, batch_size=batch_size, shuffle=True)
resize_joint_transformer = None
val_dset = madg.Madg(
MADG_PATH, 'valid', joint_transform=resize_joint_transformer,
transform=transforms.Compose([
transforms.ToTensor(),
# normalize
]))
val_loader = torch.utils.data.DataLoader(
val_dset, batch_size=batch_size, shuffle=False)
test_dset = madg.Madg(
MADG_PATH, 'test', joint_transform=resize_joint_transformer,
transform=transforms.Compose([
transforms.ToTensor(),
# normalize
]))
test_loader = torch.utils.data.DataLoader(
test_dset, batch_size=batch_size, shuffle=False)
print("Train: %d" %len(train_loader.dataset))
print("Val: %d" %len(val_loader.dataset.imgs))
print("Test: %d" %len(test_loader.dataset.imgs))
# print("Classes: %d" % len(train_loader.dataset.classes))
print((iter(train_loader)))
inputs, targets = next(iter(train_loader))
print("Inputs: ", inputs.size())
print("Targets: ", targets.size())
utils.imgs.view_image(inputs[0])
# utils.imgs.view_image(targets[0])
utils.imgs.view_annotated(targets[0])
print(inputs[0].max(),inputs[0].min())
print(targets[0].max(),targets[0].min())
###Output
Train: 4771
Val: 1182
Test: 1182
<torch.utils.data.dataloader._SingleProcessDataLoaderIter object at 0x7f42100832d0>
Inputs: torch.Size([1, 3, 1024, 1024])
Targets: torch.Size([1, 1024, 1024])
###Markdown
Train
###Code
LR = 1e-4
LR_DECAY = 0.995
DECAY_EVERY_N_EPOCHS = 1
N_EPOCHS = 1000
torch.cuda.manual_seed(0)
from utils.bceloss import dice_bce_loss
from loss.BCESSIM import BCESSIM
model = tiramisu_m1.FCDenseNetSmall(n_classes=1, dropout_rate=0.2).cuda()
optimizer = torch.optim.RMSprop(model.parameters(), lr=LR, weight_decay=1e-4)
# criterion = dice_bce_loss()
criterion = BCESSIM()
# summary(model, input_size=inputs[0].shape)
start_epoch = 0
!ls -l {WEIGHTS_PATH/'latest.th'}
# start_epoch = train_utils.load_weights(model, (WEIGHTS_PATH/'latest.th')) + 1
start_epoch = train_utils.load_weights(model, (WEIGHTS_PATH/'weights-212-0.194-0.528.pth')) + 1
print("Starting from epoch", start_epoch)
# Writer will output to ./runs/ directory by default
writer = SummaryWriter(log_dir=(RUNS_PATH.as_posix() + "/" + "run" + str(run) + "/"))
from torch.autograd import Variable
debug_max_size=None
# break # errors. Used to stop "run all"
for epoch in range(start_epoch, N_EPOCHS+1):
since = time.time()
# ### Train ###
# trn_loss, trn_err = train_utils.train(
# model, train_loader, optimizer, criterion, epoch, debug_max_size=debug_max_size)
# print('Epoch {:d}\nTrain - Loss: {:.4f}, Acc: {:.4f}'.format(
# epoch, trn_loss, 1-trn_err))
# time_elapsed = time.time() - since
# print('Train Time {:.0f}m {:.0f}s'.format(
# time_elapsed // 60, time_elapsed % 60))
# ### Validation ###
# val_loss, val_err, val_iou = train_utils.test(model, val_loader, criterion, epoch, debug_max_size=debug_max_size)
# print('Tes - Loss: {:.4f} | Jacc: {:.4f} , Err: {:.4f}'.format(val_loss, val_iou, val_err))
# time_elapsed = time.time() - since
# print('Total Time {:.0f}m {:.0f}s\n'.format(
# time_elapsed // 60, time_elapsed % 60))
### Test ###
test_loss, test_err, test_iou = train_utils.test(model, test_loader, criterion, epoch, debug_max_size=debug_max_size)
print('Tes - Loss: {:.4f} | Jacc: {:.4f} , Err: {:.4f}'.format(test_loss, test_iou, test_err))
time_elapsed = time.time() - since
print('Total Time {:.0f}m {:.0f}s\n'.format(
time_elapsed // 60, time_elapsed % 60))
# ### Checkpoint ###
# train_utils.save_weights(model, epoch, val_loss, val_err, WEIGHTS_PATH=WEIGHTS_PATH)
# # Log on tensorboard
# writer.add_scalar('Loss/train', trn_loss, epoch)
# writer.add_scalar('Loss/val', val_loss, epoch)
# writer.add_scalar('Error/train', trn_err, epoch)
# writer.add_scalar('Error/val', val_err, epoch)
# writer.add_scalar('Accuracy/train', 1-trn_err, epoch)
# writer.add_scalar('Accuracy/val', val_iou, epoch)
# for param_group in optimizer.param_groups:
# writer.add_scalar('Params/learning_rage', param_group['lr'], epoch)
# # writer.add_scalar('params/learning_rate', optimizer.lr, epoch)
# # writer.add_scalar('Params/no_optim', no_optim, epoch)
# # log a sample image
# # sample_images = [0,1030,281,623,636,655,1028,1353,2222,2224]
# sample_images = [0,1030,281,623,636,655,1028,1000,1001,1002]
# for i in sample_images:
# inputs, targets, pred, loss, err, iou = train_utils.get_sample_predictions(model, val_loader, n=1, criterion=criterion, idx=i)
# raw = model(inputs.cuda()).cpu()
# img = torchvision.utils.make_grid(torch.stack([
# inputs[0],
# targets[0].unsqueeze(0).expand(3,-1,-1).float(),
# pred[0].unsqueeze(0).expand(3,-1,-1).float(),
# raw[0].expand(3,-1,-1).float()
# ]), normalize=True)
# writer.add_image('sample_pred/val/' + str(i), img, epoch)
# start_epoch = epoch
###Output
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py:2479: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
###Markdown
Debug
###Code
stats = train_utils.view_sample_predictions(model, val_loader, n=1, criterion=criterion)
print("loss", "error", "jaccard")
print(stats)
# !pip install torchsummary
from torchsummary import summary
summary(model, input_size=inputs[0].shape)
###Output
_____no_output_____ |
EVDemandModel_EVScenarios/RunningModel/supplement_scenarios.ipynb | ###Markdown
Supplementary Paper ScenariosThis notebook runs the scenarios used in the paper's Supplementary Information.Developed by Siobhan Powell, 2021. Updated in 2022.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import boto3
import numpy as np
import pickle
import time
from speech_classes import SPEECh
from speech_classes import SPEEChGeneralConfiguration
from speech_classes import LoadProfile
from speech_classes import Plotting
from speech_classes import DataSetConfigurations
###Output
_____no_output_____
###Markdown
Number of Home Chargers in each scenario:
###Code
home_chargers = pd.DataFrame(np.zeros((4, 11)), index=['UniversalHome', 'HighHome', 'LowHome_HighWork', 'LowHome_LowWork'], columns=['CA', 'OR', 'WA', 'ID', 'MT', 'WY', 'NV', 'UT', 'CO', 'NM', 'AZ'])
for scenario_name in ['UniversalHome', 'HighHome', 'LowHome_HighWork', 'LowHome_LowWork']:
print(scenario_name)
for state in ['CA', 'OR', 'WA', 'ID', 'MT', 'WY', 'NV', 'UT', 'CO', 'NM', 'AZ']:
data = DataSetConfigurations(data_set='CP')
speech = SPEECh(data=data, penetration_level=1.0, outside_california=True, states=[state])
speech.pa_ih(scenario=scenario_name)
speech.pg_multiple_regions(region_type='State', region_value_list=[state])
tot = 0
for key in speech.p_abe_data.index:
if 'home_l2' in key:
tot += speech.p_abe_data.loc[key, 'p_abe']
home_chargers.loc[scenario_name, state] = tot * speech.num_evs
home_chargers
home_chargers['Total'] = home_chargers.sum(axis=1)
home_chargers['Total']
###Output
_____no_output_____
###Markdown
Large Battery Case
###Code
def run_100p_wecc_largebattery(scenario_name, remove_timers, utility_region, save_string, date, tz_aware=True):
for weekday_string in ['weekday', 'weekend']:
wecc_tot_evs = 0
state_list = ['CA', 'OR', 'WA', 'ID', 'MT', 'WY', 'NV', 'UT', 'CO', 'NM', 'AZ']
time_zones = {'CA':0, 'OR':0, 'WA':0, 'ID':1, 'MT':1, 'WY':1, 'NV':0, 'UT':1, 'CO':1, 'NM':1, 'AZ':1}
state_results = {}
total_load_dict = {key:np.zeros((1440,)) for key in ['Residential L1', 'Residential L2', 'MUD L2', 'Workplace L2', 'Public L2', 'Public DCFC']}
total_load_segments = np.zeros((1440, 6))
for state in state_list:
print('----------'+state+'----------')
data = DataSetConfigurations(data_set='CP')
speech = SPEECh(data=data, penetration_level=1.0, outside_california=True, states=[state])
speech.pa_ih(scenario=scenario_name)
# Large Batteries Only:
speech.pb_i(scenario='Equal')
speech.pg_multiple_regions(region_type='State', region_value_list=[state])
config = SPEEChGeneralConfiguration(speech, remove_timers=remove_timers, utility_region=utility_region)
config.run_all(verbose=False, weekday=weekday_string)
state_results[state] = {'Speech':speech, 'Config':config}
if tz_aware:
if time_zones[state] == 0:
for key in total_load_dict.keys():
total_load_dict[key] += config.total_load_dict[key]
total_load_segments += config.total_load_segments
else:
# Put into California time
for key in total_load_dict.keys():
tmp = np.copy(config.total_load_dict[key])
tmp2 = np.zeros((1440,))
tmp2[np.arange(0, 1440-60)] = tmp[np.arange(60, 1440)]
tmp2[np.arange(1440-60, 1440)] = tmp[np.arange(0, 60)]
total_load_dict[key] += np.copy(tmp2)
tmp = np.copy(config.total_load_segments)
tmp2 = np.zeros((1440,6))
tmp2[np.arange(0, 1440-60), :] = tmp[np.arange(60, 1440), :]
tmp2[np.arange(1440-60, 1440), :] = tmp[np.arange(0, 60), :]
total_load_segments += np.copy(tmp2)
else:
for key in total_load_dict.keys():
total_load_dict[key] += config.total_load_dict[key]
total_load_segments += config.total_load_segments
print('Total EVs: ', config.num_total_drivers)
wecc_tot_evs += config.num_total_drivers
if weekday_string == 'weekday':
pd.DataFrame(config.total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_'+str(state)+'_'+date+'.csv')
else:
pd.DataFrame(config.total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_weekend_'+str(state)+'_'+date+'.csv')
if weekday_string == 'weekday':
pd.DataFrame(total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_WECC_'+date+'.csv')
else:
pd.DataFrame(total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_weekend_WECC_'+date+'.csv')
print('Total EVs in WECC: ', wecc_tot_evs)
return
date = '20220506'
run_100p_wecc_largebattery('UniversalHome', True, 'PGE', 'UniversalHome_LargeBatteryOnly_100p_NoTimers', date)
run_100p_wecc_largebattery('HighHome', True, 'PGE', 'HighHome_LargeBatteryOnly_100p_NoTimers', date)
run_100p_wecc_largebattery('LowHome_HighWork', True, 'PGE', 'LowHome_HighWork_LargeBatteryOnly_100p_NoTimers', date)
run_100p_wecc_largebattery('LowHome_LowWork', True, 'PGE', 'LowHome_LowWork_LargeBatteryOnly_100p_NoTimers', date)
###Output
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
###Markdown
Fast Charging Case
###Code
data = DataSetConfigurations(data_set='CP')
speech = SPEECh(data=data, penetration_level=1.0, outside_california=True, states=['OR'])
speech.pa_ih(scenario='HighHome')
speech.pg_multiple_regions(region_type='State', region_value_list=['OR'])
config = SPEEChGeneralConfiguration(speech, remove_timers=True, utility_region='PGE')
config.run_all(verbose=False, weekday='weekday')
plots = Plotting(speech, config)
plots.plot_single(config.total_load_segments, config.total_load_dict, save_str=None)
import copy
data = DataSetConfigurations(data_set='CP')
speech = SPEECh(data=data, penetration_level=1.0, outside_california=True, states=['OR'])
speech.pa_ih(scenario='HighHome')
speech.pg_multiple_regions(region_type='State', region_value_list=['OR'])
config = SPEEChGeneralConfiguration(speech, remove_timers=True, utility_region='PGE')
for g in range(data.ng):
for weekday in ['weekday', 'weekend']:
config.group_configs[g].segment_session_numbers[weekday]['public_l3'] += config.group_configs[g].segment_session_numbers[weekday]['public_l2']
config.group_configs[g].segment_session_numbers[weekday]['public_l2'] = 0
if config.group_configs[g].segment_session_numbers[weekday]['public_l3'] > 0:
if 'public_l3' not in config.group_configs[g].segment_gmms[weekday].keys():
# copy gmm from another group that has most fast charging within the same energy bin
if g <= 20:
target_g = 11
elif g <= 34:
target_g = 27
elif g <= 48:
target_g = 41
elif g <= 68:
target_g = 55
elif g <= 93:
target_g = 71
elif g <= 114:
target_g = 94
else:
target_g = 115
config.group_configs[g].segment_gmms[weekday]['public_l3'] = copy.deepcopy(config.group_configs[target_g].segment_gmms[weekday]['public_l3'])
config.run_all(verbose=False, weekday='weekday')
plots = Plotting(speech, config)
plots.plot_single(config.total_load_segments, config.total_load_dict, save_str=None)
def run_100p_wecc_fastcharging(scenario_name, remove_timers, utility_region, save_string, date, tz_aware=True):
for weekday_string in ['weekday', 'weekend']:
wecc_tot_evs = 0
state_list = ['CA', 'OR', 'WA', 'ID', 'MT', 'WY', 'NV', 'UT', 'CO', 'NM', 'AZ']
time_zones = {'CA':0, 'OR':0, 'WA':0, 'ID':1, 'MT':1, 'WY':1, 'NV':0, 'UT':1, 'CO':1, 'NM':1, 'AZ':1}
state_results = {}
total_load_dict = {key:np.zeros((1440,)) for key in ['Residential L1', 'Residential L2', 'MUD L2', 'Workplace L2', 'Public L2', 'Public DCFC']}
total_load_segments = np.zeros((1440, 6))
for state in state_list:
print('----------'+state+'----------')
data = DataSetConfigurations(data_set='CP')
speech = SPEECh(data=data, penetration_level=1.0, outside_california=True, states=[state])
speech.pa_ih(scenario=scenario_name)
speech.pg_multiple_regions(region_type='State', region_value_list=[state])
config = SPEEChGeneralConfiguration(speech, remove_timers=remove_timers, utility_region=utility_region)
# switch public l2 to fast charging
for g in range(data.ng):
for weekday in ['weekday', 'weekend']:
config.group_configs[g].segment_session_numbers[weekday]['public_l3'] += config.group_configs[g].segment_session_numbers[weekday]['public_l2']
config.group_configs[g].segment_session_numbers[weekday]['public_l2'] = 0
if config.group_configs[g].segment_session_numbers[weekday]['public_l3'] > 0:
if 'public_l3' not in config.group_configs[g].segment_gmms[weekday].keys():
# copy gmm from another group that has most fast charging within the same energy bin
if g <= 20:
target_g = 11
elif g <= 34:
target_g = 27
elif g <= 48:
target_g = 41
elif g <= 68:
target_g = 55
elif g <= 93:
target_g = 71
elif g <= 114:
target_g = 94
else:
target_g = 115
config.group_configs[g].segment_gmms[weekday]['public_l3'] = copy.deepcopy(config.group_configs[target_g].segment_gmms[weekday]['public_l3'])
config.run_all(verbose=False, weekday=weekday_string)
state_results[state] = {'Speech':speech, 'Config':config}
if tz_aware:
if time_zones[state] == 0:
for key in total_load_dict.keys():
total_load_dict[key] += config.total_load_dict[key]
total_load_segments += config.total_load_segments
else:
# Put into California time
for key in total_load_dict.keys():
tmp = np.copy(config.total_load_dict[key])
tmp2 = np.zeros((1440,))
tmp2[np.arange(0, 1440-60)] = tmp[np.arange(60, 1440)]
tmp2[np.arange(1440-60, 1440)] = tmp[np.arange(0, 60)]
total_load_dict[key] += np.copy(tmp2)
tmp = np.copy(config.total_load_segments)
tmp2 = np.zeros((1440,6))
tmp2[np.arange(0, 1440-60), :] = tmp[np.arange(60, 1440), :]
tmp2[np.arange(1440-60, 1440), :] = tmp[np.arange(0, 60), :]
total_load_segments += np.copy(tmp2)
else:
for key in total_load_dict.keys():
total_load_dict[key] += config.total_load_dict[key]
total_load_segments += config.total_load_segments
print('Total EVs: ', config.num_total_drivers)
wecc_tot_evs += config.num_total_drivers
if weekday_string == 'weekday':
pd.DataFrame(config.total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_'+str(state)+'_'+date+'.csv')
else:
pd.DataFrame(config.total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_weekend_'+str(state)+'_'+date+'.csv')
if weekday_string == 'weekday':
pd.DataFrame(total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_WECC_'+date+'.csv')
else:
pd.DataFrame(total_load_dict).to_csv('Outputs/Supplement/'+save_string+'_weekend_WECC_'+date+'.csv')
print('Total EVs in WECC: ', wecc_tot_evs)
return
date = '20220506'
run_100p_wecc_fastcharging('UniversalHome', True, 'PGE', 'UniversalHome_FastCharging_100p_NoTimers', date)
run_100p_wecc_fastcharging('HighHome', True, 'PGE', 'HighHome_FastCharging_100p_NoTimers', date)
run_100p_wecc_fastcharging('LowHome_HighWork', True, 'PGE', 'LowHome_HighWork_FastCharging_100p_NoTimers', date)
run_100p_wecc_fastcharging('LowHome_LowWork', True, 'PGE', 'LowHome_LowWork_FastCharging_100p_NoTimers', date)
###Output
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
----------CA----------
Total EVs: 24234832
----------OR----------
Total EVs: 2923172
----------WA----------
Total EVs: 5280998
----------ID----------
Total EVs: 1269502
----------MT----------
Total EVs: 863108
----------WY----------
Total EVs: 478703
----------NV----------
Total EVs: 1879178
----------UT----------
Total EVs: 1951222
----------CO----------
Total EVs: 3977177
----------NM----------
Total EVs: 1407939
----------AZ----------
Total EVs: 4374941
Total EVs in WECC: 48640772
|
seminar_skript/Regression_Techniques.ipynb | ###Markdown
Linear RegressionIn the following cell, data are first loaded that serve to illustrate linear regression.A linear model is then fitted using the `LinearRegression` class from `sklearn.linear_model`. The prediction (i.e. the equation of the regression line) follows from the coefficients as $y = a + bX$.
###Code
from sklearn.linear_model import LinearRegression
import numpy as np
import matplotlib.pyplot as plt
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
model = LinearRegression()
model.fit(X, y)
y_hat = model.coef_ * X + model.intercept_
###Output
_____no_output_____
###Markdown
Why is a capital letter always used for $\mathbf{X}$ and a lower-case letter for $\mathbf{y}$?The matrix of variables X is written with a capital letter because, in matrix notation, matrices are always denoted by capital letters; vectors - such as the dependent variable y - are denoted by lower-case letters.
###Code
f = plt.figure(figsize=(4, 4), dpi=120)
plt.title(label='regression line, residues', fontdict={'fontsize':20})
axes = f.add_subplot(111)
axes.plot(X, y, 'ro', X, y_hat)
#axes = plt.gca()
axes.set_ylim([np.min(y)-5, np.max(y) +5])
for i in range(len(y)):
plt.plot((X[i, 0], X[i, 0]), (y[i], y_hat[i]))
axes.set_xlabel('X')
axes.set_ylabel('Y')
axes.annotate('$y$', xy=(X[-3, 0], y[-3, 0]), xycoords='data',
xytext=(X[-3, 0] - 1.5, y[-3, 0] + 1), textcoords='data',
size = 20, arrowprops=dict(arrowstyle="->"))
axes.annotate('$\hat{y}$', xy=(X[-3, 0], y_hat[-3, 0]), xycoords='data',
xytext=(X[-3, 0] - 1.5, y_hat[-3, 0] + 1), textcoords='data',
size = 20, arrowprops=dict(arrowstyle="->"))
axes.annotate('$\hat{y} = a + bX$', xy=(X[3, 0] + 0.5, model.coef_ * (X[3, 0] + 0.5) + model.intercept_),
xycoords='data', xytext=(X[3, 0] + 0.5, 55), textcoords='data',
horizontalalignment = 'center',
size = 20, arrowprops=dict(arrowstyle="->"))
plt.show()
#plt.close('all')
###Output
_____no_output_____
###Markdown
The plot shows the computed regression line as well as the deviations (the errors) of the actual measured values from this line. These deviations are called __residuals__ because they are the part of the measured values that is "left over", i.e. that cannot be explained by the model. Predicted variables are usually written with a hat, as in $\hat{y}$. Analytical derivation of the parameters of linear regressionIn general, the extremum of a quadratic function can be found by setting its first derivative equal to $0$. The first derivative gives the slope of the function (in physics, for example, acceleration is the first derivative of velocity). At the minimum of the function the slope is $0$. Note that a quadratic function can only ever have a single extremum.Below, this is illustrated for the quadratic function $f(x) = (x-1)^2$. The derivative $2x-2$ is also drawn. At the minimum of the function the first derivative equals $0$ (the point where the graph of the function, the graph of the first derivative, and the red horizontal line intersect).
###Code
Image('../images/first_derivative.png', height= 280, width=280)
# <img alt="taken from homepage of 20 newsgroups" caption="The different categories of the newsgroup posts"
# id="20_newsgroups" src="../images/first_derivative.png" width="320" height="320">
###Output
_____no_output_____
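###Markdown
A quick numerical check of the figure above (a small sketch, not part of the original notebook): the minimum of $f(x) = (x-1)^2$ lies where the derivative $2x - 2$ crosses zero.
###Code
# Minimal check: locate the minimum of f(x) = (x - 1)**2 on a grid and confirm
# that the derivative 2x - 2 is (approximately) zero there.
import numpy as np
xs = np.linspace(-2, 4, 601)
f_vals = (xs - 1) ** 2
x_min = xs[np.argmin(f_vals)]
print(x_min, 2 * x_min - 2)   # approximately 1.0 and 0.0
###Output
_____no_output_____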
###Markdown
The parameters of a linear regression can be computed analytically. To do so, the squared error $(y_i-\hat{y}_i)^2$ is summed over all measured values. This sum is differentiated with respect to the parameters and set equal to $0$. This yields the point at which the quadratic function has zero slope (the first derivative is the slope). Because a quadratic function has a minimum as its only point of zero slope, we thereby obtain the parameters at the minimum of our quadratic error function. derivative of the error term $(y - \hat{y})^2$:* for $\hat{y}$ we can also write $a + b\cdot x$, which is the prediction given by the regression line (the equation of the line):$$\sum_i^{n}(y_i - \hat{y_i})^2 = \sum_i^{n}[y_i - (a + b\cdot x_i)]^{2}$$* we differentiate this error function with respect to $a$ and set this first derivative equal to $0$ (using the chain rule):\begin{align*}\frac{\delta \sum_i^{n}(y_i - \hat{y_i})^2}{\delta a} = -2\sum_i^{n}y_i + 2b\sum_i^{n}x_i + 2na =& 0\\2na =& 2\sum_i^{n}y_i - 2b\sum_i^{n}x_i\\ a =& \frac{2\sum_i^{n}y_i}{2n} - \frac{2b\sum_i^{n}x_i}{2n}\end{align*}* the sum over all $x_i$ divided by $n$ -- the number of observations -- gives the mean $\bar{x}$, and likewise for $\bar{y}$:$$a = \bar{y} - b\bar{x}$$* the solution for $b$ follows analogously; here we substitute the result above for $a$ and obtain:$$ b = \frac{\frac{1}{n}\sum_i^n(x_i - \bar{x})(y_i - \bar{y})}{\frac{1}{n}\sum_i^n (x_i - \bar{x})^2} = \frac{\text{cov}_{xy}}{\text{var}_x}$$* In short, the formula is: the covariance of the two variables $x$ and $y$ divided by the variance of $x$.Below we demonstrate that the derived formulas, applied in Python, yield the same parameter estimates as the `LinearRegression` class from `sklearn.linear_model`. This is simply meant to show that all of this is easy to compute and requires no complicated algorithms.
###Code
# we can easily verify these results
print(f'the parameter b is the coefficient of the linear model {model.coef_}')
print(f'the parameter a is called the intercept of the model because it indicates\n where the regression line intercepts the y-axis at x=0 {model.intercept_}')
cov_xy =(1/X.shape[0]) * np.dot((X - np.mean(X)).T,y - np.mean(y))[0][0]
var_x = (1/X.shape[0]) * np.dot((X - np.mean(X)).T,X - np.mean(X))[0][0]
b = cov_xy/var_x
a = np.mean(y)-b*np.mean(X)
print(f'\nour self-computed b parameter is: {b}')
print(f'our self-computed a parameter is: {a}')
###Output
the parameter b is the coefficient of the linear model [[8.07912445]]
the parameter a is called the intercept of the model because it indicates
where the regression line intercepts the y-axis at x=0 [-8.49032154]
our self-computed b parameter is: 8.079124453577005
our self-computed a parameter is: -8.490321540681798
###Markdown
multivariate case: more than one x variableFor multivariate linear regression, the notation can be condensed using matrices. It may be worth briefly revisiting matrix multiplication for this. \begin{align*} y_1&=a+b_1\cdot x_{11}+b_2\cdot x_{21}+\cdots + b_p\cdot x_{p1}\\ y_2&=a+b_1\cdot x_{12}+b_2\cdot x_{22}+\cdots + b_p\cdot x_{p2}\\ \ldots& \ldots\\ y_i&=a+b_1\cdot x_{1i}+b_2\cdot x_{2i}+\cdots + b_p\cdot x_{pi}\\\end{align*}\begin{equation*} \begin{bmatrix} y_1\\ y_2\\ . \\ . \\ . \\ y_i \end{bmatrix} = a+ \begin{bmatrix} x_{11} & x_{21} & x_{31} & \ldots & x_{p1}\\ x_{12} & x_{22} & x_{32} & \ldots & x_{p2}\\ \ldots&\ldots&\ldots&\ldots&\ldots\\ \ldots&\ldots&\ldots&\ldots&\ldots\\ \ldots&\ldots&\ldots&\ldots&\ldots\\ x_{1i} & x_{2i} & x_{3i} & \ldots & x_{pi}\\ \end{bmatrix} \cdot \begin{bmatrix} b_1\\ b_2\\ .\\ .\\ .\\ b_p \end{bmatrix}\end{equation*} The constant intercept term ($a$) can be absorbed into the parameter vector $\mathbf{b}$ by adding a column of ones to $\mathbf{X}$. The notation then becomes very compact and the intercept $a$ is no longer written out explicitly: \begin{equation*} \begin{bmatrix} y_1\\ y_2\\ . \\ . \\ . \\ y_i \end{bmatrix} = \begin{bmatrix} 1& x_{11} & x_{21} & x_{31} & \ldots & x_{p1}\\ 1 & x_{12} & x_{22} & x_{32} & \ldots & x_{p2}\\ &\ldots&\ldots&\ldots&\ldots&\ldots\\ &\ldots&\ldots&\ldots&\ldots&\ldots\\ 1& x_{1i} & x_{2i} & x_{3i} & \ldots & x_{pi} \end{bmatrix} \cdot \begin{bmatrix} a\\ b_1\\ b_2\\ .\\ .\\ b_p \end{bmatrix} \end{equation*} In matrix notation we can now simply write:$\mathbf{y} = \mathbf{X}\mathbf{b}$ derivation of $\mathbf{\text{b}}$ for the matrix notationNext, the computation of the multivariate regression parameters is laid out in matrix notation. Conceptually this is no different from the univariate case. The formula is derived only so that we can demonstrate that the explicit computation in Python agrees with the result from the sklearn class `LinearRegression`. * we expand the error term: \begin{align*} \text{min}=&(\mathbf{y}-\hat{\mathbf{y}})^2=(\mathbf{y}-\mathbf{X}\mathbf{b})'(\mathbf{y}-\mathbf{X}\mathbf{b})=\\ &(\mathbf{y}'-\mathbf{b}'\mathbf{X}')(\mathbf{y}-\mathbf{X}\mathbf{b})=\\ &\mathbf{y}'\mathbf{y}-\mathbf{b}'\mathbf{X}'\mathbf{y}-\mathbf{y}' \mathbf{X}\mathbf{b}+\mathbf{b}'\mathbf{X}'\mathbf{X}\mathbf{b}=\\ &\mathbf{y}'\mathbf{y}-2\mathbf{b}'\mathbf{X}'\mathbf{y}+\mathbf{b}'\mathbf{X}' \mathbf{X}\mathbf{b}\\ \end{align*} * derivative of the error term with respect to $\mathbf{b}$* we set the result equal to zero and solve for $\mathbf{b}$ \begin{align*} \frac{\delta}{\delta \mathbf{b}}=&-2\mathbf{X}'\mathbf{y}+2\mathbf{X}'\mathbf{X}\mathbf{b}=0\\ 2\mathbf{X}'\mathbf{X}\mathbf{b}=&2\mathbf{X}'\mathbf{y}\\ \mathbf{b}=&(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}\quad \end{align*} This requires inverting the cross product of the variable matrix, $(\mathbf{X}'\mathbf{X})^{-1}$. For a large number of variables, matrix inversion is computationally expensive and can introduce numerical inaccuracies. A lot of research has gone into algorithms that make the inversion faster and more stable, and error messages often trace back to this computation step. 
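As a hedged aside (not part of the original text): in practice the normal equations are usually solved without forming the explicit inverse; routines such as `np.linalg.solve` or `np.linalg.lstsq` are faster and numerically more stable. A small sketch on synthetic data:
###Code
# Sketch on synthetic data: three numerically different routes to the same least-squares fit.
import numpy as np
rng = np.random.RandomState(0)
X_demo = np.c_[np.ones(50), rng.normal(size=(50, 3))]            # column of ones + 3 variables
y_demo = X_demo @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.normal(scale=0.1, size=50)
b_inv = np.linalg.inv(X_demo.T @ X_demo) @ X_demo.T @ y_demo     # textbook formula with explicit inverse
b_solve = np.linalg.solve(X_demo.T @ X_demo, X_demo.T @ y_demo)  # solve the normal equations directly
b_lstsq = np.linalg.lstsq(X_demo, y_demo, rcond=None)[0]         # SVD-based least squares
print(b_inv)
print(b_solve)
print(b_lstsq)
###Output
_____no_output_____
###Markdown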
Polynomial regression as an example for more than one variableTo illustrate multivariate linear regression with a simple example, we introduce quadratic regression (a special case of multivariate regression). A new variable is created by squaring the existing univariate variable x. The convenient part is that the multivariate regression can still be displayed nicely in two dimensions. $y = a + b_1 x + b_2 x^2$Note here:* we now have two variables and can therefore apply our formula in matrix notation* more variables hopefully lead to a better model* because of the quadratic term, the resulting regression function is no longer a straight line.__The term "linear" in linear regression means that the function is linear in the parameters $a, \mathbf{b}_\mathbf{1}, \mathbf{b}_\mathbf{2}$. The same parameter $\mathbf{b_1}$ applies to all values of a variable $\mathbf{x_1}$.It does not mean that the regression function is a straight line!__Below we add the additional variable by squaring the existing variable and again fit the linear model from `sklearn.linear_model`.
###Code
from numpy.linalg import inv
# polynomial
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
# quadratic regression: add the squared term as a second variable
X = np.c_[X, X**2]
# the x (small x) is just for plotting purpose
x = np.arange(-1, 12, 0.05).reshape((-1, 1))
x = np.c_[x, x**2]
model.fit(X, y)
y_hat = np.dot(x , model.coef_.T) + model.intercept_
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='quadratic regression', fontdict={'fontsize':20})
axes = f.add_subplot(111)
axes.plot(X[:,0], y, 'ro', x[:,0], y_hat.reshape((-1,)))
#axes = plt.gca()
axes.set_ylim([np.min(y)-5, np.max(y) +5])
###Output
_____no_output_____
###Markdown
Now we compute the parameters of the multiple linear regression using the derived formulas. To do so, we add a column of ones for the intercept to the existing variables $x$ and $x^2$. `np.dot` computes the dot product of two variables. To compute the cross product of $\mathbf{X}$, one of the two matrices has to be transposed, which is done with `.T`. `inv` inverts the cross product.`coefs = np.dot(np.dot(inv(np.dot(X_intercept.T,X_intercept)),X_intercept.T),y)` is equivalent to:\begin{equation*}\mathbf{b}=(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}\end{equation*}
###Code
# again we can compare the parameters of the model with those resulting from
# our derived equation:
# b=(X'X)^{-1} X'y
from numpy.linalg import inv
# first we have to add the intercept into our X-Variable; we rename it X_intercept
X_intercept = np.c_[np.ones(X.shape[0]), X]
coefs = np.dot(np.dot(inv(np.dot(X_intercept.T,X_intercept)),X_intercept.T),y)
print(f'the parameter b is the coefficient of the linear model {model.coef_}')
print(f'the parameter a is called the intercept of the model because it indicates\n where the regression line intercepts the y-axis at x=0 {model.intercept_}')
print(f'our coefs already include the intercept: {coefs}')
###Output
the parameter b is the coefficient of the linear model [[-12.14930516 1.68570247]]
the parameter a is called the intercept of the model because it indicates
where the regression line intercepts the y-axis at x=0 [35.33794262]
our coefs already include the intercept: [[ 35.33794262]
[-12.14930516]
[ 1.68570247]]
###Markdown
OverfittingWe now apply this procedure to further higher-order terms. Graphically one can see that the fit of the model to the data keeps improving, while the prediction for __new data points__ is likely to be very poor. The polynomial has built in wiggles and absurd curves in many places. This is a first example of __"overfitting"__. A 'perfect' fit is obtained when there are as many parameters (10 slope coefficients + intercept) as data points. The important points to note here:* the fit to our empirical y-values gets better* at the same time, the regression line starts behaving strangely* the predictions made by the regression line in between the empirical y-values are grossly wrong: this is an example of __overfitting__
###Code
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
# polynomial terms up to 9th degree: almost as many parameters as data points
X = np.c_[X, X**2, X**3, X**4, X**5, X**6, X**7, X**8, X**9]
x = np.arange(-1, 12, 0.05).reshape((-1, 1))
x = np.c_[x, x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9]
model.fit(X, y)
y_hat = np.dot(x , model.coef_.T) + model.intercept_
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='regression line for polynome of 9th degree', fontdict={'fontsize':20})
axes = f.add_subplot(111)
axes.plot(X[:,0], y, 'ro', x[:,0], y_hat.reshape((-1,)))
#axes = plt.gca()
axes.set_ylim([np.min(y)-10, np.max(y) +10])
###Output
_____no_output_____
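###Markdown
To see the overfitting in numbers rather than only in the plot, here is a small sketch (on synthetic data, not part of the original notebook) comparing training and validation error for increasing polynomial degree:
###Code
# Sketch only: train/validation split on synthetic quadratic data; the training error
# keeps falling with the degree while the validation error eventually blows up.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
rng = np.random.RandomState(0)
x_syn = np.sort(rng.uniform(0, 10, 30)).reshape(-1, 1)
y_syn = 35 - 12 * x_syn + 1.7 * x_syn**2 + rng.normal(0, 5, size=(30, 1))
x_tr, x_va, y_tr, y_va = train_test_split(x_syn, y_syn, test_size=0.3, random_state=1)
for degree in (1, 2, 5, 9):
    poly = PolynomialFeatures(degree)
    reg = LinearRegression().fit(poly.fit_transform(x_tr), y_tr)
    err_tr = mean_squared_error(y_tr, reg.predict(poly.transform(x_tr)))
    err_va = mean_squared_error(y_va, reg.predict(poly.transform(x_va)))
    print(f"degree {degree}: train MSE {err_tr:.1f}, validation MSE {err_va:.1f}")
###Output
_____no_output_____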
###Markdown
perfect fit: as many variables as data samplesA perfect fit is possible, as is demonstrated next. We have as many variables (terms derived from x) as observations (data points), so for each data point there is a variable to accommodate it.__Note__ that the perfect fit is achieved with 10 variables + intercept: the intercept is also a parameter, and in this case the number of observations $n$ equals the number of fitted parameters $p$, i.e. $p=n$.
###Code
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
# polynomial terms up to 10th degree: as many parameters (10 coefficients + intercept) as data points
X = np.c_[X, X**2, X**3, X**4, X**5, X**6, X**7, X**8, X**9, X**10]
x = np.arange(-1, 12, 0.05).reshape((-1, 1))
x = np.c_[x, x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9, x**10]
model.fit(X, y)
y_hat = np.dot(x , model.coef_.T) + model.intercept_
print(f'the intercept and the coefficients are: {model.intercept_}, {model.coef_}')
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='regression line for polynome of 10th degree', fontdict={'fontsize':20})
axes = f.add_subplot(111)
axes.plot(X[:,0], y, 'ro', x[:,0], y_hat.reshape((-1,)))
#axes = plt.gca()
axes.set_ylim([np.min(y)-10, np.max(y) +20])
###Output
_____no_output_____
###Markdown
What happens if we have more variables than data points?If there are more parameters than data points, infinitely many solutions exist and the problem no longer has a unique solution. In the past, the inversion of the cross product of the variables $\mathbf{X}'\mathbf{X}$ simply failed. Nowadays there are numerical approximation methods that still return results - albeit very inaccurate ones - and that make it possible to invert even singular matrices. `numpy` uses the so-called LU decomposition for this.One way to see in python that the solution is erroneous is to use the `scipy.linalg.solve` package and solve for the matrix S that satisfies $(\mathbf{X}'\mathbf{X})^{-1} \mathbf{S} = \mathbf{I}$. $\mathbf{I}$ is called the eye matrix, with 1s on the diagonal and zeros otherwise:$$\mathbf{I}=\left[\begin{array}{ccc} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1\end{array}\right]$$The crucial line in the code below is:`S = solve(inv(np.dot(X.T, X)), np.eye(13))`It says: give me the matrix $\mathbf{S}$ that, multiplied with $(\mathbf{X}'\mathbf{X})^{-1}$, yields the matrix $\mathbf{I}$.For our case of more variables than observations we are warned that the result may not be accurate. With older mathematics or statistics software this step is not possible at all.
###Code
warnings.filterwarnings("default")
from numpy.linalg import inv
from scipy.linalg import solve
model = LinearRegression()
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
# underdetermined, ill-posed: infinitely many solutions
X = np.c_[X, X**2, X**3, X**4, X**5, X**6, X**7, X**8, X**9, X**10, X**11, X**12, X**13]
# this should give at least a warning, because matrix inversion as done above is not possible
# any more, due to singular covariance matrix [X'X]
model.fit(X, y)
#y_hat = np.dot(x , model.coef_.T) + model.intercept_
S = solve(inv(np.dot(X.T, X)), np.eye(13))
###Output
/home/martin/miniconda3/lib/python3.6/site-packages/ipykernel_launcher.py:15: LinAlgWarning: Ill-conditioned matrix (rcond=3.8573e-21): result may not be accurate.
from ipykernel import kernelapp as app
###Markdown
statistical package RThe statistical programming language R does not issue a warning. It simply computes only as many coefficients (the intercept is also a coefficient) as is possible. All remaining coefficients are `NA`.
###Code
warnings.filterwarnings("ignore")
Image("../images/R_inverse_example.png")
# <img alt="taken from homepage of 20 newsgroups" caption="The different categories of the newsgroup posts" id="20_newsgroups" src="../images/R_inverse_example.png" width="640" height="640">
###Output
_____no_output_____
###Markdown
Dealing with overfittingAs we have seen, classical linear regression tends to 'overfitting' as soon as there are few data points and several coefficients have to be estimated. One remedy for this problem is to make the coefficients $b_1, b_2, b_3, \ldots$ smaller. This can be achieved by letting the regression error grow when the coefficients grow. Shrinking the coefficients then becomes an effective way to reach the minimum of the error function, which implicitly prevents 'overfitting'.Parameters can now only become very large if doing so reduces the error substantially at the same time.Below, a penalty term for large parameters is introduced. In the case of ridge regression, the squared coefficients enter the error function. The weighting factor $\lambda$ sets the strength of the penalty and is an additional parameter for which -- depending on the data set -- an optimal value has to be found. Ridge regressionRemember this formula:\begin{equation*}\sum_i^{n}(y_i - \hat{y_i})^2 = \sum_i^{n}[y_i - (a + b\cdot x_i)]^{2}\end{equation*}To make the error term bigger, we could simply add $\lambda\cdot b^2$ to the error:\begin{equation*}\sum_i^{n}(y_i - \hat{y_i})^2 + \lambda b^2= \sum_i^{n}[y_i - (a + b\cdot x_i)]^{2}+ \lambda b^2\end{equation*}The parameter $\lambda$ is for scaling the amount of shrinkage.The two expressions \begin{equation}\sum_i^{n}[y_i - (a + b\cdot x_i)]^{2}\label{eq:fehler}\end{equation} and \begin{equation}\lambda b^2\label{eq:ridge_error}\end{equation} act like antagonists. The coefficient $b$ may only become large if it manages to shrink $\eqref{eq:fehler}$ so much that the gain in $\eqref{eq:fehler}$ outweighs the penalty in $\eqref{eq:ridge_error}$.For two variables we can write:\begin{equation*}\sum_i^{n}(y_i - \hat{y_i})^2 + \lambda b_1^2 + \lambda b_2^2= \sum_i^{n}[y_i - (a + b_1\cdot x_{i1} + b_2\cdot x_{i2})]^{2}+ \lambda b_1^2 + \lambda b_2^2\end{equation*}And in matrix notation for an arbitrary number of variables:\begin{align*} \text{min}=&(\mathbf{y}-\hat{\mathbf{y}})^2 + \lambda \mathbf{b}^2=(\mathbf{y}-\mathbf{X}\mathbf{b})'(\mathbf{y}-\mathbf{X}\mathbf{b}) + \lambda \mathbf{b}'\mathbf{b}\end{align*} Interestingly, an exact analytical solution exists for this case as well. However, we had absorbed the intercept coefficient $a$ into $\mathbf{b}$ and added the extra column of ones to $\mathbf{X}$. If we now compute $\lambda \mathbf{b}'\mathbf{b}$, the squared penalty on the parameter vector, we would also penalize $a$. The role of $a$, however, is to set the vertical position of the regression function (the point where the function crosses the y-axis).The intercept $a$ can be dropped from the equation if the variables are centered beforehand (means $\bar{x} = 0$ and $\bar{y} = 0$). Then $a$ vanishes on its own when we plug the centered means into the equation for $a$:\begin{equation*}a=\bar{y} - b\bar{x} = 0 - b\cdot 0 = 0\end{equation*}Now $a$ no longer needs to be considered and the solution for $\mathbf{b}$ becomes:\begin{equation*}\hat{\mathbf{b}} = (\mathbf{X}'\mathbf{X} + \lambda\mathbf{I})^{-1}\mathbf{X}'\mathbf{y}\end{equation*}According to Hastie et al., this procedure was originally used to fix 'rank deficiency' problems. 
If the columns or rows of a matrix are not linearly independent, the matrix does not have full rank. For example, one column may be the sum of other columns. In this case the matrix inversion did not work satisfactorily. The solution that was found is that it suffices to add a small positive amount to the diagonal elements of the matrix.This is shown below in a numerical example: - `np.c_` combines the individual variables into one matrix - `np.dot(X.T, X)` is the familiar cross product of the transposed matrix $\mathbf{X'}$ and $\mathbf{X}$ - `np.linalg.matrix_rank` gives us the rank of the matrix - `np.eye(7) * 2` creates a diagonal matrix with 2 on the diagonal and 0 everywhere else
###Code
warnings.filterwarnings("ignore")
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
X_6 = np.c_[X, X**2, X**3, X**4, X**5, X**6]
print(f'With 6 variables (polynomial of 6th degree), the rank of the square matrix\n is '\
+ f'{np.linalg.matrix_rank(np.dot(X_6.T, X_6))}')
X_7 = np.c_[X, X**2, X**3, X**4, X**5, X**6, X**7]
print(f'With 7 variables (polynomial of 7th degree), the rank of the square matrix\n is '\
+ f'{np.linalg.matrix_rank(np.dot(X_7.T, X_7))}')
print(f'By adding a small amount to the diagonal of the matrix, it is of full rank\n again: '\
+ f'{np.linalg.matrix_rank(np.dot(X_7.T, X_7) + np.eye(7) * 2)}')
## you can see how small this amount is, by having a glimpse on the diagonal elements:
print('\nto see how small the added amount in reality is, we display the diagonal elements:')
np.diag(np.dot(X_7.T, X_7))
###Output
With 6 variables (polynomial of 6th degree), the rank of the square matrix
is 6
With 7 variables (polynomial of 7th degree), the rank of the square matrix
is 6
By adding a small amount to the diagonal of the matrix, it is of full rank
again: 7
to see how small the added amount in reality is, we display the diagonal elements:
###Markdown
example of ridge regressionNext, we will apply ridge regression as implemented in the python `sklearn` library and compare the results to the linear algebra solution. Note that we have to center the variables.* we can center $\mathbf{X}$ and $\mathbf{y}$ and display the result in the centered coordinate system* or we can center $\mathbf{X}$ and add the mean of $\mathbf{y}$ to the predicted values to display the result in the original coordinate system. This approach allows for an easy comparison to the overfitted result.The line `Xc = X - np.mean(X, axis=0)` centers the variables to a mean of 0.
###Code
from sklearn.linear_model import Ridge
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
X = np.c_[X, X**2, X**3, X**4, X**5, X**6, X**7]
# here is the necessary standardization:
Xc = X - np.mean(X, axis=0)
# for plotting purpose
x = np.arange(-1, 12, 0.05).reshape((-1, 1))
x = np.c_[x, x**2, x**3, x**4, x**5, x**6, x**7]
xc = x -np.mean(x, axis = 0)
# the result as obtained from the sklearn library
model = Ridge(alpha=2, fit_intercept=False)
model.fit(Xc, y)
print(f'the parameters from the sklearn library:\n'\
+ f'{model.coef_}')
# the analytical result as discussed above
inverse = np.linalg.inv(np.dot(np.transpose(Xc), Xc) + np.eye(Xc.shape[1]) * 2)
Xy = np.dot(np.transpose(Xc),y)
params = np.dot(inverse, Xy)
print(f'the parameters as obtained from the analytical solution:\n'
+ f'{np.transpose(params)}')
params_ridge = params
# here we add the mean of y to the predictions to display results in original coord. system
y_hat = np.dot(xc , params) + np.mean(y)
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='ridge regression for polynomial of degree 7 and $\lambda=2$',
fontdict={'fontsize':15})
axes = f.add_subplot(111)
axes.plot(X[:,0], y, 'ro')
axes.plot( x[:,0], y_hat.reshape((-1,)), 'b-', label='ridge regression')
#axes = plt.gca()
axes.set_ylim([np.min(y)-10, np.max(y) +20])
# now the overfitted solution
from sklearn.linear_model import LinearRegression
modelLR = LinearRegression()
modelLR.fit(X, y)
y_overfitted = np.dot(x , modelLR.coef_.T) + modelLR.intercept_
axes.plot(x[:,0], y_overfitted, 'y--', label='unregularized regression')
leg = axes.legend()
###Output
_____no_output_____
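###Markdown
How should the regularization strength $\lambda$ be chosen? As mentioned above, an optimal value has to be found for each data set, typically by cross-validation. The following cell is only a minimal sketch of this idea: it assumes that `Xc` and `y` from the previous cell are still in memory, and the grid of candidate values (called `alphas`, scikit-learn's name for $\lambda$) is chosen arbitrarily for illustration.
###Code
from sklearn.linear_model import RidgeCV

# hypothetical, log-spaced grid of candidate regularization strengths
alphas = np.logspace(-3, 3, 13)

# RidgeCV evaluates every candidate with (generalized) cross-validation
ridge_cv = RidgeCV(alphas=alphas, fit_intercept=False)
ridge_cv.fit(Xc, np.ravel(y))

print(f'lambda/alpha selected by cross-validation: {ridge_cv.alpha_}')
print(f'coefficients at the selected value:\n{ridge_cv.coef_}')
###Output
_____no_output_____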
###Markdown
LassoAs an alternative to a quadratic penalty $b^2$ one could also use the absolute value $|b|$. This yields the so-called lasso regression; $\lambda\cdot |b|$ is added to the prediction error:$$\sum_i^{n}(y_i - \hat{y_i})^2 + \lambda |b|= \sum_i^{n}[y_i - (a + b\cdot x_i)]^{2}+ \lambda |b|$$For two variables one would accordingly write:$$\sum_i^{n}(y_i - \hat{y_i})^2 + \lambda |b_1| + \lambda |b_2|= \sum_i^{n}[y_i - (a + b_1\cdot x_{i1} + b_2\cdot x_{i2})]^{2}+ \lambda |b_1| + \lambda |b_2|$$ Unfortunately, in contrast to ridge regression there is no closed-form analytical solution for the lasso coefficients. Iterative procedures are used instead, as we will get to know them in Session 2. Comparing the coefficients of lasso regression with those of ridge regressionNext, we will apply lasso regression as implemented in the python sklearn library and compare the results to the unconstrained regression results.As before, we have to center the variables (-> see discussion above)
###Code
import numpy as np
from sklearn.linear_model import Lasso
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
X = np.c_[X, X**2, X**3, X**4, X**5, X**6, X**7]
Xc = X - np.mean(X, axis=0)
# for plotting purpose
x = np.arange(-1, 12, 0.05).reshape((-1, 1))
x = np.c_[x, x**2, x**3, x**4, x**5, x**6, x**7]
xc = x -np.mean(x, axis = 0)
# the result as obtained from the sklearn library
model = Lasso(alpha=2, fit_intercept=False)
model.fit(Xc, y)
params_lasso = model.coef_
# comparison of parameters ridge vs. lasso:
print(f'the parameters of the ridge regression:\n'\
+ f'{np.transpose(params_ridge)}')
print(f'the parameters of the lasso regression:\n'\
+ f'{params_lasso}')
###Output
the parameters of the ridge regression:
[[-1.96523119e-01 -6.47914004e-01 -9.37247118e-01 1.55320112e-01
3.20681203e-02 -6.80277139e-03 3.08899915e-04]]
the parameters of the lasso regression:
[-0.00000000e+00 -1.27169261e+00 2.49755651e-01 7.47152651e-04
-5.77539403e-04 -2.73002774e-05 1.76588437e-06]
###Markdown
Ridge regression tends to shrink all coefficients by a similar amount. The lasso often leads to solutions in which some coefficients have converged to exactly $0$. Looking at the results of the example above, one notices that for the lasso essentially only two coefficients differ from $0$ (those of $X^2$ and $X^3$).The absolute values of all other coefficients are smaller than $0.000747 = 7.47\text{e}-04$.
###Code
y_hat = np.dot(xc, model.coef_.reshape((-1,1))) + np.mean(y)
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='lasso regression for polynomial of degree 7 and $\lambda=2$',
fontdict={'fontsize':15})
axes = f.add_subplot(111)
axes.plot(X[:,0], y, 'ro')
axes.plot( x[:,0], y_hat.reshape((-1,)), 'b-', label='lasso regression')
#axes = plt.gca()
axes.set_ylim([np.min(y)-10, np.max(y) +20])
# now the overfitted solution
from sklearn.linear_model import LinearRegression
modelLR = LinearRegression()
modelLR.fit(X, y)
y_overfitted = np.dot(x , modelLR.coef_.T) + modelLR.intercept_
axes.plot(x[:,0], y_overfitted, 'y--', label='unregularized regression')
leg = axes.legend()
###Output
_____no_output_____
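###Markdown
As noted above, there is no closed-form solution for the lasso; scikit-learn solves it with coordinate descent, whose central building block is the soft-thresholding operator. The cell below is only a minimal sketch of that operator: for a single centered variable the lasso coefficient can be written down exactly as a soft-thresholded least-squares estimate, and it matches the `Lasso` result. The toy data (`x1`, `y1`) and the value of `alpha` are made up for this illustration.
###Code
from sklearn.linear_model import Lasso

def soft_threshold(z, gamma):
    # S(z, gamma) = sign(z) * max(|z| - gamma, 0)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

# toy problem with one centered feature and centered response
rng = np.random.RandomState(0)
x1 = rng.normal(size=200)
x1 = x1 - x1.mean()
y1 = 3.0 * x1 + rng.normal(scale=2.0, size=200)
y1 = y1 - y1.mean()

alpha = 0.5
n = len(y1)
# exact solution of the one-dimensional lasso problem (sklearn's parametrization)
b_manual = soft_threshold(np.dot(x1, y1) / n, alpha) / (np.dot(x1, x1) / n)

model_1d = Lasso(alpha=alpha, fit_intercept=False)
model_1d.fit(x1.reshape(-1, 1), y1)

print(f'soft-thresholding solution: {b_manual:.4f}')
print(f'sklearn Lasso coefficient : {model_1d.coef_[0]:.4f}')
###Output
_____no_output_____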
###Markdown
the difference between ridge and lassoIn the following plot the __true coefficients__ have the values $b_1=1.5,\quad b_2=0.5$. For a grid of arbitrary values of $b_1$ and $b_2$ the __mean squared error__ (MSE) is computed and drawn as a contour plot. As one can see, the error gets smaller the closer the grid coefficients are to the true coefficients.Next, all coefficient combinations of $b_1$ and $b_2$ are drawn whose penalty term ($b_1^2 + b_2^2$ in the case of ridge and $|b_1| + |b_2|$ in the case of lasso) does not exceed the value $1.0$. The solution that comes closest to the __true coefficients__ is marked with a dot in each case.One can see that the admissible ridge solutions lie on a circle, those of the lasso on the straight edges of a diamond. At the point where the lasso solution is closest to the true solution ($b_1=1.5, b_2=0.5$), one parameter ($b_2$) is almost $0$. This shows the tendency of the lasso to shrink some parameters towards $0$. This behaviour can be exploited, for example, for variable selection.
###Code
# generation of random data set:
X1 = np.random.normal(loc = 1.0, scale = 0.8, size = 100)
X2 = np.random.normal(loc = 0.5, scale = 1.2, size = 100)
beta1 = 1.5
beta2 = 0.5
Y = beta1 * X1 + beta2 * X2
X = np.c_[X1, X2]
# test with linear regression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, Y)
model.intercept_ # essentially zero
model.coef_ # essentially 1.5 and 0.5
#print(f'the model parameters from data generation could be recovered: {model.coef_}')
# make regular grid of values for b_1 and b_2
b1 = np.linspace(beta1 - 0.9, beta1 + 0.9, 100)
b2 = np.linspace(beta2 - 0.9, beta2 + 0.9, 100)
bb1, bb2 = np.meshgrid(b1, b2)
# compute MSE-error
Yhat = bb1.reshape(-1, 1) * X1.reshape(1, -1) + bb2.reshape(-1, 1) * X2.reshape(1, -1)
errors = np.square(Yhat - Y.reshape(1, -1))
error = np.sum(errors, axis = 1)/len(Y)
error_to_plot = error.reshape(bb1.shape)
# plot MSE-error contour
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='minimal errors with penalties', fontdict={'fontsize':13})
axes = f.add_subplot(111)
cp = plt.contour(bb1, bb2, error_to_plot)
plt.clabel(cp, inline=1, fontsize=10)
axes.set_xlabel('b1')
axes.set_ylabel('b2')
axes.set_ylim([np.min(b2)-0.5, np.max(b2) + 0.5])
axes.set_xlim([np.min(b1)-0.5, np.max(b1) + 0.5])
# plot optimal solution
axes.scatter(beta1, beta2, s = 20)
axes.annotate('$\hat{b}$', xy=(beta1 , beta2 + 0.1), xycoords='data',
horizontalalignment = 'center', size = 20)
# all ridge solutions with a penalty budget of 1
constraint_error = 1.0
values = np.linspace(0, 1.0, 100)
constraint_l2 = np.sqrt(constraint_error - values**2)
axes.plot(values, constraint_l2, 'y-', label = 'ridge')
axes.plot(-values, constraint_l2, 'y-')
axes.plot(values, -constraint_l2, 'y-')
# all lasso solutions with a penalty budget of 1
constraint_l1 = constraint_error -values
axes.plot(values, constraint_l1, 'r-', label = 'lasso')
axes.plot(-values, constraint_l1, 'r-')
axes.plot(values, -constraint_l1, 'r-')
# best ridge solution with penalty budget of 1
Yhat_ridge = np.concatenate((values, values)).reshape(-1,1) * X1.reshape(1, -1) + \
np.concatenate((constraint_l2, -constraint_l2)).reshape(-1,1) * X2.reshape(1, -1)
errors_ridge = np.square(Yhat_ridge - Y.reshape(1, -1))
error_ridge = np.sum(errors_ridge, axis = 1)/len(Y)
index_ridge = np.where(error_ridge ==np.amin(error_ridge))[0][0]
axes.scatter(np.concatenate((values, values))[index_ridge],
np.concatenate((constraint_l2, -constraint_l2))[index_ridge],
s=20, c='y')
# best lasso solution with penalty budget of 1
Yhat_lasso = np.concatenate((values, values)).reshape(-1,1) * X1.reshape(1, -1) + \
np.concatenate((constraint_l1, -constraint_l1)).reshape(-1,1) * X2.reshape(1, -1)
errors_lasso = np.square(Yhat_lasso - Y.reshape(1, -1))
error_lasso = np.sum(errors_lasso, axis = 1)/len(Y)
index_lasso = np.where(error_lasso ==np.amin(error_lasso))[0][0]
axes.scatter(np.concatenate((values, values))[index_lasso],
np.concatenate((constraint_l1, -constraint_l1))[index_lasso],
s=20, c='r')
legs = axes.legend()
plt.show()
print(f'optimal coefficients of the ridge solution: {np.concatenate((values, values))[index_ridge]}'\
f' and {np.concatenate((constraint_l2, -constraint_l2))[index_ridge]}')
print(f'optimal coefficients of the lasso solution: {np.concatenate((values, values))[index_lasso]}'\
f' and {np.concatenate((constraint_l1, -constraint_l1))[index_lasso]}')
###Output
_____no_output_____
###Markdown
ElasticNetBorrowing from physics, the penalty terms of ridge and lasso are referred to as $\text{L}_2$ and $\text{L}_1$. Strictly speaking, the $\text{L}_2$ norm is the square root of the sum of the squared elements of a vector, and the $\text{L}_1$ norm is the sum of the absolute values of the vector elements.ElasticNet is a linear regression method that includes both the regularization term of the lasso ($\text{L}_1$) and that of ridge ($\text{L}_2$). There is not only a $\lambda$ parameter that determines the amount of regularization, but an additional parameter $\alpha$ that specifies the ratio of $\text{L}_1$ and $\text{L}_2$ regularization.Because ridge regression and the lasso regularize the coefficients very differently, the combination of both methods has become very popular as a compromise. \begin{equation*}\lambda\sum_j (\alpha b_j^2 + (1-\alpha)|b_j|)\end{equation*}The interpretation of the two parameters $\lambda$ and $\alpha$ is as follows: - $\lambda$ determines the overall amount of regularization - $\alpha$ specifies the ratio with which the two penalty terms enter the regularizationWe will use ElasticNet in the exercise notebook on the Boston house-prices. InteractionInteractions are another important concept in linear regression. Here the effect of one variable on the dependent variable $y$ depends on the value of another variable. In the example below we try to model the probability that a person buys a house. The monthly income is of course an important variable, and the higher it is, the more likely it is that this person buys a house. Another important variable is the marital status. Married people with children in the household tend strongly towards buying a house, especially when the monthly income is high. Singles, on the other hand, will rather not tend to buy a house even if they have a high income.We thus see that the variable "monthly income" __interacts__ with the variable "marital status":
###Code
import numpy as np
from statsmodels.graphics.factorplots import interaction_plot
import pandas as pd
income = np.random.randint(0, 2, size = 80) # low vs high
marital = np.random.randint(1, 4, size = 80) # single, married, married & kids
probability = np.random.rand(80) + income * np.random.rand(80) * marital
probability = (probability - np.min(probability))
probability = probability/np.max(probability)
marital = pd.Series(marital)
marital.replace(to_replace = {1:'single', 2:'married', 3:'married w kids'}, inplace =True)
income = pd.Series(income)
income.replace(to_replace = {0:'low', 1:'high'}, inplace = True)
fig = interaction_plot(income, marital, probability,
colors=['mediumorchid', 'cyan', 'fuchsia'], ms=10, xlabel='income',
ylabel='probability of buying a house',
legendtitle='marital status')
###Output
_____no_output_____
###Markdown
The example above involved categorical variables. Examples like this are often found in the context of analysis of variance (ANOVA).Interaction effects also exist for continuous variables, but in that case it is a bit more complicated to visualize them.We will now generate our own data set such that it contains a clear interaction effect. For the effect between 2 continuous variables to be displayed in 2D at all, one of the two variables has to be discretized again, i.e. we have to form categories for it.In the next computational example we then try to recover, with a linear regression analysis, the parameters that were used to generate the data.The data were generated according to the following model:\begin{equation*}y = 2\cdot x + -2\cdot m + -7\cdot (x\cdot m) + \text{np.random.normal(loc = 0, scale = 4, size = n)}\end{equation*}`np.random.normal(loc=0, scale=4, size=n)` is the random error term that we add so that the data do not all lie on a single line. `loc=0` says that the mean of our random error is $0$, `scale=4` that the standard deviation of the values is $4$, and `size=n` specifies the number of random values to generateConsequently we have the coefficients: - $b_x = 2$ - $b_m = -2$ - $b_{x\cdot m} = -7$
###Code
import seaborn as sns
n = 500
x = np.random.uniform(size=n)
m = np.random.normal(loc = 0.5, scale = 1, size = n)
# lin effects + interaction + random error
y = 2*x + -2*m + -7*(x*m) + np.random.normal(loc = 0, scale = 4, size = n)
newM = pd.cut(m, bins=3, labels = ['small', 'average', 'large'])
toy = pd.DataFrame({'x' : x, 'y' : y, 'moderator' : newM})
sns.lmplot(x="x", y="y", hue="moderator", data=toy);
###Output
_____no_output_____
###Markdown
Interaction terms can be formed by multiplying two variables element-wise.By adding further terms the model fit should actually improve - especially when a strong interaction term is present in the data, as the one we built in.Let us compare the coefficients as found by the linear model with those that were used to generate our data set. Not bad, right? The random errors with their large variance of course ensure that they still differ from the 'generating parameters'.
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression()
X = np.c_[x, m]
model.fit(X, y)
y_hat = model.intercept_ + np.dot(X, model.coef_)
print(f'without considering the interaction, the mse is: {np.mean((y-y_hat)**2)}')
X = np.c_[x, m, x * m]
model.fit(X, y)
y_hat = model.intercept_ + np.dot(X, model.coef_)
print(f'considering the interaction, the mse drops to: {np.mean((y-y_hat)**2)}')
print(f'\nthe coefficients are given by {model.coef_}; compare these values\n to the values '\
+ f'we used for generating the data')
###Output
without considering the interaction, the mse is: 20.59039561666012
considering the interaction, the mse drops to: 16.932358091695946
the coefficients are given by [ 1.32323881 -2.15215978 -7.05045543]; compare these values
to the values we used for generating the data
###Markdown
some considerationsThe considerations here illustrate that even with a moderate number of variables there are very many possible interaction terms. For ordinary linear regression the large number of these terms would become fatal, because we could again end up overfitting the data or even having more variables than observations. In this case too, the regularization procedures introduced above (ElasticNet, ridge and lasso) can be used:Assume we have a data set with 70 different variables. Because we know nothing about the relationships of the variables to the dependent variable $y$, nor about the relationships of the variables among each other, we are inclined to create a number of additional 'features' for our model:* we can add 70 quadratic terms ($x_j^2$)* we can include 70 cubic terms ($x_j^3$)* we can also allow for $\binom{70}{2} = 2415$ first-order interactions between the 70 variables* instead, we could also include the interaction terms of the 210 variables (70 variables + 70 quadratic terms + 70 cubic terms): $\binom{210}{2} = 21945$* besides quadratic and cubic terms there are many other linearizing transformations that may lead to better results, such as the log transformation. In the practical example on the Boston house-prices data set we will get to know the `box-cox` transformation.As we have seen, the number of possible variables can grow very quickly if one accounts for all effects that might matter. Sometimes there are even second-order interaction effects, i.e. three variables are involved. If we considered all possible variables that can be formed in this way, this would lead to pronounced overfitting even for large data sets. __This is why regularization techniques such as the ElasticNet and its components, ridge regression and lasso regression, were introduced__. How confident are we about our model predictionsWe will rarely be able to estimate with our model exactly the coefficients that hold in the entire population (all data we could possibly collect). Much more often our sample is not representative of the whole population, or it is simply too small, and random, normally distributed errors in our data influence the estimation of the coefficients - the more so, the more variables we include in our model.How can we now judge the quality of our estimate? At least two different questions are conceivable here:* How certain are we about the estimated coefficients $\mathbf{b}$? This question is particularly important for scientists, since the answer decides whether a hypothesis is kept or has to be rejected.* How certain are we about individual predictions? This matters most in a machine learning setting, since we would like to integrate the trained model into our business processes.With respect to regression, these two questions can also be phrased as follows: * How strongly does the 'mean response', the prediction of our regression function, depend on the sample? If it varies a lot and possibly even includes the value $0$, then these effects (coefficients) cannot be interpreted. 
* How much can observations $y$ vary for a given combination of variable values in $\mathbf{X}$? If this variation is very large, we will also build large errors into our business process. Recap of assumptions underlying regressionThese are linearity (the relationship between a variable and the dependent variable is linear, i.e. the same slope parameter holds over the whole range of the variable), homoscedasticity (the errors of the regression -- the residuals -- are normally distributed with equal variance over the whole range of X) and normality of the residuals for a given value of X.In many cases these assumptions are not met and are known to be violated. * __Linearity__: The regression function is a good approximation of the relationship between $\mathbf{X}$ and $\mathbf{y}$; i.e. if there is a quadratic trend in the data and we did not include quadratic effects in the model, the assumptions are not fulfilled. Linearity states that for the relationship between a variable $x$ and the dependent variable $y$ the same slope coefficient $b_x$ must hold over the entire range of $x$. Otherwise the model has a __bias__, it estimates a coefficient systematically wrongly.* __Homoscedasticity__: The variance of our prediction error (residuals) is identical over the entire range of a variable $x$.* __Normality__: The values of the dependent variable $\mathbf{y}$ are normally distributed for a given value of $\mathbf{x}$: $\mathbf{y}|\mathbf{x} \sim N(\mu, \sigma)$The next figure illustrates the assumptions of linear regression:Image taken from [here](https://janhove.github.io/analysis/2019/04/11/assumptions-relevance)
###Code
Image('../images/homoscedasticity.png')
###Output
_____no_output_____
###Markdown
Now, with respect to our confidence needs:1. __Prediction interval__: This is the interval in which, with (1-$\alpha$)% probability, the observed $y$ values lie around our predicted values $\hat{y}$. This interval is symmetric around the regression function - which of course follows from the assumptions of linear regression. The standard error of the prediction is given by:\begin{equation*}\hat{\sigma}_e = \sqrt{\frac{1}{N-(p+1)}\sum_i^N e_i^2}, \end{equation*}where $p$ is the number of parameters in the model (the additional parameter $+1$ comes from the intercept); the $e_i$ are the prediction errors, the residuals, i.e. the differences between our predicted $\hat{y}_i$ and the observed values $y_i$. The interval is obtained as:\begin{equation*} CI_i = \hat{y}_i \pm t_{1-\alpha/2, N-p} \cdot \hat{\sigma}_e.\end{equation*}Here $t_{1-\alpha/2, N-p}$ is the value of the Student t-distribution for the confidence level $1-\alpha/2$ and $N-p$ degrees of freedom. The value of $\alpha$ specifies how strongly we want the confidence interval to protect us against wrong decisions. If, for example, we want to state with 95% certainty the range in which the observed values lie, then as the lower confidence limit we have to determine the value below which observed values fall with a probability of only 2.5%, and as the upper limit the value below which observed values fall with a probability of 97.5%. We are then wrong in only 5% of all cases, $\alpha = 0.05$, and because the confidence interval is symmetric we need the value $1-\alpha/2$ so that 2.5% is cut off at both ends. 2. __Mean prediction confidence interval__: In a similar way we can determine a confidence interval for our mean prediction $\hat{\bar{y}}$. Remember that the regression function is our prediction and the data should be normally distributed around it. Because our sample is only a snapshot of a part of all values we could collect, the regression function will vary from sample to sample. The confidence interval states the range in which the regression function would most likely lie if we could collect all the data (the entire population). The confidence interval is not equally wide for all values of $x$: where few measurements are available, the exact course can be estimated less precisely than where we have a broader data basis for the estimate. Near the mean of $x$, i.e. near $\bar{x}$, our estimate should always be more precise than near the extreme values. Of course we again assume normally distributed $x$ values. 3. __CI for regression coefficients__: This interval is also difficult to determine. It gives the upper and lower limits for our regression coefficients $\mathbf{b}$. These coefficients are mostly interpreted in science, where one can test whether an effect postulated a priori is actually present or not. If the confidence interval for a coefficient $b$ includes the value zero, it cannot be ruled out that the effect in the sample arises purely by chance. For example, the following question could be investigated in this way: "Does the closure of schools and universities have a significant influence on the reproduction number $R_0$ or not?". 
This is typically not the kind of question data scientists deal with. In the following example we see the typical output of a classical statistical approach. In the middle we see the confidence intervals for the regression coefficients, `const` (intercept) and `x1`, i.e. the coefficient of the variable $x_1$, that is $b_1$. The intercept is not significant because its confidence interval ($\left[-43.351, 26.370\right]$) includes the value $0$. The coefficient $b_1$ for the variable $x_1$, however, is significantly different from $0$; its confidence interval is $\left[2.939, 13.219\right]$.If you are very interested in classical statistical models, I can recommend this [source](http://web.vu.lt/mif/a.buteikis/wp-content/uploads/PE_Book/3-7-UnivarPredict.html) for python.
###Code
import statsmodels.api as sm
# data example
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
# the x (small x) is just for plotting purpose
x = np.arange(1, 12, 0.05).reshape((-1, 1))
x_intercept = np.c_[np.ones(x.shape[0]), x]
X_intercept = np.c_[np.ones(X.shape[0]), X]
ols_result_lin = sm.OLS(y, X_intercept).fit()
y_hat_lin = ols_result_lin.get_prediction(x_intercept)
dt_lin = y_hat_lin.summary_frame()
mean_lin = dt_lin['mean']
meanCIs_lin = dt_lin[['mean_ci_lower', 'mean_ci_upper']]
obsCIs_lin = dt_lin[['obs_ci_lower', 'obs_ci_upper']]
print(ols_result_lin.summary()) # beta-coefficients
### figure for linear plot
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='linear regression', fontdict={'fontsize':20})
axes = f.add_subplot(111)
axes.plot(X_intercept[:,1], y, 'ro')
axes.plot(x_intercept[:, 1], mean_lin.values.reshape((-1,)), color = "red", label = "regression line")
axes.plot(x_intercept[:, 1], obsCIs_lin.iloc[:, 0], color = "darkgreen", linestyle = "--",
label = "Predictions interval (1.)")
axes.plot(x_intercept[:, 1], obsCIs_lin.iloc[:, 1], color = "darkgreen", linestyle = "--")
axes.plot(x_intercept[:, 1], meanCIs_lin.iloc[:, 0], color = "blue", linestyle = "--",
label = "Mean Prediction CI (2.)")
axes.plot(x_intercept[:, 1], meanCIs_lin.iloc[:, 1], color = "blue", linestyle = "--")
axes.legend()
axes.set_ylim([np.min(y)-10, np.max(y) +10])
###Output
_____no_output_____
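###Markdown
To connect the plotted intervals with the formulas above, the short sketch below recomputes the simplified prediction-interval half width from the residuals of the linear fit. It assumes the previous cell has been run, so that `ols_result_lin` and `obsCIs_lin` are available; note that statsmodels additionally accounts for the uncertainty of the regression line itself, so its interval is always a bit wider than the simplified formula.
###Code
from scipy import stats

# residual standard error: sqrt( SSE / (N - (p+1)) ), with p = 1 predictor
resid = ols_result_lin.resid
N = len(resid)
p = 1
sigma_e = np.sqrt(np.sum(resid**2) / (N - (p + 1)))

# two-sided 95% quantile of the t-distribution (alpha = 0.05)
t_crit = stats.t.ppf(1 - 0.05 / 2, df=N - (p + 1))

print(f'residual standard error sigma_e: {sigma_e:.3f}')
print(f'simplified half width t * sigma_e: {t_crit * sigma_e:.3f}')

half_width_sm = (obsCIs_lin['obs_ci_upper'] - obsCIs_lin['obs_ci_lower']) / 2
print(f'statsmodels half widths range from {half_width_sm.min():.3f} to {half_width_sm.max():.3f}')
###Output
_____no_output_____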
###Markdown
Next, we compute the confidence intervals for the regression with a quadratic term.Here it is noticeable that the quadratic term `x2` is now significant (interval $\left[0.247, 3.125\right]$), but the `x1` term no longer is.
###Code
X_intercept_quad = np.c_[X_intercept, X**2]
# for plotting:
x = np.arange(1, 12, 0.05).reshape((-1, 1))
x_intercept_quad = np.c_[np.ones(x.shape[0]), x, x**2]
ols_result_quad = sm.OLS(y, X_intercept_quad).fit()
y_hat_quad = ols_result_quad.get_prediction(x_intercept_quad)
dt_quad = y_hat_quad.summary_frame()
mean_quad = dt_quad['mean']
meanCIs_quad = dt_quad[['mean_ci_lower', 'mean_ci_upper']]
obsCIs_quad = dt_quad[['obs_ci_lower', 'obs_ci_upper']]
print(ols_result_quad.summary())
### figure for linear plot
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='regression with quadratic term', fontdict={'fontsize':20})
axes = f.add_subplot(111)
axes.plot(X_intercept_quad[:,1], y, 'ro')
axes.plot(x_intercept_quad[:, 1], mean_quad.values.reshape((-1,)), color = "red", label = "regression line")
axes.plot(x_intercept_quad[:, 1], obsCIs_quad.iloc[:, 0], color = "darkgreen", linestyle = "--",
label = "Predictions interval (1.)")
axes.plot(x_intercept_quad[:, 1], obsCIs_quad.iloc[:, 1], color = "darkgreen", linestyle = "--")
axes.plot(x_intercept_quad[:, 1], meanCIs_quad.iloc[:, 0], color = "blue", linestyle = "--",
label = "Mean Prediction CI (2.)")
axes.plot(x_intercept_quad[:, 1], meanCIs_quad.iloc[:, 1], color = "blue", linestyle = "--")
axes.legend()
axes.set_ylim([np.min(y)-10, np.max(y) +10])
###Output
_____no_output_____
###Markdown
BootstrapThe data a data scientist normally works with almost never fulfil the assumptions of linear regression. Therefore we cannot apply the theory behind the confidence intervals either - after all, it rests on assumptions such as normally distributed data.A robust, parameter-free alternative is the __bootstrap__. In a sense we pull ourselves out of the swamp by our own bootstraps: - We regard our sample as the entirety (population) of the data. - We then repeatedly draw new samples with replacement from this sample. - For each of these samples the model is fitted and the relevant statistics are stored. - Finally we find in our stored statistics the 2.5% quantile (the value below which only 2.5% of the observations lie) and the 97.5% quantile (the value above which only 2.5% of the observations lie). We report these values as the lower and upper limits of the confidence interval, at a confidence level of $\alpha=0.05$.The following code example illustrates the procedure:* `sampler = (choices(indices, k = len(indices)) for i in range(200))` creates a generator that draws a random sample 200 times.* `np.percentile(np.array([Lasso(alpha=2, fit_intercept=True).fit(X[drew,:], y[drew, :]).predict(x).tolist() for drew in sampler]), [2.5, 97.5], axis = 0)` iterates over the generator, fits the model a total of 200 times and makes a prediction for continuous x-values in the range from 1 to 12. These predictions are stored in a numpy array (`np.array`) and finally the function `np.percentile` is applied to the 200 predictions. This gives us, for the x-range from 1 to 12, the interval limits for the mean prediction, i.e. the regression function
###Code
from random import choices
from sklearn.linear_model import Lasso
import warnings
warnings.filterwarnings('ignore')
y = np.load('/home/martin/python/fhnw_lecture/scripts/regression_y.pickle.npy')
X = np.load('/home/martin/python/fhnw_lecture/scripts/regression_X.pickle.npy')
#X = np.c_[np.ones(X.shape[0]), X, X**2, X**3, X**4]
X = np.c_[X, X**2, X**3, X**4]
x = np.arange(1, 12, 0.05).reshape((-1, 1))
#x = np.c_[np.ones(x.shape[0]), x, x**2, x**3, x**4]
x = np.c_[x, x**2, x**3, x**4]
indices = np.arange(0, X.shape[0])
drew = choices(indices, k=len(indices))
sampler = (choices(indices, k = len(indices)) for i in range(200))
CIS = np.percentile(np.array([Lasso(alpha=2, fit_intercept=True).fit(X[drew,:], y[drew, :])\
.predict(x).tolist()
for drew in sampler]), [2.5, 97.5], axis = 0)
# x is 220 long
model = Lasso(alpha=2, fit_intercept=True)
model.fit(X, y)
y_hat = model.predict(x)
f = plt.figure(figsize=(5, 5), dpi=100)
plt.title(label='lasso regression for polynomial of degree 4 and $\lambda=2$',
fontdict={'fontsize':15})
axes = f.add_subplot(111)
axes.plot(X[:,0], y, 'ro')
axes.plot( x[:,0], y_hat.reshape((-1,)), 'b-', label='lasso regression')
axes.plot(x[:, 0], CIS[0, :], color = "cyan", linestyle = "--",
label = "Mean Prediction CI")
axes.plot(x[:, 0], CIS[1, :], color = "cyan", linestyle = "--")
axes.legend()
###Output
_____no_output_____
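###Markdown
The same resampling idea also answers the third question from above, confidence intervals for the regression coefficients themselves: refit the model on each bootstrap sample, store the coefficients and report their 2.5% and 97.5% percentiles. This is only a small sketch and assumes that `X`, `y`, `indices`, `choices` and `Lasso` from the previous cell are still available.
###Code
n_boot = 200
sampler = (choices(indices, k=len(indices)) for i in range(n_boot))

# one row of coefficients per bootstrap sample
boot_coefs = np.array([Lasso(alpha=2, fit_intercept=True)
                       .fit(X[drew, :], np.ravel(y[drew, :])).coef_
                       for drew in sampler])

ci_coefs = np.percentile(boot_coefs, [2.5, 97.5], axis=0)
for j in range(X.shape[1]):
    print(f'coefficient of x^{j + 1}: [{ci_coefs[0, j]:.4f}, {ci_coefs[1, j]:.4f}]')
###Output
_____no_output_____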
###Markdown
Extension: logistic regression and the GLMThere are other models that are closely related to the linear regression discussed here. The most prominent among them is __logistic regression__. This model belongs to the "__generalized linear model__" (GLM). These models must not be confused with the "__general linear model__"; the latter parametrizes an analysis of variance as a linear model with dummy variables.The generalized linear model extends linear regression to models whose errors are not normally distributed.[This article](https://en.wikipedia.org/wiki/Generalized_linear_model#Confusion_with_general_linear_models) in Wikipedia gives further information. exponential family of distributionsFrom the perspective of modern statistics the generalized linear model comprises several linear models, among them classical linear regression. A distribution that is in the "exponential family" of distributions can always be written as follows:\begin{equation}f(y| \theta) = \exp\left(\frac{y \theta - b(\theta)}{\Phi} + c(y, \Phi)\right),\end{equation}where $\theta$ is called the canonical parameter, which is a function of $\mu$, the mean. This function is called the canonical link function. As we will see later in an example, it is exactly this function that linearizes the relationship between the dependent variable and the independent variables.For completeness: $b(\theta)$ is a function of the canonical parameter and therefore also depends on $\mu$. $\Phi$ is called the dispersion parameter, and $c(y, \Phi)$ is a function that depends on both the observed data and the dispersion parameter. Normal distribution\begin{eqnarray*}f(y| \mu, \sigma) =& (2\pi \sigma^2)^{-\frac{1}{2}} \exp\left(-\frac{1}{2}\frac{y^2 -2y\mu + \mu^2}{\sigma^2}\right) \\ =&\quad \exp \left(\frac{y\mu -\frac{\mu^2}{2}}{\sigma^2} - \frac{1}{2}\left(\frac{y^2}{\sigma^2} + \log(2\pi\sigma^2)\right)\right),\quad \text{where}\end{eqnarray*}$\mu = \theta(\mu)$, i.e. $\mu$ is the canonical parameter and the link function is the identity function. The mean can therefore be modelled directly without any further transformation, just as we do in classical linear regression.The dispersion parameter $\Phi$ is given by $\sigma^2$, the variance. This is the classical linear regression of normally distributed variables Poisson distributionThe Poisson distribution also belongs to the exponential family of distributions:\begin{eqnarray*}f(y| \mu) =& \frac{\mu^{y} e^{-\mu}}{y!} = \mu^y e^{-\mu}\frac{1}{y!}\\=& \quad\exp\left(y \log(\mu) - \mu - \log(y!)\right), \quad\text{where}\end{eqnarray*}the link function here is $\log(\mu)$. Please note that the Poisson distribution has no dispersion parameter. Bernoulli distribution $\Rightarrow$ logistic regressionFinally, the Bernoulli distribution, from which we can derive logistic regression.The Bernoulli distribution is suited to modelling binary events that are mutually exclusive. A classic example is the repeated coin toss. The probability of 'heads' is denoted by $\pi$, that of 'tails' by $(1-\pi)$. 
This lets us compute the probability of obtaining, with a fair coin in 10 tosses, one particular sequence with exactly 7 times 'heads':\begin{equation}\pi^7 (1-\pi)^3 = 0.5^7 0.5^3 = 0.5^{10} = 0.0009765625\end{equation}__Careful__: if we want the probability of all sequences with exactly 7 times heads, we additionally need the binomial coefficient, which gives the number of possible sequences with 7 times 'heads'.Now I show how the Bernoulli distribution can be rewritten so that its membership in the exponential family of distributions becomes apparent:\begin{eqnarray*}f(y |\pi) =& \pi^y (1-\pi)^{1-y} = \exp\left(y \log(\pi) + (1-y) \log(1-\pi)\right)\\= & \quad \exp\left(y \log(\pi) + \log(1-\pi) - y\log(1-\pi)\right)\\=&\quad \exp\left(y\log(\frac{\pi}{1-\pi}) + \log(1-\pi)\right),\quad\text{where}\end{eqnarray*}the link function turns out to be $\log(\frac{\pi}{1-\pi})$. This function is also called the logit function; its inverse is the __logistic function__. It is thus the logit function that is modelled as a linear combination of the independent variables:$\log(\frac{\pi}{1-\pi}) = a + b_{1}x_1 + \ldots + b_jx_j$. If we plug the right-hand side of this equation into the logistic function, we obtain the estimated probabilities:\begin{equation}P(y=1 |x) = \frac{\exp(a + b_{1}x_1 + \ldots + b_jx_j)}{1 + \exp(a + b_{1}x_1 + \ldots + b_jx_j)}.\end{equation} We have thus shown that the classical linear regression model is only a special case of a large number of models whose distributions are all contained in the exponential family. (For a more complete treatment of this topic: https://en.wikipedia.org/wiki/Generalized\_linear\_model.) GLMNETIn the statistical programming language R there is a library called 'glmnet'. This package implements the ElasticNet for the generalized linear model, not only for classical linear regression.https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.htmlThere is also a python package that uses exactly the same Fortran code: __glmnet-python__. There are a few small differences to the version of ElasticNet as implemented in `scikit-learn`: https://pypi.org/project/glmnet-python/ Neural NetworkIt is also possible to look at neural networks from the point of view of linear regression. A network with only an input layer and a single neuron is called a perceptron. The activation function of this neuron is either the identity function, as in classical linear regression, or the logistic function, as in logistic regression. In the latter case the perceptron is meant to estimate probabilities of binary events.
###Code
# Image('../images/Regression_as_NN.png')
Image("../images/NN_class_reg.png",height=520, width=520)
###Output
_____no_output_____
###Markdown
classical linear regressionIn the jargon of the neural network community our $b$-coefficients are called __weights__. The intercept $\alpha$ is called the __bias__.Remember that we absorbed the intercept $\alpha$ into the vector $\pmb{\beta}$ of $b$-coefficients by adding a column of ones to the variable matrix $\mathbf{X}$. We could therefore write:\begin{equation*}\mathbf{y} = \mathbf{X} \pmb{\beta}\end{equation*}In the figure above you can see that in the perceptron the input variables are multiplied by the weights of the connections and that the constant value $\alpha$ is added. As in linear regression, these products are then summed up.In the context of neural networks the vector $\pmb{\beta}$ is referred to as the network weights and is denoted by $\mathbf{W}$. We had learned that vectors are denoted by lowercase letters, but in a real neural network a layer contains many perceptrons side by side, all receiving their input from the layer below; stacking the weight vectors of the individual neurons into a matrix yields $\mathbf{W}$.Neural networks are therefore really just many regressions in parallel and in sequence, which can be computed very efficiently with matrix multiplication.
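Before looking at the next figure, a tiny numpy sketch may make this concrete. It is only an illustration with made-up numbers: a single "neuron" with identity activation computes exactly the linear-regression prediction $\mathbf{X}\mathbf{W} + \text{bias}$, and passing the same quantity through the logistic function turns it into the probability output of a logistic regression.
###Code
# hypothetical toy input: 5 samples, 3 input variables
rng = np.random.RandomState(1)
X_toy = rng.normal(size=(5, 3))

W = np.array([[1.0], [-2.0], [0.5]])   # made-up weights (the b-coefficients), shape (3, 1)
bias = 0.3                             # the intercept

linear_out = np.dot(X_toy, W) + bias                 # identity activation -> linear regression
logistic_out = 1.0 / (1.0 + np.exp(-linear_out))     # logistic activation -> probabilities

print('linear outputs:  ', linear_out.ravel())
print('logistic outputs:', logistic_out.ravel())
###Output
_____no_output_____
###Markdown
The figure below shows the corresponding perceptron for the logistic-regression case, where the only change is the logistic activation applied to the weighted sum.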
###Code
Image("../images/NN_logistic_reg.png", height=520, width=520)
###Output
_____no_output_____ |
courses/machine_learning/deepdive/03_tensorflow/diagrams/coretensorflow.ipynb | ###Markdown
Diagrams for Course 3
###Code
import tensorflow as tf
x = tf.constant(3)
print(x)
import tensorflow as tf
x = tf.constant([3, 5, 7])
print(x)
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
print(x)
import tensorflow as tf
x = tf.constant([[[3, 5, 7],[4, 6, 8]],
[[1, 2, 3],[4, 5, 6]]
])
print(x)
import tensorflow as tf
x1 = tf.constant([2, 3, 4])
x2 = tf.stack([x1, x1])
x3 = tf.stack([x2, x2, x2, x2])
x4 = tf.stack([x3, x3])
print(x1)
print(x2)
print(x3)
print(x4)
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = x[:, 1]
with tf.Session() as sess:
print(y.eval())
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = tf.reshape(x, [3, 2])
with tf.Session() as sess:
print(y.eval())
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = tf.reshape(x, [3, 2])[1, :]
with tf.Session() as sess:
print(y.eval())
import tensorflow as tf
from tensorflow.contrib.eager.python import tfe
tfe.enable_eager_execution()
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = tf.reshape(x, [3, 2])[1, :]
print(y)
import tensorflow as tf
from tensorflow.contrib.eager.python import tfe
tfe.enable_eager_execution()
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
print(x-y)
import tensorflow as tf
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
z = tf.add(x, y)
with tf.Session() as sess:
print(z.eval())
import tensorflow as tf
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
z = tf.add(x, y)
with tf.Session() as sess:
print(sess.run(z))
import tensorflow as tf
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
z1 = x + y
z2 = x * y
z3 = z2 - z1
with tf.Session() as sess:
a1, a3 = sess.run([z1, z3])
print(a1)
print(a3)
import tensorflow as tf
x = tf.constant([3, 5, 7], name="x")
y = tf.constant([1, 2, 3], name="y")
z1 = tf.add(x, y, name="z1")
z2 = x * y
z3 = z2 - z1
with tf.Session() as sess:
with tf.summary.FileWriter('summaries', sess.graph) as writer:
a1, a3 = sess.run([z1, z3])
!ls summaries
from google.datalab.ml import TensorBoard
TensorBoard().start('./summaries')
from google.datalab.ml import TensorBoard
TensorBoard().stop(13045)
print('stopped TensorBoard')
import tensorflow as tf
def forward_pass(w, x):
return tf.matmul(w, x)
def train_loop(x, niter=5):
with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
w = tf.get_variable("weights",
shape=(1,2), # 1 x 2 matrix
initializer=tf.truncated_normal_initializer(),
trainable=True)
preds = []
for k in range(niter):
preds.append(forward_pass(w, x))
w = w + 0.1 # "gradient update"
return preds
with tf.Session() as sess:
preds = train_loop(tf.constant([[3.2, 5.1, 7.2],[4.3, 6.2, 8.3]])) # 2 x 3 matrix
tf.global_variables_initializer().run()
for i in range(len(preds)):
print("{}:{}".format( i, preds[i].eval() ))
###Output
0:[[ 8.568541 12.702375 17.271353]]
1:[[ 9.318541 13.8323765 18.821354 ]]
2:[[10.068541 14.962376 20.371353]]
3:[[10.818541 16.092377 21.921354]]
4:[[11.56854 17.222376 23.471352]]
###Markdown
Diagrams for Course 3
###Code
import tensorflow as tf
x = tf.constant(3)
print x
import tensorflow as tf
x = tf.constant([3, 5, 7])
print x
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
print x
import tensorflow as tf
x = tf.constant([[[3, 5, 7],[4, 6, 8]],
[[1, 2, 3],[4, 5, 6]]
])
print x
import tensorflow as tf
x1 = tf.constant([2, 3, 4])
x2 = tf.stack([x1, x1])
x3 = tf.stack([x2, x2, x2, x2])
x4 = tf.stack([x3, x3])
print x1
print x2
print x3
print x4
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = x[:, 1]
with tf.Session() as sess:
print y.eval()
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = tf.reshape(x, [3, 2])
with tf.Session() as sess:
print y.eval()
import tensorflow as tf
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = tf.reshape(x, [3, 2])[1, :]
with tf.Session() as sess:
print y.eval()
import tensorflow as tf
from tensorflow.contrib.eager.python import tfe
tfe.enable_eager_execution()
x = tf.constant([[3, 5, 7],
[4, 6, 8]])
y = tf.reshape(x, [3, 2])[1, :]
print y
import tensorflow as tf
from tensorflow.contrib.eager.python import tfe
tfe.enable_eager_execution()
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
print (x-y)
import tensorflow as tf
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
z = tf.add(x, y)
with tf.Session() as sess:
print z.eval()
import tensorflow as tf
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
z = tf.add(x, y)
with tf.Session() as sess:
print sess.run(z)
import tensorflow as tf
x = tf.constant([3, 5, 7])
y = tf.constant([1, 2, 3])
z1 = x + y
z2 = x * y
z3 = z2 - z1
with tf.Session() as sess:
a1, a3 = sess.run([z1, z3])
print a1
print a3
import tensorflow as tf
x = tf.constant([3, 5, 7], name="x")
y = tf.constant([1, 2, 3], name="y")
z1 = tf.add(x, y, name="z1")
z2 = x * y
z3 = z2 - z1
with tf.Session() as sess:
with tf.summary.FileWriter('summaries', sess.graph) as writer:
a1, a3 = sess.run([z1, z3])
!ls summaries
from google.datalab.ml import TensorBoard
TensorBoard().start('./summaries')
from google.datalab.ml import TensorBoard
TensorBoard().stop(13045)
print 'stopped TensorBoard'
import tensorflow as tf
def forward_pass(w, x):
return tf.matmul(w, x)
def train_loop(x, niter=5):
with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
w = tf.get_variable("weights",
shape=(1,2), # 1 x 2 matrix
initializer=tf.truncated_normal_initializer(),
trainable=True)
preds = []
for k in xrange(niter):
preds.append(forward_pass(w, x))
w = w + 0.1 # "gradient update"
return preds
with tf.Session() as sess:
preds = train_loop(tf.constant([[3.2, 5.1, 7.2],[4.3, 6.2, 8.3]])) # 2 x 3 matrix
tf.global_variables_initializer().run()
for i in xrange(len(preds)):
print "{}:{}".format( i, preds[i].eval() )
###Output
0:[[-0.53224635 -1.4080029 -2.3759441 ]]
1:[[ 0.21775389 -0.27800274 -0.82594395]]
2:[[0.96775365 0.8519969 0.72405624]]
3:[[1.7177541 1.981997 2.2740564]]
4:[[2.4677541 3.1119976 3.8240576]]
|
example/ex.ipynb | ###Markdown
Test and demonstrate the use of lp_solve; [click here](http://lpsolve.sourceforge.net/5.5/Python.htm) for more details. import library
###Code
from lp_solve import *
###Output
_____no_output_____
###Markdown
Example 1 from the lp_solve distribution
###Code
f = [-1, 2]
A = [[2, 1], [-4, 4]]
b = [5, 5]
e = [-1, -1]
xint = [1, 2]
[v,x,duals] = lp_solve(f,A,b,e,None,None,xint)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
Example 2
###Code
f = [50, 100]
A = [[10, 5],[4, 10],[1, 1.5]]
b = [2500, 2000, 450]
e = [-1, -1, -1]
[v,x,duals] = lp_solve(f,A,b,e)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
Example 3
###Code
f = [-40, -36]
vub = [8, 10]
A = [[5, 3]]
b = [45]
e = [1]
[v,x,duals] = lp_solve(f,A,b,e,None,vub)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
Example 4
###Code
f = [10, 6, 4]
A = [[1, 1, 1], [10, 4, 5], [2, 2, 6]]
b = [100, 600, 300]
e = [-1, -1, -1]
xint = [2]
[v,x,duals] = lp_solve(f,A,b,e,None,None,xint)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
Example 5 Integer programming example, page 218 of Ecker & Kupferschmid
###Code
f = [-3, 7, 12]
a = [[-3, 6, 8], [6, -3, 7], [-6, 3, 3]]
b = [12, 8, 5]
e = [-1, -1, -1]
xint = [1, 2, 3]
[v,x,duals] = lp_solve(f,a,b,e,None,None,xint)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
Example 6 0-1 programming example, page 228 233 of Ecker & Kupferschmid
###Code
f = [-2, -3, -7, -7]
a = [[1, 1, -2, -5], [-1, 2, 1, 4]]
b = [2, -3]
e = [1, 1]
xint = [1, 2, 3, 4]
vub = [1, 1, 1, 1]
[v,x,duals] = lp_solve(f,a,b,e,None,vub,xint)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
Example 7 0-1 programming example, page 238 of Ecker & Kupferschmid
###Code
f = [-1, -2, -3, -7, -8, -8]
a = [[5, -3, 2, -3, -1, 2], [-1, 0, 2, 1, 3, -3], [1, 2, -1, 0, 5, -1]]
b = [-5, -1, 3]
e = [1, 1, 1]
xint = [1, 2, 3, 4, 5, 6]
vub = [1, 1, 1, 1, 1, 1]
[v,x,duals] = lp_solve(f,a,b,e,None,vub,xint)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
ex2.lp from the lp_solve distribution
###Code
f=[8, 15]
a = [[10, 21], [2, 1]]
b = [156, 22]
e = [-1, -1]
[v,x,duals] = lp_solve(f,a,b,e)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
ex3.lp from the lp_solve distribution
###Code
f=[3, 13]
a = [[2, 9], [11, -8]]
b = [40, 82]
e = [-1, -1]
[v,x,duals] = lp_solve(f,a,b,e)
print(v)
print(x)
###Output
_____no_output_____
###Markdown
ex6.lp from the lp_solve distribution
###Code
f=[592, 381, 273, 55, 48, 37, 23]
a = [[3534, 2356, 1767, 589, 528, 451, 304]]
b = [119567]
e = [-1]
xint = [1, 2, 3, 4, 5, 6, 7]
vub = None
[v,x,duals] = lp_solve(f,a,b,e,None,vub,xint)
print(v)
print(x)
###Output
_____no_output_____ |
recursive_filters/introduction.ipynb | ###Markdown
Realization of Recursive Filters*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* IntroductionComputing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. We already have discussed the realization of [non-recursive filters](../nonrecursive_filters/introduction.ipynb). This section focuses on the realization of recursive filters. Recursive FiltersLinear difference equations with constant coefficients represent linear time-invariant (LTI) systems\begin{equation}\sum_{n=0}^{N} a_n \; y[k-n] = \sum_{m=0}^{M} b_m \; x[k-m]\end{equation}where $y[k] = \mathcal{H} \{ x[k] \}$ denotes the response of the system to the input signal $x[k]$, $N$ the order, $a_n$ and $b_m$ constant coefficients, respectively. Above equation can be rearranged with respect to the output signal $y[k]$ by extracting the first element ($n=0$) of the left-hand sum\begin{equation}y[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; x[k-m] - \sum_{n=1}^{N} a_n \; y[k-n] \right)\end{equation}It is evident that the output signal $y[k]$ at time instant $k$ is given as a linear combination of past output samples $y[k-n]$ superimposed by a linear combination of the actual $x[k]$ and past $x[k-m]$ input samples. Hence, the actual output $y[k]$ is composed from the two contributions1. a [non-recursive part](../nonrecursive_filters/introduction.ipynbNon-Recursive-Filters), and2. a recursive part where a linear combination of past output samples is fed back.The impulse response of the system is given as the response of the system to a Dirac impulse at the input $h[k] = \mathcal{H} \{ \delta[k] \}$. Using above result and the properties of the discrete Dirac impulse we get\begin{equation}h[k] = \frac{1}{a_0} \left( b_k - \sum_{n=1}^{N} a_n \; h[k-n] \right)\end{equation}Due to the feedback, the impulse response will in general be of infinite length. The impulse response is termed as [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) and the system as recursive system/filter. Transfer FunctionApplying a $z$-transform to the left- and right-hand side of the difference equation and rearranging terms yields the transfer function $H(z)$ of the system\begin{equation}H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{m=0}^{M} b_m \; z^{-m}}{\sum_{n=0}^{N} a_n \; z^{-n}}\end{equation}The transfer function is given as a [rational function](https://en.wikipedia.org/wiki/Rational_function) in $z$. The polynominals of the numerator and denominator can be expressed alternatively by their roots as\begin{equation}H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}\end{equation}where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$. 
Due to the symmetries of the $z$-transform, the transfer function of a real-valued system $h[k] \in \mathbb{R}$ exhibits complex conjugate symmetry\begin{equation}H(z) = H^*(z^*)\end{equation}Poles and zeros are either real valued or complex conjugate pairs for real-valued systems ($b_m\in\mathbb{R}$, $a_n\in\mathbb{R}$). For the poles of a causal and stable system $H(z)$ the following condition has to hold\begin{equation}\max_{\nu} | z_{\infty\nu} | < 1\end{equation}Hence, all poles have to be located inside the unit circle $|z| = 1$. Amongst others, this implies that $M \leq N$. ExampleThe following example shows the pole/zero diagram, the magnitude and phase response, and impulse response of a recursive filter with so-called [Butterworth](https://en.wikipedia.org/wiki/Butterworth_filter) lowpass characteristic.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
N = 5 # order of recursive filter
L = 128 # number of computed samples
def zplane(z, p, title='Poles and Zeros'):
"Plots zero and pole locations in the complex z-plane"
ax = plt.gca()
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms=10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms=10)
unit_circle = Circle((0, 0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.title(title)
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
# compute coefficients of recursive filter
b, a = sig.butter(N, 0.2, 'low')
# compute transfer function
Om, H = sig.freqz(b, a)
# compute impulse response
k = np.arange(L)
x = np.where(k == 0, 1.0, 0)
h = sig.lfilter(b, a, x)
# plot pole/zero-diagram
plt.figure(figsize=(5, 5))
zplane(np.roots(b), np.roots(a))
# plot magnitude response
plt.figure(figsize=(10, 3))
plt.plot(Om, 20 * np.log10(abs(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.grid()
plt.title('Magnitude response')
# plot phase response
plt.figure(figsize=(10, 3))
plt.plot(Om, np.unwrap(np.angle(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.grid()
plt.title('Phase response')
# plot impulse response (magnitude)
plt.figure(figsize=(10, 3))
plt.stem(20*np.log10(np.abs(np.squeeze(h))), use_line_collection=True)
plt.xlabel(r'$k$')
plt.ylabel(r'$|h[k]|$ in dB')
plt.grid()
plt.title('Impulse response (magnitude)')
###Output
_____no_output_____
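As a complementary check, the impulse response can also be evaluated straight from the recursion $h[k] = \frac{1}{a_0} \big( b_k - \sum_{n=1}^{N} a_n \, h[k-n] \big)$ derived above and compared against `scipy.signal.lfilter`. This is a minimal sketch for illustration; it simply recomputes the same Butterworth coefficients as the cell above.

```python
import numpy as np
import scipy.signal as sig

b, a = sig.butter(5, 0.2, 'low')       # same Butterworth design as above
L = 20                                 # a few samples suffice for the comparison
h = np.zeros(L)
for k in range(L):
    bk = b[k] if k < len(b) else 0.0   # b_k vanishes for k > M
    acc = sum(a[n] * h[k - n] for n in range(1, len(a)) if k - n >= 0)
    h[k] = (bk - acc) / a[0]

h_ref = sig.lfilter(b, a, np.r_[1.0, np.zeros(L - 1)])
print(np.allclose(h, h_ref))           # expected: True
```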
###Markdown
Realization of Recursive Filters*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* IntroductionComputing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. We already have discussed the realization of [non-recursive filters](../nonrecursive_filters/introduction.ipynb). This section focuses on the realization of recursive filters. Recursive FiltersLinear difference equations with constant coefficients represent linear time-invariant (LTI) systems\begin{equation}\sum_{n=0}^{N} a_n \; y[k-n] = \sum_{m=0}^{M} b_m \; x[k-m]\end{equation}where $y[k] = \mathcal{H} \{ x[k] \}$ denotes the response of the system to the input signal $x[k]$, $N$ the order, $a_n$ and $b_m$ constant coefficients, respectively. Above equation can be rearranged with respect to the output signal $y[k]$ by extracting the first element ($n=0$) of the left-hand sum\begin{equation}y[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; x[k-m] - \sum_{n=1}^{N} a_n \; y[k-n] \right)\end{equation}It is evident that the output signal $y[k]$ at time instant $k$ is given as a linear combination of past output samples $y[k-n]$ superimposed by a linear combination of the actual $x[k]$ and past $x[k-m]$ input samples. Hence, the actual output $y[k]$ is composed from the two contributions1. a [non-recursive part](../nonrecursive_filters/introduction.ipynbNon-Recursive-Filters), and2. a recursive part where a linear combination of past output samples is fed back.The impulse response of the system is given as the response of the system to a Dirac impulse at the input $h[k] = \mathcal{H} \{ \delta[k] \}$. Using above result and the properties of the discrete Dirac impulse we get\begin{equation}h[k] = \frac{1}{a_0} \left( b_k - \sum_{n=1}^{N} a_n \; h[k-n] \right)\end{equation}Due to the feedback, the impulse response will in general be of infinite length. The impulse response is termed as [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) and the system as recursive system/filter. Transfer FunctionApplying a $z$-transform to the left- and right-hand side of the difference equation and rearranging terms yields the transfer function $H(z)$ of the system\begin{equation}H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{m=0}^{M} b_m \; z^{-m}}{\sum_{n=0}^{N} a_n \; z^{-n}}\end{equation}The transfer function is given as a [rational function](https://en.wikipedia.org/wiki/Rational_function) in $z$. The polynominals of the numerator and denominator can be expressed alternatively by their roots as\begin{equation}H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}\end{equation}where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$. 
Due to the symmetries of the $z$-transform, the transfer function of a real-valued system $h[k] \in \mathbb{R}$ exhibits complex conjugate symmetry\begin{equation}H(z) = H^*(z^*)\end{equation}Poles and zeros are either real valued or complex conjugate pairs for real-valued systems ($b_m\in\mathbb{R}$, $a_n\in\mathbb{R}$). For the poles of a causal and stable system $H(z)$ the following condition has to hold\begin{equation}\max_{\nu} | z_{\infty\nu} | < 1\end{equation}Hence, all poles have to be located inside the unit circle $|z| = 1$. Amongst others, this implies that $M \leq N$. ExampleThe following example shows the pole/zero diagram, the magnitude and phase response, and impulse response of a recursive filter with so-called [Butterworth](https://en.wikipedia.org/wiki/Butterworth_filter) lowpass characteristic.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
N = 5 # order of recursive filter
L = 128 # number of computed samples
def zplane(z, p, title='Poles and Zeros'):
"Plots zero and pole locations in the complex z-plane"
ax = plt.gca()
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms = 10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms = 10)
unit_circle = Circle((0,0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.title(title)
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
# compute coefficients of recursive filter
b, a = sig.butter(N, 0.2, 'low')
# compute transfer function
Om, H = sig.freqz(b, a)
# compute impulse response
k = np.arange(L)
x = np.where(k==0, 1.0, 0)
h = sig.lfilter(b, a, x)
# plot pole/zero-diagram
plt.figure(figsize=(5, 5))
zplane(np.roots(b), np.roots(a))
# plot magnitude response
plt.figure(figsize=(10, 3))
plt.plot(Om, 20 * np.log10(abs(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.grid()
plt.title('Magnitude response')
# plot phase response
plt.figure(figsize=(10, 3))
plt.plot(Om, np.unwrap(np.angle(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.grid()
plt.title('Phase response')
# plot impulse response (magnitude)
plt.figure(figsize=(10, 3))
plt.stem(20*np.log10(np.abs(np.squeeze(h))))
plt.xlabel(r'$k$')
plt.ylabel(r'$|h[k]|$ in dB')
plt.grid()
plt.title('Impulse response (magnitude)');
###Output
_____no_output_____
###Markdown
Realization of Recursive Filters*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* IntroductionComputing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. We already have discussed the realization of [non-recursive filters](../nonrecursive_filters/introduction.ipynb). This section focuses on the realization of recursive filters. Recursive FiltersLinear difference equations with constant coefficients represent linear-time invariant (LTI) systems\begin{equation}\sum_{n=0}^{N} a_n \; y[k-n] = \sum_{m=0}^{M} b_m \; x[k-m]\end{equation}where $y[k] = \mathcal{H} \{ x[k] \}$ denotes the response of the system to the input signal $x[k]$, $N$ the order, $a_n$ and $b_m$ constant coefficients, respectively. Above equation can be rearranged with respect to the output signal $y[k]$ by extracting the first element ($n=0$) of the left hand sum\begin{equation}y[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; x[k-m] - \sum_{n=1}^{N} a_n \; y[k-n] \right)\end{equation}It is evident that the output signal $y[k]$ at time instant $k$ is given as a linear combination of past output samples $y[k-n]$ superimposed by a linear combination of the actual $x[k]$ and past $x[k-m]$ input samples. Hence, the actual output $y[k]$ is composed from the two contributions1. a [non-recursive part](../nonrecursive_filters/introduction.ipynbNon-Recursive-Filters), and2. a recursive part where a linear combination of past output samples is fed back.The impulse response of the system is given as the response of the system to a Dirac impulse at the input $h[k] = \mathcal{H} \{ \delta[k] \}$. Using above result and the properties of the discrete Dirac impulse we get\begin{equation}h[k] = \frac{1}{a_0} \left( b_k - \sum_{n=1}^{N} a_n \; h[k-n] \right)\end{equation}Due to the feedback, the impulse response will in general be of infinite length. The impulse response is termed as [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) and the system as recursive system/filter. Transfer FunctionApplying a $z$-transform to the left and right hand side of the difference equation and rearranging terms yields the transfer function $H(z)$ of the system\begin{equation}H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{m=0}^{M} b_m \; z^{-m}}{\sum_{n=0}^{N} a_n \; z^{-n}}\end{equation}The transfer function is given as a [rational function](https://en.wikipedia.org/wiki/Rational_function) in $z$. The polynominals of the numerator and denominator can expressed alternatively by their roots as\begin{equation}H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}\end{equation}where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$. 
Due to the symmetries of the $z$-transform, the transfer function of a real-valued system $h[k] \in \mathbb{R}$ exhibits complex conjugate symmetry\begin{equation}H(z) = H^*(z^*)\end{equation}Poles and zeros are either real valued or conjugate complex pairs for real-valued systems ($b_m\in\mathbb{R}$, $a_n\in\mathbb{R}$). For the poles of a causal and stable system $H(z)$ the following condition has to hold\begin{equation}\max_{\nu} | z_{\infty\nu} | < 1\end{equation}Hence all poles have to be located inside the unit circle $|z| = 1$. Amongst others, this implies that $M \leq N$. ExampleThe following example shows the pole/zero diagram, the magnitude and phase response, and impulse response of a recursive filter with so called [Butterworth](https://en.wikipedia.org/wiki/Butterworth_filter) lowpass characteristic.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
N = 5 # order of recursive filter
def zplane(z, p):
fig = plt.figure(figsize=(5,5))
ax = fig.gca()
    # note: axes hold by default in current matplotlib (plt.hold was removed)
unit_circle = Circle((0,0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
plt.title('Poles and Zeros')
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms = 10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms = 10)
# coefficients of recursive filter
b, a = sig.butter(N, 0.2, 'low')
# compute transfer function of filter
Om, H = sig.freqz(b, a)
# compute impulse response
k = np.arange(128)
x = np.where(k==0, 1.0, 0)
h = sig.lfilter(b, a, x)
# plot pole/zero-diagram
zplane(np.roots(b), np.roots(a))
# plot magnitude response
plt.figure(figsize=(10, 3))
plt.plot(Om, 20 * np.log10(abs(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.grid()
plt.title('Magnitude response')
# plot phase response
plt.figure(figsize=(10, 3))
plt.plot(Om, np.unwrap(np.angle(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.grid()
plt.title('Phase')
# plot impulse response
plt.figure(figsize=(10, 3))
plt.stem(20*np.log10(np.abs(np.squeeze(h))))
plt.xlabel(r'$k$')
plt.ylabel(r'$|h[k]|$ in dB')
plt.grid()
plt.title('Impulse response');
###Output
_____no_output_____
###Markdown
Realization of Recursive Filters*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* IntroductionComputing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. We already have discussed the realization of [non-recursive filters](../nonrecursive_filters/introduction.ipynb). This section focuses on the realization of recursive filters. Recursive FiltersLinear difference equations with constant coefficients represent linear time-invariant (LTI) systems\begin{equation}\sum_{n=0}^{N} a_n \; y[k-n] = \sum_{m=0}^{M} b_m \; x[k-m]\end{equation}where $y[k] = \mathcal{H} \{ x[k] \}$ denotes the response of the system to the input signal $x[k]$, $N$ the order, $a_n$ and $b_m$ constant coefficients, respectively. Above equation can be rearranged with respect to the output signal $y[k]$ by extracting the first element ($n=0$) of the left-hand sum\begin{equation}y[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; x[k-m] - \sum_{n=1}^{N} a_n \; y[k-n] \right)\end{equation}It is evident that the output signal $y[k]$ at time instant $k$ is given as a linear combination of past output samples $y[k-n]$ superimposed by a linear combination of the actual $x[k]$ and past $x[k-m]$ input samples. Hence, the actual output $y[k]$ is composed from the two contributions1. a [non-recursive part](../nonrecursive_filters/introduction.ipynbNon-Recursive-Filters), and2. a recursive part where a linear combination of past output samples is fed back.The impulse response of the system is given as the response of the system to a Dirac impulse at the input $h[k] = \mathcal{H} \{ \delta[k] \}$. Using above result and the properties of the discrete Dirac impulse we get\begin{equation}h[k] = \frac{1}{a_0} \left( b_k - \sum_{n=1}^{N} a_n \; h[k-n] \right)\end{equation}Due to the feedback, the impulse response will in general be of infinite length. The impulse response is termed as [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) and the system as recursive system/filter. Transfer FunctionApplying a $z$-transform to the left- and right-hand side of the difference equation and rearranging terms yields the transfer function $H(z)$ of the system\begin{equation}H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{m=0}^{M} b_m \; z^{-m}}{\sum_{n=0}^{N} a_n \; z^{-n}}\end{equation}The transfer function is given as a [rational function](https://en.wikipedia.org/wiki/Rational_function) in $z$. The polynominals of the numerator and denominator can be expressed alternatively by their roots as\begin{equation}H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}\end{equation}where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$. 
Due to the symmetries of the $z$-transform, the transfer function of a real-valued system $h[k] \in \mathbb{R}$ exhibits complex conjugate symmetry\begin{equation}H(z) = H^*(z^*)\end{equation}Poles and zeros are either real valued or complex conjugate pairs for real-valued systems ($b_m\in\mathbb{R}$, $a_n\in\mathbb{R}$). For the poles of a causal and stable system $H(z)$ the following condition has to hold\begin{equation}\max_{\nu} | z_{\infty\nu} | < 1\end{equation}Hence, all poles have to be located inside the unit circle $|z| = 1$. Amongst others, this implies that $M \leq N$. ExampleThe following example shows the pole/zero diagram, the magnitude and phase response, and impulse response of a recursive filter with so-called [Butterworth](https://en.wikipedia.org/wiki/Butterworth_filter) lowpass characteristic.
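Before the full example, the stability condition $\max_{\nu} | z_{\infty\nu} | < 1$ can also be checked numerically. The following is a small sketch for illustration; it recomputes the same Butterworth coefficients that the next cell derives.

```python
import numpy as np
import scipy.signal as sig

b, a = sig.butter(5, 0.2, 'low')
pole_radius = np.max(np.abs(np.roots(a)))
print(pole_radius, pole_radius < 1)   # all poles inside the unit circle -> causal and stable
```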
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
%matplotlib inline
N = 5 # order of recursive filter
L = 128 # number of computed samples
def zplane(z, p, title='Poles and Zeros'):
"Plots zero and pole locations in the complex z-plane"
ax = plt.gca()
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms=10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms=10)
unit_circle = Circle((0, 0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.title(title)
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
# compute coefficients of recursive filter
b, a = sig.butter(N, 0.2, 'low')
# compute transfer function
Om, H = sig.freqz(b, a)
# compute impulse response
k = np.arange(L)
x = np.where(k == 0, 1.0, 0)
h = sig.lfilter(b, a, x)
# plot pole/zero-diagram
plt.figure(figsize=(5, 5))
zplane(np.roots(b), np.roots(a))
# plot magnitude response
plt.figure(figsize=(10, 3))
plt.plot(Om, 20 * np.log10(abs(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.grid()
plt.title('Magnitude response')
# plot phase response
plt.figure(figsize=(10, 3))
plt.plot(Om, np.unwrap(np.angle(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.grid()
plt.title('Phase response')
# plot impulse response (magnitude)
plt.figure(figsize=(10, 3))
plt.stem(20*np.log10(np.abs(np.squeeze(h))), use_line_collection=True)
plt.xlabel(r'$k$')
plt.ylabel(r'$|h[k]|$ in dB')
plt.grid()
plt.title('Impulse response (magnitude)')
###Output
_____no_output_____
###Markdown
Realization of Recursive Filters*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* IntroductionComputing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. We already have discussed the realization of [non-recursive filters](../nonrecursive_filters/introduction.ipynb). This section focuses on the realization of recursive filters. Recursive FiltersLinear difference equations with constant coefficients represent linear time-invariant (LTI) systems\begin{equation}\sum_{n=0}^{N} a_n \; y[k-n] = \sum_{m=0}^{M} b_m \; x[k-m]\end{equation}where $y[k] = \mathcal{H} \{ x[k] \}$ denotes the response of the system to the input signal $x[k]$, $N$ the order, $a_n$ and $b_m$ constant coefficients, respectively. Above equation can be rearranged with respect to the output signal $y[k]$ by extracting the first element ($n=0$) of the left-hand sum\begin{equation}y[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; x[k-m] - \sum_{n=1}^{N} a_n \; y[k-n] \right)\end{equation}It is evident that the output signal $y[k]$ at time instant $k$ is given as a linear combination of past output samples $y[k-n]$ superimposed by a linear combination of the actual $x[k]$ and past $x[k-m]$ input samples. Hence, the actual output $y[k]$ is composed from the two contributions1. a [non-recursive part](../nonrecursive_filters/introduction.ipynbNon-Recursive-Filters), and2. a recursive part where a linear combination of past output samples is fed back.The impulse response of the system is given as the response of the system to a Dirac impulse at the input $h[k] = \mathcal{H} \{ \delta[k] \}$. Using above result and the properties of the discrete Dirac impulse we get\begin{equation}h[k] = \frac{1}{a_0} \left( b_k - \sum_{n=1}^{N} a_n \; h[k-n] \right)\end{equation}Due to the feedback, the impulse response will in general be of infinite length. The impulse response is termed as [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) and the system as recursive system/filter. Transfer FunctionApplying a $z$-transform to the left- and right-hand side of the difference equation and rearranging terms yields the transfer function $H(z)$ of the system\begin{equation}H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{m=0}^{M} b_m \; z^{-m}}{\sum_{n=0}^{N} a_n \; z^{-n}}\end{equation}The transfer function is given as a [rational function](https://en.wikipedia.org/wiki/Rational_function) in $z$. The polynominals of the numerator and denominator can be expressed alternatively by their roots as\begin{equation}H(z) = \frac{b_M}{a_N} \cdot \frac{\prod_{\mu=1}^{P} (z - z_{0\mu})^{m_\mu}}{\prod_{\nu=1}^{Q} (z - z_{\infty\nu})^{n_\nu}}\end{equation}where $z_{0\mu}$ and $z_{\infty\nu}$ denote the $\mu$-th zero and $\nu$-th pole of degree $m_\mu$ and $n_\nu$ of $H(z)$, respectively. The total number of zeros and poles is denoted by $P$ and $Q$. 
Due to the symmetries of the $z$-transform, the transfer function of a real-valued system $h[k] \in \mathbb{R}$ exhibits complex conjugate symmetry\begin{equation}H(z) = H^*(z^*)\end{equation}Poles and zeros are either real valued or complex conjugate pairs for real-valued systems ($b_m\in\mathbb{R}$, $a_n\in\mathbb{R}$). For the poles of a causal and stable system $H(z)$ the following condition has to hold\begin{equation}\max_{\nu} | z_{\infty\nu} | < 1\end{equation}Hence, all poles have to be located inside the unit circle $|z| = 1$. Amongst others, this implies that $M \leq N$. ExampleThe following example shows the pole/zero diagram, the magnitude and phase response, and impulse response of a recursive filter with so-called [Butterworth](https://en.wikipedia.org/wiki/Butterworth_filter) lowpass characteristic.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.markers import MarkerStyle
from matplotlib.patches import Circle
import scipy.signal as sig
N = 5 # order of recursive filter
L = 128 # number of computed samples
def zplane(z, p, title='Poles and Zeros'):
"Plots zero and pole locations in the complex z-plane"
ax = plt.gca()
ax.plot(np.real(z), np.imag(z), 'bo', fillstyle='none', ms = 10)
ax.plot(np.real(p), np.imag(p), 'rx', fillstyle='none', ms = 10)
unit_circle = Circle((0,0), radius=1, fill=False,
color='black', ls='solid', alpha=0.9)
ax.add_patch(unit_circle)
ax.axvline(0, color='0.7')
ax.axhline(0, color='0.7')
plt.title(title)
plt.xlabel(r'Re{$z$}')
plt.ylabel(r'Im{$z$}')
plt.axis('equal')
plt.xlim((-2, 2))
plt.ylim((-2, 2))
plt.grid()
# compute coefficients of recursive filter
b, a = sig.butter(N, 0.2, 'low')
# compute transfer function
Om, H = sig.freqz(b, a)
# compute impulse response
k = np.arange(L)
x = np.where(k==0, 1.0, 0)
h = sig.lfilter(b, a, x)
# plot pole/zero-diagram
plt.figure(figsize=(5, 5))
zplane(np.roots(b), np.roots(a))
# plot magnitude response
plt.figure(figsize=(10, 3))
plt.plot(Om, 20 * np.log10(abs(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.grid()
plt.title('Magnitude response')
# plot phase response
plt.figure(figsize=(10, 3))
plt.plot(Om, np.unwrap(np.angle(H)))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.grid()
plt.title('Phase response')
# plot impulse response (magnitude)
plt.figure(figsize=(10, 3))
plt.stem(20*np.log10(np.abs(np.squeeze(h))))
plt.xlabel(r'$k$')
plt.ylabel(r'$|h[k]|$ in dB')
plt.grid()
plt.title('Impulse response (magnitude)');
###Output
_____no_output_____ |
notebooks/wandb/run-20200211_133637-lr9ii429/code/notebooks/SiameseNetChannelCharting-v1.ipynb | ###Markdown
We observe that a lot of information is contained in the imaginary part of the impulse response. So, with the 16 antennas, we are going to have 32 'channels' for our dataset, and a training batch of shape [batch_size, 32, 100]. Siamese Neural Network
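As a small illustration of that layout (a sketch with assumed array names, not taken from the notebook), the 16 complex-valued impulse responses can be split into real and imaginary parts and stacked into 32 real channels:

```python
import numpy as np

# hypothetical mini-batch of complex channel impulse responses: [batch, antennas, taps]
cir = np.random.randn(4, 16, 100) + 1j * np.random.randn(4, 16, 100)
# stack real and imaginary parts along the channel axis -> [batch, 32, taps]
channels = np.concatenate([cir.real, cir.imag], axis=1)
print(channels.shape)  # (4, 32, 100)
```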
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Setting up the Custom Dataset
###Code
# undersampling
idces = np.random.randint(0, data.shape[0], int(0.3*data.shape[0]))
data_undersampled = data[idces]
data_undersampled.shape
data_undersampled.shape
# train test split
train, test= train_test_split(data_undersampled)
train_dataset = data_preparation.SiameseDataset(train)
scaler = train_dataset.scaler_real, train_dataset.scaler_imag
test_dataset = data_preparation.SiameseDataset(test, scaler)
plt.figure(figsize=(20,20))
for i in range(1, 17):
plt.subplot(4,4,i)
plt.plot(train_dataset[0][0][i-1, :], label='1_sample')
#plt.plot(train_dataset[0][1][i-1, :], label='2_sample')
plt.legend()
train_dataset.nb_channels()
class SimpleNN(nn.Module):
def __init__(self):
super(SimpleNN, self).__init__()
self.conv1 = nn.Conv1d(in_channels=train_dataset.nb_channels(),
out_channels=128,
kernel_size=16)
self.conv2 = nn.Conv1d(in_channels=128,
out_channels=64,
kernel_size=8)
self.conv3 = nn.Conv1d(in_channels=64,
out_channels=16,
kernel_size=4)
f = data_preparation.conv1d_output_size
self.features = f(f(f(train_dataset.nb_samples(),kernel_size=16),
kernel_size=8),
kernel_size=4)
self.lin1 = nn.Linear(in_features= 16 * self.features, out_features=128)
self.lin2 = nn.Linear(in_features=128, out_features=32)
self.lin3 = nn.Linear(in_features=32, out_features=8)
self.lin4 = nn.Linear(in_features=8, out_features=3)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = torch.flatten(x, 1)
x = F.relu(self.lin1(x))
x = F.relu(self.lin2(x))
x = F.relu(self.lin3(x))
out = self.lin4(x)
return out
model = SimpleNN()
wandb.watch(model)
def loss_function(x1, x2, y1, y2):
x_difference = torch.sum(torch.abs(x1 - x2), dim=[1,2])
print(x_difference)
y_difference = torch.sum(torch.abs(y1 - y2), dim=[1])
print(y_difference)
return torch.sum(torch.pow(x_difference - y_difference, 2)/x_difference)
#x1, x2 = train_dataset[0:10][0], train_dataset[0:10][1]
#y1, y2 = model(x1), model(x2)
###Output
_____no_output_____
###Markdown
Training
###Code
a = len(test_dataset)/len(train_dataset)
batch_size = 64
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
criterion = loss_function
optimizer = optim.Adam(model.parameters())
wandb.log({"Batch_size": batch_size})
for e in range(80):
# train
loss = 0
for x1, x2 in train_loader:
optimizer.zero_grad()
y1, y2 = model(x1), model(x2)
batch_loss = criterion(x1, x2, y1 ,y2)
batch_loss.backward()
optimizer.step()
loss+=batch_loss
#validation
model.eval()
val_loss = 0
for x1, x2 in test_loader:
y1, y2 = model(x1), model(x2)
val_loss += criterion(x1, x2, y1 ,y2)
wandb.log({
"Training Loss": loss,
"Validation Loss": a*val_loss,
})
print(f"Epoch {e+1}, Training Loss: {a*loss}, Validation Loss: {val_loss}")
###Output
_____no_output_____
###Markdown
Evaluate results
###Code
example_1= test_dataset[0:1][0]
example_2= test_dataset[3:4][0]
example_1.shape, example_2.shape
example_1_mapping, example_2_mapping = model(example_1), model(example_2)
example_1_mapping, example_2_mapping
loss_function(example_1, example_2, example_1_mapping, example_2_mapping)
###Output
_____no_output_____ |
CVND_Exercises-master/1_1_Image_Representation/6_3. Average Brightness.ipynb | ###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. RGB to HSV conversionBelow, a test image is converted from RGB to HSV colorspace and each component is displayed in an image.
###Code
# Convert and image to HSV colorspace
# Visualize the individual color channels
image_num = 0
test_im = STANDARDIZED_LIST[image_num][0]
test_label = STANDARDIZED_LIST[image_num][1]
# Convert to HSV
hsv = cv2.cvtColor(test_im, cv2.COLOR_RGB2HSV)
# Print image label
print('Label: ' + str(test_label))
# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
# Plot the original image and the three channels
f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('Standardized image')
ax1.imshow(test_im)
ax2.set_title('H channel')
ax2.imshow(h, cmap='gray')
ax3.set_title('S channel')
ax3.imshow(s, cmap='gray')
ax4.set_title('V channel')
ax4.imshow(v, cmap='gray')
###Output
_____no_output_____
###Markdown
--- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
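Looking ahead, once `avg_brightness` in the next cell is implemented, the feature can drive a very simple classifier. The sketch below is only an illustration and is not part of the original exercise; the threshold of 100 is an assumption that would need tuning on the training images.

```python
# Hypothetical day/night classifier built on top of avg_brightness (defined in the next cell).
def estimate_label(rgb_image, threshold=100):
    # 1 = day, 0 = night; the threshold value is an assumption, not from the original notebook
    return 1 if avg_brightness(rgb_image) > threshold else 0
```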
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
    # Calculate the average brightness: divide the V-channel sum by the image area
    # (shape is used so the computation works for any image size)
    avg = sum_brightness / (rgb_image.shape[0] * rgb_image.shape[1])
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
_____no_output_____ |
gallery/general/plot_lineplot_with_legend.ipynb | ###Markdown
Multi-line temperature profile plot^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
###Code
import matplotlib.pyplot as plt
import iris
import iris.plot as iplt
import iris.quickplot as qplt
def main():
fname = iris.sample_data_path("air_temp.pp")
# Load exactly one cube from the given file.
temperature = iris.load_cube(fname)
# We only want a small number of latitudes, so filter some out
# using "extract".
temperature = temperature.extract(
iris.Constraint(latitude=lambda cell: 68 <= cell < 78)
)
for cube in temperature.slices("longitude"):
# Create a string label to identify this cube (i.e. latitude: value).
cube_label = "latitude: %s" % cube.coord("latitude").points[0]
# Plot the cube, and associate it with a label.
qplt.plot(cube, label=cube_label)
# Add the legend with 2 columns.
plt.legend(ncol=2)
# Put a grid on the plot.
plt.grid(True)
# Tell matplotlib not to extend the plot axes range to nicely
# rounded numbers.
plt.axis("tight")
# Finally, show it.
iplt.show()
if __name__ == "__main__":
main()
###Output
_____no_output_____ |
Model Selection _ Boosting/Regression/Adjusted R2.ipynb | ###Markdown
Adjusted R2 scikit-learn's `metrics` module has no built-in Adjusted R2 function, so it is computed by hand from `metrics.r2_score` as 1 - (1 - R2) * (n - 1) / (n - p - 1), where n is the number of samples and p the number of predictors:
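For reference, a self-contained sketch (synthetic data, not from the original notebook) that computes both the plain and the adjusted R2 this way:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn import metrics

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(100, 3))   # n = 100 samples, p = 3 features
y_demo = X_demo @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=100)

y_fit = LinearRegression().fit(X_demo, y_demo).predict(X_demo)
r2 = metrics.r2_score(y_demo, y_fit)
n, p = X_demo.shape
adj_r2_demo = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(r2, adj_r2_demo)
```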
###Code
from sklearn import metrics
# n = number of observations, p = number of predictors
adj_r2 = 1 - (1 - metrics.r2_score(y_test, y_pred)) * (len(y) - 1) / (len(y) - X.shape[1] - 1)
###Output
_____no_output_____ |
wrangling/.ipynb_checkpoints/Fifa19_notebook-checkpoint.ipynb | ###Markdown
Getting Club players
###Code
def get_club_players(df_teams):
df_club_list = df_teams[['Name','Position','OvervalueRatio','Overall','Potential','Wage']]
sort_club_list = df_club_list.sort_values(by='OvervalueRatio', ascending=False)
df_top_2_rated_players = sort_club_list.head(2)
df_bottom_2_rated_players = sort_club_list.tail(2)
return df_club_list, df_top_2_rated_players, df_bottom_2_rated_players
df_attributes = data[['FieldPositionNum', 'Overall', 'Potential', 'Crossing', 'Finishing',
'HeadingAccuracy', 'ShortPassing', 'Volleys', 'Dribbling', 'Curve',
'FKAccuracy', 'LongPassing', 'BallControl', 'Acceleration', 'SprintSpeed',
'Agility', 'Reactions', 'Balance', 'ShotPower', 'Jumping', 'Stamina',
'Strength', 'LongShots', 'Aggression', 'Interceptions', 'Positioning',
'Vision', 'Penalties', 'Composure', 'Marking']]
###Output
_____no_output_____
###Markdown
Recommender System
###Code
# Create function that uses above 'suggested' variables to output x players to potentially obtain by trade
# for df_attributes dataframe, see above cells
def get_suggested_trades(df_teams): # player argument changed to 'club', after get_club_players refactored
trades_p1 = [] # this will be the output object that club_suggested_changes receives/uses
trades_p2 = []
players_wages = []
all_players, top_2, bottom_2 = get_club_players(df_teams) # df_club_list, df_top_2_players, df_bottom_2_players
# looping throught 2 player names in 'top_2'
for idx, player in enumerate(top_2.Name):
# getting 'index' for player in 'df_teams' DF
input_player_index = df_teams[df_teams['Name']==player].index.values[0]
# getting the 'Overall', 'Potential', and 'Field Position Num'
p_overall = df_teams.iloc[input_player_index]['Overall']
p_potential = df_teams.iloc[input_player_index]['Potential']
p_position = df_teams.iloc[input_player_index]['FieldPositionNum']
# getting 'Wage' for player in 'df_teams' DF
# to be used later for 'Post-trade Leftover Wage' in returned DF
input_player_wage = df_teams.iloc[input_player_index]['Wage']
players_wages.append(input_player_wage)
# getting 'row' for same player in 'df_attributes' using index (No 'Name' col in 'df_attributes')
player_attributes = df_attributes.iloc[input_player_index]
# filtering attributes logic:
filtered_attributes = df_attributes[(df_attributes['Overall'] > p_overall-10)
& (df_attributes['Potential'] > p_potential-10)
& (df_attributes['FieldPositionNum'] == p_position)]
# use filter logic to suggest replacement players - top 5 suggested
# gives DF of with all indexes and correlation ratio
suggested_players = filtered_attributes.corrwith(player_attributes, axis=1)
# Top 2 suggested players (most positively correlated)
suggested_players = suggested_players.sort_values(ascending=False).head(6)
cols = ['Name', 'Position', 'OvervalueRatio', 'Overall', 'Potential', 'Wage']
for i, corr in enumerate(suggested_players):
if idx == 0:
# player 1 - suggested trades
trades_p1.append(data[data.index==suggested_players.index[i]][cols].values)
else:
# player 2 - suggested trades
trades_p2.append(data[data.index==suggested_players.index[i]][cols].values)
cols1 = ['Name', 'Position', 'OvervalueRatio', 'Overall', 'Potential', 'Wage']
# suggested trades DF for player 1 - dropping 1st row (most positively correlated = same as player 1)
trades_p1_df = pd.DataFrame(np.row_stack(trades_p1), columns=cols1)
trades_p1_df = trades_p1_df.drop(trades_p1_df.index[0]).reset_index(drop=True)
# suggested trades DF for player 2 - dropping 1st row (most positively correlated = same as player 2)
trades_p2_df = pd.DataFrame(np.row_stack(trades_p2), columns=cols1)
trades_p2_df = trades_p2_df.drop(trades_p2_df.index[0]).reset_index(drop=True)
#adding 'Post-trade Leftover Wage' column to each returned DF
trades_p1_df['Post-tradeLeftoverWage'] = players_wages[0] - trades_p1_df['Wage']
trades_p2_df['Post-tradeLeftoverWage'] = players_wages[1] - trades_p2_df['Wage']
return top_2, bottom_2, trades_p1_df, trades_p2_df
# See comment line inside of function just below
def get_replacement_players(df_teams):
'''Gets 2 lowest-rated players, and suggests four possible replacements.'''
replacements_p1 = [] # this will be the output object that club_suggested_changes receives/uses
replacements_p2 = []
players_wages = []
all_players, top_2, bottom_2 = get_club_players(df_teams) # df_club_list, df_top_2_players, df_bottom_2_players
# looping throught 2 player names in 'top_2'
for idx, player in enumerate(bottom_2.Name):
# getting 'index' for player in 'df_teams' DF
input_player_index = df_teams[df_teams['Name']==player].index.values[0]
# getting the 'Overall', 'Potential', and 'Field Position Num'
p_overall = df_teams.iloc[input_player_index]['Overall']
p_potential = df_teams.iloc[input_player_index]['Potential']
p_position = df_teams.iloc[input_player_index]['FieldPositionNum']
# getting 'Wage' for player in 'df_teams' DF
# to be used later for 'Post-trade Leftover Wage' in returned DF
input_player_wage = df_teams.iloc[input_player_index]['Wage']
players_wages.append(input_player_wage)
# getting 'row' for same player in 'df_attributes' using index (No 'Name' col in 'df_attributes')
player_attributes = df_attributes.iloc[input_player_index]
# filtering weak attributes logic:
filtered_weak_attributes = df_attributes[(df_attributes['Overall'] < 90)
& (df_attributes['Potential'] > p_potential)
& (df_attributes['Potential'] < 89)
& (df_attributes['FieldPositionNum'] == p_position)]
suggested_players = filtered_weak_attributes.corrwith(player_attributes, axis=1)
suggested_players = suggested_players.sort_values(ascending=False).head(3)
cols = ['Name', 'Position', 'OvervalueRatio', 'Overall', 'Potential', 'Wage']
for i, corr in enumerate(suggested_players):
if idx == 0:
# player 1 - suggested replacements
replacements_p1.append(data[data.index==suggested_players.index[i]][cols].values)
else:
# player 2 - suggested replacements
replacements_p2.append(data[data.index==suggested_players.index[i]][cols].values)
cols1 = ['Name', 'Position', 'OvervalueRatio', 'OverallRating', 'PotentialRating', 'Wage']
# suggested replacements DF for player 1 - dropping 1st row (most positively correlated = same as player 1)
replacements_p1_df = pd.DataFrame(np.row_stack(replacements_p1), columns=cols1)
replacements_p1_df = replacements_p1_df.drop(replacements_p1_df.index[0]).reset_index(drop=True)
# suggested replacements DF for player 2 - dropping 1st row (most positively correlated = same as player 2)
replacements_p2_df = pd.DataFrame(np.row_stack(replacements_p2), columns=cols1)
replacements_p2_df = replacements_p2_df.drop(replacements_p2_df.index[0]).reset_index(drop=True)
#adding 'Post-trade Leftover Wage' column to each returned DF
replacements_p1_df['Post-tradeLeftoverWage'] = players_wages[0] - replacements_p1_df['Wage']
replacements_p2_df['Post-tradeLeftoverWage'] = players_wages[1] - replacements_p2_df['Wage']
#print(replacements_p1_df, '\n')
#print(replacements_p2_df)
return replacements_p1_df, replacements_p2_df
# All tables
allplayers, top2overvalued, bottom2weak = get_club_players(df_teams)
top_2, bottom_2, trades_p1_df, trades_p2_df = get_suggested_trades(df_teams)
replacements_p1_df, replacements_p2_df = get_replacement_players(df_teams)
# turning all tables into JSON
top_2 = top_2.to_json(orient='records')
bottom_2 = bottom_2.to_json(orient='records')
trades_p1_df = trades_p1_df.to_json(orient='records')
trades_p2_df = trades_p2_df.to_json(orient='records')
replacements_p1_df = replacements_p1_df.to_json(orient='records')
replacements_p2_df = replacements_p2_df.to_json(orient='records')
def all_dfs_json():
json_dict = dict({'top2overvalued': top_2,
'suggestedtrades' : [trades_p1_df, trades_p2_df],
'bottom2weak': bottom_2,
'suggestedreplacements': [replacements_p1_df, replacements_p2_df]})
return json.dumps(json_dict)
###Output
_____no_output_____ |
old/python_ver0.1/demo.ipynb | ###Markdown
Demo of NMF-SO and NMF-ARD-SO [1] Motoki Shiga, Kazuyoshi Tatsumi, Shunsuke Muto, Koji Tsuda, Yuta Yamamoto, Toshiyuki Mori, Takayoshi Tanji, "Sparse Modeling of EELS and EDX Spectral Imaging Data by Nonnegative Matrix Factorization", Ultramicroscopy, Vol.170, p.43-59, 2016.
###Code
%matplotlib inline
import numpy as np
import scipy.io as sio
from libnmf import NMF, NMF_SO, NMF_ARD_SO
###Output
_____no_output_____
###Markdown
Generate a synthetic dataset with noise
###Code
#load theoretical data of Mn3O4 without noise
mat_dict = sio.loadmat('mn3o4_f2.mat')
ximage = mat_dict['datar']
# focusing channel
n_ch = np.arange(37-1,116);
ximage = ximage[:,:,n_ch];
# # of pixels along x and y axis, # of EELS channels
xdim,ydim,Nch = ximage.shape
# generating pahtom data by adding gaussian noise
X = np.reshape(ximage, (xdim*ydim, Nch))
scale_spect = np.max(X)
s2_noise = 0.1 #noise variance
X = X + np.random.randn(xdim*ydim, Nch) * s2_noise * scale_spect;
X = (X + np.abs(X))/2;
scale_X = np.mean(X)
X = X / scale_X
###Output
_____no_output_____
###Markdown
NMF-SO
###Code
# define and training model
nmf_so = NMF_SO(n_components=2, wo=0.05, reps=3, max_itr=100)
nmf_so.fit(X, num_xy=(xdim,ydim), channel_vals=n_ch)
nmf_so.imshow_component(figsize=(6, 3)) # for 2D spactrum (Spectrum Imaging) dataset
# nmf.plot_component() # for 1D spactrum dataset
nmf_so.plot_spectra(figsize=(6,3)) # plot decomposed spectra
nmf_so.plot_object_fun(figsize=(6,3)) # plot learnig curve (object function)
###Output
_____no_output_____
###Markdown
NMF-ARD-SO
###Code
# define and training model
nmf_ard = NMF_ARD_SO(n_components=9, wo=0.05, reps=3, max_itr=100)
nmf_ard.fit(X, num_xy=(xdim,ydim), channel_vals=n_ch)
nmf_ard.plot_ard() # plot learning curve with component intensities
nmf_ard.imshow_component(figsize=(6, 3)) # for 2D spactrum (Spectrum Imaging) dataset
nmf_ard.plot_spectra(figsize=(6,3)) # plot decomposed spectra
nmf_ard.plot_object_fun(figsize=(6,3)) # #plot learnig curve (object function)
###Output
_____no_output_____ |
section4/4-2.ipynb | ###Markdown
4-2. Discrete data and distributed representations Shakespeare dataset> This section and the next use the same data as the [tensorflow tutorial](https://www.tensorflow.org/tutorials/text/text_generation?hl=ja), which explains something similar to the discussion below, but the tutorial trains without splitting the text into individual lines of dialogue as we do here. That makes the coding (GPU handling and so on) easier than in this notebook, but here we adopt this data format because priority is given to the probabilistic explanation. The Shakespeare data can be downloaded with tensorflow:
###Code
''' This code is derived from https://www.tensorflow.org/tutorials/text/text_generation
which is licensed under Apache 2.0 License. '''
path = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
###Output
_____no_output_____
###Markdown
A helper function `get_corpus` has been defined in advance. Using it, the data can be decomposed as below into `corpus`, a list of "character name + line of dialogue" entries from the plays:
###Code
corpus = get_corpus(path)
N = len(corpus)
print("台詞数 N=%d\n"%N)
for _ in range(5):
i = np.random.randint(len(corpus))
print(corpus[i])
###Output
台詞数 N=3461
Boatswain:
What, must our mouths be cold?
**************************************
Nurse:
She's dead, deceased, she's dead; alack the day!
************************
BRUTUS:
He's poor in no one fault, but stored with all.
************************
KING EDWARD IV:
Huntsman, what say'st thou? wilt thou go along?
****************
Clown:
Fear not thou, man, thou shalt lose nothing here.
***********************
###Markdown
Each element has the structure- `corpus = [line 1, line 2, ... ]`- `line = "xxxxx***"` (* is a token marking the end, inserted so that every line is padded to 80 characters; this could also be handled with tensorflow functions, but here it was written in plain python). `N` is the total number of lines of dialogue contained in this data, and this is what we take as the dataset. Counting the characters that appear in it, without duplicates, gives
###Code
''' This code is derived from https://www.tensorflow.org/tutorials/text/text_generation
which is licensed under Apache 2.0 License. '''
chars = sorted(set(str("".join(corpus))))
print(chars); print(len(chars), "種類")
###Output
['\n', ' ', '!', '&', "'", '*', ',', '-', '.', '3', ':', ';', '?', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
65 種類
###Markdown
so the text is built from 65 distinct characters in total: 13 special characters (such as !, ?, ;, the half-width space, and the newline \n) plus 26 letters $\times$ (upper case, lower case) = 52 alphabetic characters. Let us call this set$$\mathcal{S}_{chars} = \{\text{'$\backslash$n', ' ', '!', ..., 'A', 'B', 'C', ..., 'a', 'b', 'c', ...} \}$$Each data point was already displayed above, but showing one with the newline codes and the like made explicit:
###Code
i = np.random.randint(len(corpus))
print(list(corpus[i]))
###Output
['F', 'i', 'r', 's', 't', ' ', 'W', 'a', 't', 'c', 'h', 'm', 'a', 'n', ':', '\n', "'", 'T', 'i', 's', ' ', 't', 'h', 'e', ' ', 'L', 'o', 'r', 'd', ' ', 'H', 'a', 's', 't', 'i', 'n', 'g', 's', ',', ' ', 't', 'h', 'e', ' ', 'k', 'i', 'n', 'g', "'", 's', ' ', 'c', 'h', 'i', 'e', 'f', 'e', 's', 't', ' ', 'f', 'r', 'i', 'e', 'n', 'd', '.', '\n', '*', '*', '*', '*', '*', '*', '*', '*', '*', '*', '*', '*']
###Markdown
so each data point is a list of single characters. Representing discrete data Representation by indices (one-hot representation)As for how characters are handled on a computer, a standard such as [ASCII](https://ja.wikipedia.org/wiki/ASCII), for example, assigns an integer value to each character:
###Code
ord('A'), ord('B'), ord('C'), ord('D')
###Output
_____no_output_____
###Markdown
Here we take a strategy similar to ASCII and simply label the elements of $\mathcal{S}_{chars}$ in order from the beginning as 0,1,2,...,64:
###Code
''' This code is derived from https://www.tensorflow.org/tutorials/text/text_generation
which is licensed under Apache 2.0 License. '''
char2idx = {u:i for i, u in enumerate(chars)}
char2idx['A'], char2idx['B'], char2idx['C'], char2idx['D']
###Output
_____no_output_____
###Markdown
We also prepare the inverse of this mapping, since it will be used later. It is enough to implement $\mathcal{S}_{chars}$ as an array indexed by the integer:
###Code
''' This code is derived from https://www.tensorflow.org/tutorials/text/text_generation
which is licensed under Apache 2.0 License. '''
idx2char = np.array(chars)
idx2char[13], idx2char[14], idx2char[15], idx2char[16]
###Output
_____no_output_____
###Markdown
$$\mathcal{S}_{chars} = \{\underbrace{\text{'$\backslash$n'}}_0, \underbrace{\text{' '}}_1, \underbrace{\text{'!'}}_2, \dots, \underbrace{\text{'A'}}_{13}, \underbrace{\text{'B'}}_{14}, \underbrace{\text{'C'}}_{15}, \dots, \underbrace{\text{'a'}}_{39}, \underbrace{\text{'b'}}_{40}, \underbrace{\text{'c'}}_{41}, \dots \}$$From here on the elements of $\mathcal{S}_{chars}$ are treated as integers and denoted $n$. Each line of dialogue can then be represented as a **collection of numbers $n$**:$${\bf n} = [n_0, n_1, \dots, n_{T-1}]$$
###Code
i = np.random.randint(len(corpus))
print(corpus[i])
print([char2idx[c] for c in corpus[i]])
###Output
BRAKENBURY:
Why looks your grace so heavily today?
*****************************
[14, 30, 13, 23, 17, 26, 14, 33, 30, 37, 10, 0, 35, 46, 63, 1, 50, 53, 53, 49, 57, 1, 63, 53, 59, 56, 1, 45, 56, 39, 41, 43, 1, 57, 53, 1, 46, 43, 39, 60, 47, 50, 63, 1, 58, 53, 42, 39, 63, 12, 0, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
###Markdown
Let us also build `corpus_num`, the numerical version of the corpus:
###Code
corpus_num = np.array([[char2idx[c] for c in n] for n in corpus])
print(corpus[0], '\n', corpus_num[0])
###Output
First Citizen:
Before we proceed any further, hear me speak.
*******************
[18 47 56 57 58 1 15 47 58 47 64 43 52 10 0 14 43 44 53 56 43 1 61 43
1 54 56 53 41 43 43 42 1 39 52 63 1 44 59 56 58 46 43 56 6 1 46 43
39 56 1 51 43 1 57 54 43 39 49 8 0 5 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5]
###Markdown
Incidentally, there is an equivalent representation called the **one-hot representation**. It refers to a vector representation such as$$n \leftrightarrow \vec{n} = [0, \dots, 0, \overbrace{1}^{n-th}, 0, \dots, 0]$$in which only the $n$-th component is 1. Binary representationIf an integer is to be represented as a vector, the one-hot representation is not the only option. For example, the decimal number $n$ can be written in binary and split digit by digit. (The decimal digits could also be split directly into a vector, but that does not seem to be used much.) The binary representation will be used again later and will be explained in detail there. Distributed representationA distributed representation vectorizes in a more geometric way. In deep learning the distributed representation is usually itself trained: the training target is the embedding$$n_t \to {\bf e}[n_t]$$from a discrete value into a vector of some dimension. With the one-hot representation it is just a linear map, ${\bf e}[n] = W \vec{n}= (W_{i n})_{i=1, 2, \dots}$, and there is a dedicated command for it:
###Code
emb = tf.keras.layers.Embedding(len(chars), 2)
###Output
_____no_output_____
###Markdown
This gives a randomly initialized 2-dimensional distributed representation:
###Code
plt.figure(figsize=(10,10))
for nt in range(len(chars)):
xn = emb(nt).numpy()
plt.scatter(xn[0], xn[1])
plt.annotate(r'${\bf e}[%d]$=(%s)'%(nt,idx2char[nt]), (xn[0], xn[1]), size=12)
plt.show()
###Output
_____no_output_____
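As a quick check of the statement above that the embedding is just a linear map applied to the one-hot vector, ${\bf e}[n] = W \vec{n}$, the following small sketch (not in the original notebook) compares the embedding lookup with the explicit matrix product:

```python
n = char2idx['A']
vec_n = tf.one_hot(n, depth=len(chars)).numpy()   # one-hot vector for 'A'
W = emb.get_weights()[0]                          # embedding matrix W, shape (65, 2)
print(np.allclose(emb(n).numpy(), vec_n @ W))     # True: e[n] equals W applied to the one-hot vector
```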
###Markdown
ここでは説明のために2次元に取りましたが、もっと高次元に埋め込む場合が多いようです。以下でニューラルネットワークを構成する際、一層目にこの分散表現を使います。 無限の猿定理[無限の猿定理](https://ja.wikipedia.org/wiki/無限の猿定理) という話があります。これは **猿が(ランダムに)タイプすることでシェイクスピアの作品を生成する** という「ゼロではない可能性」が無限回試行すれば起こり得る、という類の無限に関する話です。今回は規模を縮小して- 猿がシェイクスピアのような台詞 ${\bf n}$ を書く確率を考えてみます。これまでの理論部分で、データ生成確率 $p$ とモデル $q$ を考えていましたが、ここでも同じことで ${\bf n}=[n_0, n_1, \dots, n_{T-1}]$ を生成する確率をそれぞれ- シェイクスピア:$p({\bf n})$- 猿:$q_{monkey}({\bf n})$としましょう。更に、猿はとりあえず文脈を把握しておらず、各時刻でどの文字をタイプするかは独立だとします:時刻 $t$ で1文字 $n_t$ をタイプする確率 $q_{monkey}(n_t)$ があるとして$$q_{monkey}({\bf n}) = \prod_{t=0}^{T-1} q_{monkey}(n_t)$$ということにします(猿には少々失礼な仮定かもしれませんが)。例えば1文字打つ確率がすべての文字を当確率(つまりランダム)だとすると$$q_{monkey}(n_t) = \frac{1}{|\mathcal{S}_{chars}|} = \frac{1}{65}$$です。すると、$q_{monkey}({\bf n}) = \Big( \frac{1}{65}\Big)^{T}$です。確かに厳密に言えばゼロではないですが、これはほぼ無いと言えるでしょう。一応この **猿=完全ランダムモデル** で生成してみると:
###Code
T = 100
n = np.random.choice(chars, T)
print("".join([nt for nt in n]))
###Output
l
pq
lYsLn:RmvlxPC!TFW,E;XI!'
faPkOFk YUjHquVPNg.Elvwwlorq&VDvkZJN&q;lq'WVeiC&*XdBdREd&IOL.YkBvMuWS
###Markdown
and we see that this is still quite far from writing a Shakespeare play. Imitating it with a neural networkHere we regard `corpus` (or `corpus_num`) as $N$ independent samples from $p({\bf n})$, the probability that Shakespeare writes a single line ${\bf n}$:$$\text{corpus}=\mathcal{D}_N = \{ {\bf n}_1, {\bf n}_2, ..., {\bf n}_N\} \sim p({\bf n})^N$$To design a model $q({\bf n})$ that imitates $p({\bf n})$, it helps to first think about what structure $p$ should have. Since the order of the characters in a line$${\bf n} = [n_0, n_1, \dots, n_{T-1}]$$carries meaning, simply modelling the probability $q(n_t)$ of one character appearing cannot capture the ordering, so that will not do. As a first step, assume that generating a line has the **Markov property**, i.e.$$q_{Markov}({\bf n}) = q(n_0)\prod_{t=0}^{T-2} q_{Markov}(n_{t+1}\mid n_t) $$For the initial probability $q(n_0)$, note for example that a line starts with a role name and therefore with a capital letter, so let us draw it at random from the 26 upper-case letters. The important factor is $q_{Markov}(n_t\mid n_{t-1})$. It is a conditional probability, which is exactly the form into which **a neural network** can be introduced. So we set$$q_{Markov}(n_{t+1}\mid n_t) = q_{\theta}(n_{t+1}\mid n_t) $$and train it with machine learning. Here we try a network built on a distributed representation with the following structure:$$\left\{ \begin{array}{ll}{\bf e}_t={\color{red}{l_{emb}}}(n_t) & \color{red}{\text{Embedding}}\\{\bf h}_t = \tanh({\color{red}{l_h}}({\bf e}_t))= \tanh({\color{red}{W_h}}{\bf e}_t + {\color{red}{b_h}}) & \color{red}{\text{Dense}}\\{\bf z}_t = {\color{red}{l_z}}({\bf h}_t)={\color{red}{W_z}} {\bf h}_t + {\color{red}{b_z}} & \color{red}{\text{Dense}}\\q_{\color{red}{\theta}}(n\mid n_t) = [{\bf \color{blue}{\sigma}}({\bf z}_t)]_{n\text{-th component}} & \color{blue}{\text{Softmax}}\end{array} \right.$$Let us try it:
###Code
class Markov(Generator): # model definition
def __init__(self, emb_dim, hidden_dim):
super(Markov, self).__init__()
self.l_emb= tf.keras.layers.Embedding(len(chars), emb_dim)
self.l_h = tf.keras.layers.Dense(units=hidden_dim)
self.l_z = tf.keras.layers.Dense(units=len(chars))
def call(self, nt):
e = self.l_emb(nt)
h = tf.keras.activations.tanh(self.l_h(e))
z = self.l_z(h)
return z
###Output
_____no_output_____
###Markdown
With this, characters can be generated one after another by treating a sample from the model's own softmax output (a probability distribution) as the next character. Creating the model object with a random initialization and letting it generate a string looks like this:
###Code
model= Markov(1,1)
model.sample_from("S", num_string=100) # generation stops once * is produced.
###Output
SOHc*
###Markdown
Now let us think about training this model. Generalization error, empirical errorAs before, the generalization error is taken to be $D_{KL}(p\|q)$, and the empirical error is$$L(\theta; \mathcal{D}_N) = \frac{1}{N} \sum_{i=1}^N (- \log q_\theta({\bf n}_i))$$so it suffices to look at the negative log-likelihood of a single dialogue sample ${\bf n}_i=[n_0, n_1, \dots, n_{T-1}]$; its average over the $N$ samples is the empirical error. The per-sample negative log-likelihood $L(\theta; \{{\bf n}\})$ is$$ - \log q_\theta({\bf n}) = - \log q(n_0) \prod_{t=0}^{T-2}q_\theta(n_{t+1} \mid n_t) = {\color{blue}{L(\theta; \{{\bf n}\})}}$$Since $q(x_0)$ will later be fixed to a given distribution and thus does not depend on the trainable parameters $\theta$, lowering $L(\theta; \{{\bf n}\})$ amounts to raising the probability that a sample is generated with its characters in the same order as in the data ${\bf n}$. Taking $-\log$ first turns the product into a sum, and recalling that $q_\theta$ ends in a softmax,$$ {\color{blue}{L(\theta;\{ {\bf n} \})}} = - \sum_{t=0}^{T-2} \log q_\theta(n_{t+1} \mid n_t)= - \sum_{t=0}^{T-2} \Big({\bf \log} {\bf \sigma}_{softmax}({\bf z}_t )\Big)_{{n_t}\text{-th comp}} =\sum_{t=0}^{T-2} L_{softmax}\big(n_{t+1}, {\bf \sigma}_{softmax}({\bf z}_t ) \big) $$so the loss is the softmax cross-entropy between the next character and its prediction, summed over time. We can therefore regard the string ${\bf x}$ with only $n_{T-1}$ removed as the input signal and the string ${\bf y}$ with only $n_0$ removed as the teacher signal, and write everything as if it were supervised learning: Implementing the loss from the data${\bf x}$: input data (the string without $n_{T-1}$)|${\bf y}$: teacher data (the string without $n_{0}$):---:|:---:$[n_0, n_1, \dots, n_{T-2}]$|$[n_1, n_2, \dots, n_{T-1}]$Having seen that supervised learning is possible with this view, instead of thinking of the data as$$\mathcal{D}_N = \Big\{ {\bf n}^{(1)} , \quad {\bf n}^{(2)} , \quad\dots \Big\}$$let us regard it as the equivalent$$\mathcal{D}_N = \Big\{ (\bf x^{(1)}, {\bf y}^{(1)}), \quad (\bf x^{(2)}, {\bf y}^{(2)}), \quad \dots \Big\}$$This can be built with the following commands.
###Code
''' This code is derived from functions prepareing `dataset` object in
https://www.tensorflow.org/tutorials/text/text_generation which is licensed under Apache 2.0 License. '''
D = tf.data.Dataset.from_tensor_slices(corpus_num)
f = lambda n: (n[:-1], n[1:])
D = D.map(f)
for (x, y) in D.take(1):
print("x=", x)
print("y=", y)
###Output
x= tf.Tensor(
[18 47 56 57 58 1 15 47 58 47 64 43 52 10 0 14 43 44 53 56 43 1 61 43
1 54 56 53 41 43 43 42 1 39 52 63 1 44 59 56 58 46 43 56 6 1 46 43
39 56 1 51 43 1 57 54 43 39 49 8 0 5 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5], shape=(79,), dtype=int64)
y= tf.Tensor(
[47 56 57 58 1 15 47 58 47 64 43 52 10 0 14 43 44 53 56 43 1 61 43 1
54 56 53 41 43 43 42 1 39 52 63 1 44 59 56 58 46 43 56 6 1 46 43 39
56 1 51 43 1 57 54 43 39 49 8 0 5 5 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5], shape=(79,), dtype=int64)
###Markdown
As explained above, the loss is the sum over $t$ of the per-step errors$$L({\bf y}, model({\bf x})) = \sum_{t=0}^{T-2} L_{softmax}(n_{t+1}, \underbrace{model(n_t)}_{{\bf \sigma}({\bf z}_t)})$$but with the present data format
###Code
model = Markov(emb_dim=256, hidden_dim=1024)
for (x, y) in D.take(1):
    print(tf.keras.losses.sparse_categorical_crossentropy(y, model(x), from_logits=True)) # z=model(x) are the logits
###Output
tf.Tensor(
[4.194728 4.1714563 4.168303 4.1747737 4.1414847 4.1660385 4.1127143
4.1524386 4.1641912 4.14454 4.1425385 4.1772213 4.172122 4.209669
4.2049527 4.160085 4.2067733 4.1902823 4.2153435 4.19617 4.136768
4.1741676 4.1675963 4.136768 4.18212 4.148957 4.144289 4.1870117
4.1680865 4.211338 4.1836004 4.149681 4.1699243 4.2101874 4.1624427
4.1753488 4.168456 4.1514997 4.1580896 4.1771946 4.147566 4.154824
4.12127 4.1632433 4.1716886 4.191299 4.154824 4.21504 4.113487
4.135355 4.202867 4.216572 4.136768 4.173488 4.1893177 4.1805544
4.21504 4.2035832 4.1819816 4.1828103 4.1720543 4.1366353 4.1366353
4.1366353 4.1366353 4.1366353 4.1366353 4.1366353 4.1366353 4.1366353
4.1366353 4.1366353 4.1366353 4.1366353 4.1366353 4.1366353 4.1366353
4.1366353 4.1366353], shape=(79,), dtype=float32)
###Markdown
the values of $L_{softmax}$ at each $t$ come out stacked as a vector. To sum them up, `tf.reduce_mean` is convenient. (It takes the mean rather than the sum, but that amounts to the same thing.):
###Code
def loss_sum(y, z):
    return tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(y, z, from_logits=True)) # z = model(x) are the logits
model = Markov(emb_dim=256, hidden_dim=1024)
for (x, y) in D.take(1):
print(loss_sum(y, model(x)))
###Output
tf.Tensor(4.1836224, shape=(), dtype=float32)
###Markdown
Training. As before, let us train with SGD. Given a mini-batch `(X, Y)`, we define the model's training step as
###Code
@tf.function
def update(X, Y, model, optimizer): # one training step on a mini-batch
with tf.GradientTape() as tape:
Z = model(X)
loss_value = loss_sum(Y, Z)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss_value
###Output
_____no_output_____
###Markdown
and then update the parameters with SGD just as before:
###Code
%%time
model = Markov(emb_dim=256, hidden_dim=1024)
optimizer=tf.keras.optimizers.Adam()
batch_size = 32
epoch_size = 15
loss_averages = []
for epoch in range(epoch_size):
batch = D.shuffle(5000).batch(batch_size, drop_remainder=True)
loss_values = []
for (X,Y) in batch:
loss_value = update(X, Y, model, optimizer)
loss_values.append(loss_value)
loss_averages.append(np.average(loss_values))
###Output
CPU times: user 8.62 s, sys: 1.82 s, total: 10.4 s
Wall time: 5.78 s
###Markdown
Let us try generating lines of dialogue with the trained model. Here, specifying the first character plays the role of choosing $q(n_0)$:
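For reference, here is a minimal sketch of what such an autoregressive sampling loop typically looks like (illustration only: `sample_from` used below is the model's own method, and the `char2idx`/`idx2char` lookup tables and the categorical-sampling details here are assumptions, not the notebook's implementation):

```python
import tensorflow as tf

def sample_chars(model, start_char, num_chars, char2idx, idx2char):
    # hypothetical helper: repeatedly feed the last character back in and sample the next one
    current = tf.constant([char2idx[start_char]], dtype=tf.int64)
    out = [start_char]
    for _ in range(num_chars):
        logits = model(current)                                   # shape (1, vocab_size)
        nxt = tf.random.categorical(logits, num_samples=1)[0, 0]  # sample the next index
        out.append(idx2char[int(nxt)])
        current = tf.reshape(nxt, [1])
    return "".join(out)
```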
###Code
for _ in range(5):
model.sample_from("A", num_string=100)
###Output
AS:
Whe bat me.
Whr, yeawayo ds t'dareloree t nothet mes t d.
What t JUMARONurcotho lliear:
I r stowi
ADY:
*
Ayod, w, w mith!
Se,-is bery dsthisepl----
NGowsesangr?
*
AMERUMathel m ll; d aknIOLUMI rr t ncy om!
*
APONGEE izepru b?
*
|
transfer_learning/(model)evaluation.ipynb | ###Markdown
1. Model performance evaluation
###Code
import glob
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import load_img, img_to_array, array_to_img
from tensorflow.keras.models import load_model
import model_evaluation_utils as meu
img_aug_cnn = load_model('cats_dogs_cnn_img_aug.h5')
tl_img_aug_finetune_cnn = load_model('10-06-me.h5')
## Basic image dimensions
IMG_DIM = (150, 150)
input_shape = (150, 150, 3)
num2class_label_transformer = lambda l: ['cat' if x == 0 else 'dog' for x in l]
class2num_label_transformer = lambda l: [0 if x == 'cat' else 1 for x in l]
###Output
_____no_output_____
###Markdown
Model prediction on a sample test image
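Before looking at a single image, here is a minimal sketch of how a whole folder of test images could be scored with the same preprocessing (the `test_images/` directory name is a hypothetical placeholder; everything else reuses `glob`, `load_img`, `img_to_array` and the two models loaded above):

```python
test_files = glob.glob('test_images/*.jpg')   # hypothetical folder of test images
test_imgs = np.array([img_to_array(load_img(f, target_size=IMG_DIM)) for f in test_files])
test_imgs = test_imgs / 255.                  # same scaling as the single-image example below

# class probabilities from both models for every image in the folder
probs_basic = img_aug_cnn.predict(test_imgs, verbose=0)
probs_tl    = tl_img_aug_finetune_cnn.predict(test_imgs, verbose=0)
```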
###Code
#sample_img_path = 'my_cat.jpg'
#sample_img_path = "C:\\Users\\user\\Documents\\한국선급\\CNG_P1\\2_104449_11_11.png"
#sample_img_path = 'dog_my.jpg'
sample_img_path = 'tiger.jpg'
sample_img = load_img(sample_img_path, target_size=IMG_DIM)
sample_img_tensor = img_to_array(sample_img)
sample_img_tensor = np.expand_dims(sample_img_tensor, axis=0)
sample_img_tensor /= 255.
print(sample_img_tensor.shape)
plt.imshow(sample_img_tensor[0])
cnn_img_aug_prediction = num2class_label_transformer(img_aug_cnn.predict_classes(sample_img_tensor, verbose=0))
tlearn_cnn_finetune_img_aug_prediction = num2class_label_transformer(tl_img_aug_finetune_cnn.predict_classes(sample_img_tensor, verbose=0))
print('Predictions for our sample image:\n',
'\nCNN with Img Augmentation:', cnn_img_aug_prediction,
'\nPre-trained CNN with Fine-tuning & Img Augmentation (Transfer Learning):', tlearn_cnn_finetune_img_aug_prediction)
img_aug_cnn.predict_proba(sample_img_tensor, verbose=0)
tl_img_aug_finetune_cnn.predict_proba(sample_img_tensor, verbose=0)
###Output
_____no_output_____ |
scripts/nrw.ipynb | ###Markdown
Covid-19 Sim provides two populations:
- "current": the population is based on the current population (2019)
- "household": the population is based on a subsample from 2010, but with household numbers and additional persons per household
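Once the cell below has run, the returned per-person arrays can be inspected directly; a minimal sketch (assuming, as used later in this notebook, that `persons` holds each individual's household size and `agegroup` their age group):

```python
# distribution of household sizes and age groups in the synthetic population
print(pd.Series(persons).value_counts().sort_index())    # persons per household
print(pd.Series(agegroup).value_counts().sort_index())   # population by age group
```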
###Code
age, agegroup, gender, contacts, drate, hnr, persons = cl.makepop("household",17900000)
# Read in the observed (realized) data
nrw = pd.read_excel("./nrw_dat.xlsx")
nrw["Datum"] = nrw["Datum"].dt.date
nrw["Meldedatum"] = nrw["Meldedatum"].dt.date
###Output
_____no_output_____
###Markdown
Scenarios with community attack. Base scenario NRW with community attack: the total reproduction is made up of infections within households plus the contact of all household members with the outside world. Here, r_change is in each case the contact level with the outside world. The contacts are proportional to the contact rates of the age groups, without restriction.
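A minimal sketch for turning such date-keyed dictionaries into a daily time series, which makes the step-wise external contact levels easy to plot (illustration only; it assumes the population mean is a reasonable summary of each entry, and the end date is arbitrary):

```python
# expand a date-keyed dict of contact vectors into a daily series of mean contact levels
def contact_timeline(r_change, end="2020-09-30"):
    s = pd.Series({pd.Timestamp(k): np.mean(v) for k, v in r_change.items()})
    return s.sort_index().reindex(pd.date_range(s.index.min(), end, freq="D")).ffill()
```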
###Code
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 3 * contacts/np.mean(contacts)
contacts_new = np.where(age < 20, contacts, contacts)
r_change['2020-03-08'] = 1.0 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.3 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.2 * contacts/np.mean(contacts)
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.5
com_attack_rate["2020-05-4"] = 0.5
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=hnr, com_attack_rate=com_attack_rate,
r_change=r_change, simname="NRW Basis",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
persgroup = np.where(persons>5,">5",persons)
cl.groupresults({"Personen":persgroup},state)
###Output
_____no_output_____
###Markdown
The share of infected persons rises markedly with household size. Qualitatively, this picture should be fairly realistic. There are clear indications that the community attack rate is 50% or higher.
###Code
aux = cl.groupresults({"Alter":agegroup},state)
# Compute the CFR from the IFR via the underreporting factor
aux["CFR"] = aux.IFR / args["alpha"]
display(aux)
## Opening of daycare centers (Kitas)
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 3 * contacts/np.mean(contacts)
contacts_new = np.where(age < 20, contacts, contacts)
r_change['2020-03-08'] = 1.0 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.3 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.2 * contacts/np.mean(contacts)
r_change['2020-04-20'] = 0.5 * contacts/np.mean(contacts)
r_change['2020-05-04'] = np.where(age<6,5*r_change['2020-04-20'],r_change['2020-04-20'])
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.5
com_attack_rate["2020-05-4"] = 0.5
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=hnr, com_attack_rate=com_attack_rate,
r_change=r_change, simname="NRW Basis",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
###Output
_____no_output_____
###Markdown
The share of infected persons is higher here in the young age groups. For the reported cases one could assume that the probability of a case being "reported" is likewise proportional to the statistical probability of death. The CFR here is simply computed as the underreporting factor times the IFR. The age trend looks plausible. Easing in all areas except for those over 60
###Code
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 3 * contacts/np.mean(contacts)
contacts_new = np.where(age < 20, contacts, contacts)
r_change['2020-03-08'] = 1.0 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.3 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.2 * contacts/np.mean(contacts)
r_change['2020-05-04'] = 0.2 * np.where(age < 60, 3*contacts, contacts)/np.mean(contacts)
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.5
com_attack_rate["2020-05-4"] = 0.5
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=hnr, com_attack_rate=com_attack_rate,
r_change=r_change, simname="Lockerung U60",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
###Output
_____no_output_____
###Markdown
The effective R does rise slightly above 1, but the ICU load keeps falling, since the measures for the older population remain in effect. It could, however, be that this assumption is too optimistic.
###Code
# Easing in all areas
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 3 * contacts/np.mean(contacts)
contacts_new = np.where(age < 20, contacts, contacts)
r_change['2020-03-08'] = 1.0 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.3 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.2 * contacts/np.mean(contacts)
r_change['2020-05-04'] = 0.6 * contacts/np.mean(contacts)
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.5
com_attack_rate["2020-05-4"] = 0.5
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=hnr, com_attack_rate=com_attack_rate,
r_change=r_change, simname="Lockerung Alle",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
###Output
_____no_output_____
###Markdown
The effective R is likewise slightly above 1, but the number of ICU cases rises steadily again. Strong easing, but with the community attack rate reduced through early testing and quarantine
###Code
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 3 * contacts/np.mean(contacts)
contacts_new = np.where(age < 20, contacts, contacts)
r_change['2020-03-08'] = 1.0 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.3 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.2 * contacts/np.mean(contacts)
r_change['2020-05-04'] = 1.0 * contacts/np.mean(contacts)
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.5
com_attack_rate["2020-05-4"] = 0.25
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=hnr, com_attack_rate=com_attack_rate,
r_change=r_change, simname="Lockerung Alle kleinere CAR",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
###Output
_____no_output_____
###Markdown
Testing the community attack rate
###Code
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 3 * contacts/np.mean(contacts)
contacts_new = np.where(age < 20, contacts, contacts)
r_change['2020-03-08'] = 1.0 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.3 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.2 * contacts/np.mean(contacts)
r_change['2020-05-04'] = 0.2 * contacts/np.mean(contacts)
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.5
com_attack_rate["2020-05-4"] = 0.1
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=hnr, com_attack_rate=com_attack_rate,
r_change=r_change, simname="Test Community Attack",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
###Output
_____no_output_____
###Markdown
Scenarios without community attack. Base scenario NRW
###Code
age, agegroup, gender, contacts, drate, hnr, persons = cl.makepop("current",17900000)
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 4.0 * contacts/np.mean(contacts)
r_change['2020-03-08'] = 0.24*8 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.15*8 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.10*6 * contacts/np.mean(contacts)
r_change['2020-05-04'] = 0.6 * contacts/np.mean(contacts)
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.0
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=None, com_attack_rate=com_attack_rate,
r_change=r_change, simname="Basis NRW ohne Community Attack",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
import pkg_resources
pkg_resources.get_distribution("covid19sim").version
cl.plotoverview(gr, args)
gr["Wochentag"] = [x.weekday() for x in gr.Datum]
gr["WE"] = np.where(gr.Wochentag > 4, "WE", "WT")
fig = make_subplots(rows=2, cols=2)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["Erwartete Neu-Meldefälle"],
mode="lines", name="Erwartete Neu-Meldefälle"),
row=1, col=1)
fig.add_trace(go.Scatter(x=gr[gr.WE == "WE"]["Datum"],
y=gr[gr.WE == "WE"]["RKI Neu-Meldefälle"],
name="RKI Neu-Meldefälle (WE)",
mode="markers"), row=1, col=1)
fig.add_trace(go.Scatter(x=gr[gr.WE == "WT"]["Datum"],
y=gr[gr.WE == "WT"]["RKI Neu-Meldefälle"],
name="RKI Neu-Meldefälle (WT)",
mode="markers"), row=1, col=1)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["Erwartete Gesamt-Meldefälle"],
name="Erwartete Gesamt-Meldefälle",
mode="lines"), row=2, col=1)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["RKI Gesamt-Meldefälle"],
name="RKI Gesamt-Meldefälle",
mode="lines"), row=2, col=1)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["Erwartete Tote"],
name="Erwartete Tote",
mode="lines"), row=1, col=2)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["IST Tote gesamt"],
name="Ist Tote gesamt",
mode="lines"), row=1, col=2)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["ICU"],
name="Erwartete Intensiv",
mode="lines"), row=2, col=2)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["Ist Intensiv"],
name="IST Intensiv",
mode="lines"), row=2, col=2)
fig.update_layout(legend_orientation="h", title=args["simname"])
plot(fig, filename=os.path.join(args["datadir"], args["simname"] +
"_overview.html"),
auto_open=False, auto_play=False)
fig.show()
fig = make_subplots(rows=1, cols=1)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["Reproduktionszahl"],
name="R effektiv",
mode="lines"), row=1, col=1)
fig.add_trace(go.Scatter(x=gr["Datum"], y=gr["R extern"],
name="R extern",
mode="lines"), row=1, col=1)
plot(fig, filename=os.path.join(args["datadir"], args["simname"] +
"_reproduction.html"),
auto_open=False, auto_play=False)
plot(fig)
day0date = datetime.date(2020, 3, 8)
r_change = {}
r_change['2020-01-01'] = 4.0 * contacts/np.mean(contacts)
r_change['2020-03-08'] = 0.24*8 * contacts/np.mean(contacts)
r_change['2020-03-16'] = 0.15*8 * contacts/np.mean(contacts)
r_change['2020-03-23'] = 0.9 * contacts/np.mean(contacts)
r_change['2020-05-04'] = 0.9 * contacts/np.mean(contacts)
com_attack_rate = {}
com_attack_rate["2020-01-1"] = 0.0
state, statesum, infections, day0, rnow, args, gr = cl.sim(
age, drate, nday=180, prob_icu=0.009, day0cumrep=450,
mean_days_to_icu=16, mean_duration_icu=14,
mean_time_to_death=21,
mean_serial=7.5, std_serial=3.0, immunt0=0.0, ifr=0.003,
long_term_death=False, hnr=None, com_attack_rate=com_attack_rate,
r_change=r_change, simname="Basis NRW ohne Community Attack",
datadir=".",
realized=nrw, rep_delay=13, alpha=0.125, day0date=day0date)
cl.plotoverview(gr, args)
###Output
_____no_output_____ |
Drawing/OldDrawing/Shapes-Copy5.ipynb | ###Markdown
Draw Shapes. An attempt at programmatically drawing shapes. All units are in mm: ```1``` = ```1 mm```.
###Code
s = GCode.Shapes.Square()
s
p = GCode.Program()
p
p.generate_gcode()
p
cnc = GRBL.GRBL(port="/dev/cnc_3018")
cnc.reset()
cnc.status
cnc.home()
cnc.status
o0 = 10
def origin_calc(lines):
    # Place each new square diagonally offset from the machine origin by the fixed
    # margin o0 plus the summed side lengths of the squares already created.
    if len(lines) == 0:
        o = o0
    else:
        o = np.sum(list(map(lambda s: s.len_side, lines))) + o0
    return np.array([o, o])
sides = [0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 10, 20, 25.4, 25.4 / 4]
np.sum(sides) + o0
lines = list()
for side in sides:
_ = GCode.Shapes.Square(len_side=side, origin=origin_calc(lines))
lines.append(_)
lines
prog = GCode.Program()
prog.generate_gcode()
prog.buffer
cnc.run(prog)
for line in lines:
break
line.__repr__()
line.buffer
cnc.status
cnc.run(line)
###Output
_____no_output_____ |
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Start_Here.ipynb | ###Markdown
Welcome to the AI for Science Bootcamp. The objective of this bootcamp is to give an introduction to the application of Artificial Intelligence (AI) algorithms in science (High Performance Computing (HPC) simulations). It will introduce participants to the fundamentals of AI and to how these can be applied to different HPC simulation domains. The following contents will be covered during the bootcamp:
- [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)
- [Tropical Cyclone Intensity Estimation using Deep Convolution Neural Networks](Tropical_Cyclone_Intensity_Estimation/The_Problem_Statement.ipynb)

Quick GPU Check: before moving forward, let us check whether the TensorFlow backend is able to see and use the GPU.
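The cell below uses `tf.test.gpu_device_name()`. On TensorFlow 2.x the following check works as well (shown here only as an alternative):

```python
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # a non-empty list means a GPU is visible
```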
###Code
# Import Necessary Libraries
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
tf.test.gpu_device_name()
###Output
_____no_output_____ |
Programming Assignments/Course 5: Sequence Models/Emojify_v2a.ipynb | ###Markdown
Emojify! Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier. Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing:>"Congratulations on the promotion! Let's get coffee and talk. Love you!" The emojifier can automatically turn this into:>"Congratulations on the promotion! 👍 Let's get coffee and talk. ☕️ Love you! ❤️"* You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). Using word vectors to improve emoji lookups* In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. * In other words, you'll have to remember to type "heart" to find the desired emoji, and typing "love" won't bring up that symbol.* We can make a more flexible emoji interface by using word vectors!* When using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate additional words in the test set to the same emoji. * This works even if those additional words don't even appear in the training set. * This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set. What you'll build1. In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings.2. Then you will build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM. Updates If you were working on the notebook before this update...* The current notebook is version "2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* sentence_to_avg * Updated instructions. * Use separate variables to store the total and the average (instead of just `avg`). * Additional hint about how to initialize the shape of `avg` vector.* sentences_to_indices * Updated preceding text and instructions, added additional hints.* pretrained_embedding_layer * Additional instructions to explain how to implement each step.* Emoify_V2 * Modifies instructions to specify which parameters are needed for each Keras layer. * Remind users of Keras syntax. * Explanation of how to use the layer object that is returned by `pretrained_embedding_layer`. * Provides sample Keras code.* Spelling, grammar and wording corrections. Let's get started! Run the following cell to load the package you are going to use.
###Code
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Baseline model: Emojifier-V1 1.1 - Dataset EMOJISETLet's start by building a simple baseline classifier. You have a tiny dataset (X, Y) where:- X contains 127 sentences (strings).- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence. **Figure 1**: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).
###Code
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
###Output
_____no_output_____
###Markdown
Run the following cell to print sentences from X_train and corresponding labels from Y_train. * Change `idx` to see different examples. * Note that due to the font used by iPython notebook, the heart emoji may be colored black rather than red.
###Code
for idx in range(10):
print(X_train[idx], label_to_emoji(Y_train[idx]))
###Output
never talk to me again 😞
I am proud of your achievements 😄
It is the worst day in my life 😞
Miss you so much ❤️
food is life 🍴
I love you mum ❤️
Stop saying bullshit 😞
congratulations on your acceptance 😄
The assignment is too long 😞
I want to go play ⚾
###Markdown
1.2 - Overview of the Emojifier-V1In this part, you are going to implement a baseline model called "Emojifier-v1". **Figure 2**: Baseline model (Emojifier-V1). Inputs and outputs* The input of the model is a string corresponding to a sentence (e.g. "I love you). * The output will be a probability vector of shape (1,5), (there are 5 emojis to choose from).* The (1,5) probability vector is passed to an argmax layer, which extracts the index of the emoji with the highest probability. One-hot encoding* To get our labels into a format suitable for training a softmax classifier, lets convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$, * Each row is a one-hot vector giving the label of one example. * Here, `Y_oh` stands for "Y-one-hot" in the variable names `Y_oh_train` and `Y_oh_test`:
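For intuition, the one-hot matrix such a helper produces can also be built directly with NumPy; a minimal illustration with made-up labels:

```python
import numpy as np
labels = np.array([0, 3, 2])
print(np.eye(5)[labels])   # each row is the one-hot vector for one label
```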
###Code
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
###Output
_____no_output_____
###Markdown
Let's see what `convert_to_one_hot()` did. Feel free to change `index` to print out different values.
###Code
idx = 50
print(f"Sentence '{X_train[50]}' has label index {Y_train[idx]}, which is emoji {label_to_emoji(Y_train[idx])}", )
print(f"Label index {Y_train[idx]} in one-hot encoding format is {Y_oh_train[idx]}")
###Output
Sentence 'I missed you' has label index 0, which is emoji ❤️
Label index 0 in one-hot encoding format is [ 1. 0. 0. 0. 0.]
###Markdown
All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model! 1.3 - Implementing Emojifier-V1As shown in Figure 2 (above), the first step is to:* Convert each word in the input sentence into their word vector representations.* Then take an average of the word vectors. * Similar to the previous exercise, we will use pre-trained 50-dimensional GloVe embeddings. Run the following cell to load the `word_to_vec_map`, which contains all the vector representations.
###Code
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
###Output
_____no_output_____
###Markdown
You've loaded:- `word_to_index`: dictionary mapping from words to their indices in the vocabulary - (400,001 words, with the valid indices ranging from 0 to 400,000)- `index_to_word`: dictionary mapping from indices to their corresponding words in the vocabulary- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.Run the following cell to check if it works.
###Code
word = "cucumber"
idx = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(idx) + "th word in the vocabulary is", index_to_word[idx])
###Output
the index of cucumber in the vocabulary is 113317
the 289846th word in the vocabulary is potatos
###Markdown
**Exercise**: Implement `sentence_to_avg()`. You will need to carry out two steps:1. Convert every sentence to lower-case, then split the sentence into a list of words. * `X.lower()` and `X.split()` might be useful. 2. For each word in the sentence, access its GloVe representation. * Then take the average of all of these word vectors. * You might use `numpy.zeros()`. Additional Hints* When creating the `avg` array of zeros, you'll want it to be a vector of the same shape as the other word vectors in the `word_to_vec_map`. * You can choose a word that exists in the `word_to_vec_map` and access its `.shape` field. * Be careful not to hard code the word that you access. In other words, don't assume that if you see the word 'the' in the `word_to_vec_map` within this notebook, that this word will be in the `word_to_vec_map` when the function is being called by the automatic grader. * Hint: you can use any one of the word vectors that you retrieved from the input `sentence` to find the shape of a word vector.
###Code
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
"""
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
"""
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
#print("sentence = " + sentence)
words = sentence.lower().split()
#words = [i.lower() for i in sentence.split()]
#print("words lenght = " + str(len(words)))
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((50,)) # <Gema> one dimensional array with 50 elements (we use pretrained 50-dimensional GloVe embeddings)
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
avg += word_to_vec_map[w]
avg = avg/len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = \n", avg)
###Output
avg =
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
###Markdown
**Expected Output**:```Pythonavg =[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983 -0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867 0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767 0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061 0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265 1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925 -0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333 -0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433 0.1445417 0.09808667]``` ModelYou now have all the pieces to finish implementing the `model()` function. After using `sentence_to_avg()` you need to:* Pass the average through forward propagation* Compute the cost* Backpropagate to update the softmax parameters**Exercise**: Implement the `model()` function described in Figure (2). * The equations you need to implement in the forward pass and to compute the cross-entropy cost are below:* The variable $Y_{oh}$ ("Y one hot") is the one-hot encoding of the output labels. $$ z^{(i)} = W . avg^{(i)} + b$$$$ a^{(i)} = softmax(z^{(i)})$$$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Y_{oh,k}^{(i)} * log(a^{(i)}_k)$$**Note** It is possible to come up with a more efficient vectorized implementation. For now, let's use nested for loops to better understand the algorithm, and for easier debugging.We provided the function `softmax()`, which was imported earlier.
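As a tiny numeric illustration of these three equations (made-up numbers, with $n_y = 3$ classes and 2-dimensional word vectors just to keep it readable):

```python
import numpy as np
W = np.array([[0.1, -0.2], [0.0, 0.3], [0.2, 0.1]])   # shape (n_y, n_h)
b = np.array([0.0, 0.1, -0.1])
avg = np.array([0.5, -1.0])                            # averaged word vector
Y_oh = np.array([0, 1, 0])                             # true label is class 1

z = np.dot(W, avg) + b
a = np.exp(z) / np.sum(np.exp(z))                      # softmax
loss = -np.sum(Y_oh * np.log(a))
print(z, a, loss)
```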
###Code
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
"""
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
"""
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.dot(W,avg)+b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = -np.sum(Y_oh[i]*np.log(a))
#cost = -np.sum(np.multiply(Y_oh[i], np.log(a)))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map)
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
###Output
(132,)
(132,)
(132, 5)
never talk to me again
<class 'numpy.ndarray'>
(20,)
(20,)
(132, 5)
<class 'numpy.ndarray'>
###Markdown
Run the next cell to train your model and learn the softmax parameters (W,b).
###Code
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
###Output
Epoch: 0 --- cost = 1.95204988128
Accuracy: 0.348484848485
Epoch: 100 --- cost = 0.0797181872601
Accuracy: 0.931818181818
Epoch: 200 --- cost = 0.0445636924368
Accuracy: 0.954545454545
Epoch: 300 --- cost = 0.0343226737879
Accuracy: 0.969696969697
[[ 3.]
[ 2.]
[ 3.]
[ 0.]
[ 4.]
[ 0.]
[ 3.]
[ 2.]
[ 3.]
[ 1.]
[ 3.]
[ 3.]
[ 1.]
[ 3.]
[ 2.]
[ 3.]
[ 2.]
[ 3.]
[ 1.]
[ 2.]
[ 3.]
[ 0.]
[ 2.]
[ 2.]
[ 2.]
[ 1.]
[ 4.]
[ 3.]
[ 3.]
[ 4.]
[ 0.]
[ 3.]
[ 4.]
[ 2.]
[ 0.]
[ 3.]
[ 2.]
[ 2.]
[ 3.]
[ 4.]
[ 2.]
[ 2.]
[ 0.]
[ 2.]
[ 3.]
[ 0.]
[ 3.]
[ 2.]
[ 4.]
[ 3.]
[ 0.]
[ 3.]
[ 3.]
[ 3.]
[ 4.]
[ 2.]
[ 1.]
[ 1.]
[ 1.]
[ 2.]
[ 3.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 3.]
[ 4.]
[ 4.]
[ 2.]
[ 2.]
[ 1.]
[ 2.]
[ 0.]
[ 3.]
[ 2.]
[ 2.]
[ 0.]
[ 3.]
[ 3.]
[ 1.]
[ 2.]
[ 1.]
[ 2.]
[ 2.]
[ 4.]
[ 3.]
[ 3.]
[ 2.]
[ 4.]
[ 0.]
[ 0.]
[ 3.]
[ 3.]
[ 3.]
[ 3.]
[ 2.]
[ 0.]
[ 1.]
[ 2.]
[ 3.]
[ 0.]
[ 2.]
[ 2.]
[ 2.]
[ 3.]
[ 2.]
[ 2.]
[ 2.]
[ 4.]
[ 1.]
[ 1.]
[ 3.]
[ 3.]
[ 4.]
[ 1.]
[ 2.]
[ 1.]
[ 1.]
[ 3.]
[ 1.]
[ 0.]
[ 4.]
[ 0.]
[ 3.]
[ 3.]
[ 4.]
[ 4.]
[ 1.]
[ 4.]
[ 3.]
[ 0.]
[ 2.]]
###Markdown
**Expected Output** (on a subset of iterations): **Epoch: 0** cost = 1.95204988128 Accuracy: 0.348484848485 **Epoch: 100** cost = 0.0797181872601 Accuracy: 0.931818181818 **Epoch: 200** cost = 0.0445636924368 Accuracy: 0.954545454545 **Epoch: 300** cost = 0.0343226737879 Accuracy: 0.969696969697 Great! Your model has pretty high accuracy on the training set. Lets now see how it does on the test set. 1.4 - Examining test set performance * Note that the `predict` function used here is defined in emo_util.spy.
###Code
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
###Output
Training set:
Accuracy: 0.977272727273
Test set:
Accuracy: 0.857142857143
###Markdown
**Expected Output**: **Train set accuracy** 97.7 **Test set accuracy** 85.7 * Random guessing would have had 20% accuracy given that there are 5 classes. (1/5 = 20%).* This is pretty good performance after training on only 127 examples. The model matches emojis to relevant wordsIn the training set, the algorithm saw the sentence >"*I love you*" with the label ❤️. * You can check that the word "adore" does not appear in the training set. * Nonetheless, lets see what happens if you write "*I adore you*."
###Code
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
###Output
Accuracy: 0.833333333333
i adore you ❤️
i love you ❤️
funny lol 😄
lets play with a ball ⚾
food is ready 🍴
not feeling happy 😄
###Markdown
Amazing! * Because *adore* has a similar embedding as *love*, the algorithm has generalized correctly even to a word it has never seen before. * Words such as *heart*, *dear*, *beloved* or *adore* have embedding vectors similar to *love*. * Feel free to modify the inputs above and try out a variety of input sentences. * How well does it work? Word ordering isn't considered in this model* Note that the model doesn't get the following sentence correct:>"not feeling happy" * This algorithm ignores word ordering, so is not good at understanding phrases like "not happy." Confusion matrix* Printing the confusion matrix can also help understand which classes are more difficult for your model. * A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
###Code
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
###Output
(56,)
❤️ ⚾ 😄 😞 🍴
Predicted 0.0 1.0 2.0 3.0 4.0 All
Actual
0 6 0 0 1 0 7
1 0 8 0 0 0 8
2 2 0 16 0 0 18
3 1 1 2 12 0 16
4 0 0 1 0 6 7
All 9 9 19 13 6 56
###Markdown
What you should remember from this section- Even with a 127 training examples, you can get a reasonably good model for Emojifying. - This is due to the generalization power word vectors gives you. - Emojify-V1 will perform poorly on sentences such as *"This movie is not good and not enjoyable"* - It doesn't understand combinations of words. - It just averages all the words' embedding vectors together, without considering the ordering of words. **You will build a better algorithm in the next section!** 2 - Emojifier-V2: Using LSTMs in Keras: Let's build an LSTM model that takes word **sequences** as input!* This model will be able to account for the word ordering. * Emojifier-V2 will continue to use pre-trained word embeddings to represent words.* We will feed word embeddings into an LSTM.* The LSTM will learn to predict the most appropriate emoji. Run the following cell to load the Keras packages.
###Code
import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)
###Output
Using TensorFlow backend.
###Markdown
2.1 - Overview of the modelHere is the Emojifier-v2 you will implement: **Figure 3**: Emojifier-V2. A 2-layer LSTM sequence classifier. 2.2 Keras and mini-batching * In this exercise, we want to train Keras using mini-batches. * However, most deep learning frameworks require that all sequences in the same mini-batch have the **same length**. * This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time. Padding handles sequences of varying length* The common solution to handling sequences of **different length** is to use padding. Specifically: * Set a maximum sequence length * Pad all sequences to have the same length. Example of padding* Given a maximum sequence length of 20, we could pad every sentence with "0"s so that each input sentence is of length 20. * Thus, the sentence "I love you" would be represented as $(e_{I}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. * In this example, any sentences longer than 20 words would have to be truncated. * One way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set. 2.3 - The Embedding layer* In Keras, the embedding matrix is represented as a "layer".* The embedding matrix maps word indices to embedding vectors. * The word indices are positive integers. * The embedding vectors are dense vectors of fixed size. * When we say a vector is "dense", in this context, it means that most of the values are non-zero. As a counter-example, a one-hot encoded vector is not "dense."* The embedding matrix can be derived in two ways: * Training a model to derive the embeddings from scratch. * Using a pretrained embedding Using and updating pre-trained embeddings* In this part, you will learn how to create an [Embedding()](https://keras.io/layers/embeddings/) layer in Keras* You will initialize the Embedding layer with the GloVe 50-dimensional vectors. * In the code below, we'll show you how Keras allows you to either train or leave fixed this layer. * Because our training set is quite small, we will leave the GloVe embeddings fixed instead of updating them. Inputs and outputs to the embedding layer* The `Embedding()` layer's input is an integer matrix of size **(batch size, max input length)**. * This input corresponds to sentences converted into lists of indices (integers). * The largest integer (the highest word index) in the input should be no larger than the vocabulary size.* The embedding layer outputs an array of shape (batch size, max input length, dimension of word vectors).* The figure shows the propagation of two example sentences through the embedding layer. * Both examples have been zero-padded to a length of `max_len=5`. * The word embeddings are 50 units in length. * The final dimension of the representation is `(2,max_len,50)`. **Figure 4**: Embedding layer Prepare the input sentences**Exercise**: * Implement `sentences_to_indices`, which processes an array of sentences (X) and returns inputs to the embedding layer: * Convert each training sentences into a list of indices (the indices correspond to each word in the sentence) * Zero-pad all these lists so that their length is the length of the longest sentence. 
Additional Hints* Note that you may have considered using the `enumerate()` function in the for loop, but for the purposes of passing the autograder, please follow the starter code by initializing and incrementing `j` explicitly.
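For the graded function, stick to the starter code; outside of this assignment, the zero-padding step itself can also be done with Keras's own utility, which is already imported above as `sequence` (a minimal illustration with made-up index lists):

```python
from keras.preprocessing import sequence
# right-pad two index lists with zeros up to max_len = 5
padded = sequence.pad_sequences([[155345, 225122], [220930, 286375, 69714]],
                                maxlen=5, padding='post')
print(padded)
```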
###Code
for idx, val in enumerate(["I", "like", "learning"]):
print(idx,val)
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
"""
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary containing the each word mapped to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
"""
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m, max_len))
for i in range(m): # loop over training examples
# Convert the ith training sentence in lower case and split is into words. You should get a list of words.
#print("X[i] = " + X[i])
sentence_words =X[i].lower().split()
#print("sentence_words lenght = " + str(len(sentence_words)))
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# Set the (i,j)th entry of X_indices to the index of the correct word.
#print("w = " + w)
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j = j+1
#j += 1
### END CODE HERE ###
return X_indices
###Output
_____no_output_____
###Markdown
Run the following cell to check what `sentences_to_indices()` does, and check your results.
###Code
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =\n", X1_indices)
###Output
X1 = ['funny lol' 'lets play baseball' 'food is ready for you']
X1_indices =
[[ 155345. 225122. 0. 0. 0.]
[ 220930. 286375. 69714. 0. 0.]
[ 151204. 192973. 302254. 151349. 394475.]]
###Markdown
**Expected Output**:```PythonX1 = ['funny lol' 'lets play baseball' 'food is ready for you']X1_indices = [[ 155345. 225122. 0. 0. 0.] [ 220930. 286375. 69714. 0. 0.] [ 151204. 192973. 302254. 151349. 394475.]]``` Build embedding layer* Let's build the `Embedding()` layer in Keras, using pre-trained word vectors. * The embedding layer takes as input a list of word indices. * `sentences_to_indices()` creates these word indices.* The embedding layer will return the word embeddings for a sentence. **Exercise**: Implement `pretrained_embedding_layer()` with these steps:1. Initialize the embedding matrix as a numpy array of zeros. * The embedding matrix has a row for each unique word in the vocabulary. * There is one additional row to handle "unknown" words. * So vocab_len is the number of unique words plus one. * Each row will store the vector representation of one word. * For example, one row may be 50 positions long if using GloVe word vectors. * In the code below, `emb_dim` represents the length of a word embedding.2. Fill in each row of the embedding matrix with the vector representation of a word * Each word in `word_to_index` is a string. * word_to_vec_map is a dictionary where the keys are strings and the values are the word vectors.3. Define the Keras embedding layer. * Use [Embedding()](https://keras.io/layers/embeddings/). * The input dimension is equal to the vocabulary length (number of unique words plus one). * The output dimension is equal to the number of positions in a word embedding. * Make this layer's embeddings fixed. * If you were to set `trainable = True`, then it will allow the optimization algorithm to modify the values of the word embeddings. * In this case, we don't want the model to modify the word embeddings.4. Set the embedding weights to be equal to the embedding matrix. * Note that this is part of the code is already completed for you and does not need to be modified.
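To see the shape convention concretely, here is a small standalone sketch with a made-up vocabulary of 10 words and 4-dimensional embeddings (illustration only; the graded function below builds the real 400,001 x 50 layer):

```python
import numpy as np
from keras.layers import Input
from keras.layers.embeddings import Embedding
from keras.models import Model

toy_indices = Input(shape=(3,), dtype='int32')                    # max_len = 3
toy_embeddings = Embedding(input_dim=10, output_dim=4)(toy_indices)
toy_model = Model(inputs=toy_indices, outputs=toy_embeddings)
print(toy_model.predict(np.array([[1, 2, 0], [4, 3, 0]])).shape)  # -> (2, 3, 4)
```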
###Code
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
"""
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
"""
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
emb_matrix =np.zeros((vocab_len,emb_dim))
# Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
for word, index in word_to_index.items():
emb_matrix[index, :] = word_to_vec_map[word]
    # Define Keras embedding layer with the correct output/input sizes, make it non-trainable. Use Embedding(...). Make sure to set trainable=False.
# <Gema> The Embedding layer is defined as the first hidden layer of a network.
#It must specify 3 arguments:
#input_dim: This is the size of the vocabulary in the text data. For example, if your data is integer encoded to values between 0-10, then the size of the vocabulary would be 11 words.
#output_dim: This is the size of the vector space in which words will be embedded. It defines the size of the output vectors from this layer for each word.
embedding_layer = Embedding(vocab_len,emb_dim,trainable=False)
# <Gema>The output of the Embedding layer is a 2D vector with one embedding for each word in the input sequence of words (input document).
### END CODE HERE ###
# Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
embedding_layer.build((None,))
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
###Output
weights[0][1][3] = -0.3403
###Markdown
**Expected Output**:```Pythonweights[0][1][3] = -0.3403``` 2.3 Building the Emojifier-V2Lets now build the Emojifier-V2 model. * You feed the embedding layer's output to an LSTM network. **Figure 3**: Emojifier-v2. A 2-layer LSTM sequence classifier. **Exercise:** Implement `Emojify_V2()`, which builds a Keras graph of the architecture shown in Figure 3. * The model takes as input an array of sentences of shape (`m`, `max_len`, ) defined by `input_shape`. * The model outputs a softmax probability vector of shape (`m`, `C = 5`). * You may need to use the following Keras layers: * [Input()](https://keras.io/layers/core/input) * Set the `shape` and `dtype` parameters. * The inputs are integers, so you can specify the data type as a string, 'int32'. * [LSTM()](https://keras.io/layers/recurrent/lstm) * Set the `units` and `return_sequences` parameters. * [Dropout()](https://keras.io/layers/core/dropout) * Set the `rate` parameter. * [Dense()](https://keras.io/layers/core/dense) * Set the `units`, * Note that `Dense()` has an `activation` parameter. For the purposes of passing the autograder, please do not set the activation within `Dense()`. Use the separate `Activation` layer to do so. * [Activation()](https://keras.io/activations/). * You can pass in the activation of your choice as a lowercase string. * [Model](https://keras.io/models/model/) Set `inputs` and `outputs`. Additional Hints* Remember that these Keras layers return an object, and you will feed in the outputs of the previous layer as the input arguments to that object. The returned object can be created and called in the same line.```Python How to use Keras layers in two lines of codedense_object = Dense(units = ...)X = dense_object(inputs) How to use Keras layers in one line of codeX = Dense(units = ...)(inputs)```* The `embedding_layer` that is returned by `pretrained_embedding_layer` is a layer object that can be called as a function, passing in a single argument (sentence indices).* Here is some sample code in case you're stuck```Pythonraw_inputs = Input(shape=(maxLen,), dtype='int32')preprocessed_inputs = ... some pre-processingX = LSTM(units = ..., return_sequences= ...)(processed_inputs)X = Dropout(rate = ..., )(X)...X = Dense(units = ...)(X)X = Activation(...)(X)model = Model(inputs=..., outputs=...)...```
###Code
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
### START CODE HERE ###
# Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
# <Gema>first, let's define an string model that will encode sentences into 50-dimensional vectors.
sentence_indices = Input(shape=input_shape,dtype='int32')
# <Gema> The model takes as input an array of sentences of shape (m, max_len, ) defined by input_shape
# Create the embedding layer pretrained with GloVe Vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer, you get back the embeddings
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
# <Gema> https://machinelearningmastery.com/return-sequences-and-return-states-for-lstms-in-keras/
X = LSTM(units=128, return_sequences=True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X trough another LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a single hidden state, not a batch of sequences.
X = LSTM(units=128,return_sequences=False)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors.
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs=sentence_indices, outputs=X)
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose `max_len = 10`. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001\*50 = 20,000,050 non-trainable parameters.
###Code
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 10) 0
_________________________________________________________________
embedding_2 (Embedding) (None, 10, 50) 20000050
_________________________________________________________________
lstm_1 (LSTM) (None, 10, 128) 91648
_________________________________________________________________
dropout_1 (Dropout) (None, 10, 128) 0
_________________________________________________________________
lstm_2 (LSTM) (None, 128) 131584
_________________________________________________________________
dropout_2 (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 645
_________________________________________________________________
activation_1 (Activation) (None, 5) 0
=================================================================
Total params: 20,223,927
Trainable params: 223,877
Non-trainable params: 20,000,050
_________________________________________________________________
###Markdown
As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, the `adam` optimizer and `['accuracy']` metrics:
###Code
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
It's time to train your model. Your Emojifier-V2 `model` takes as input an array of shape (`m`, `max_len`) and outputs probability vectors of shape (`m`, `number of classes`). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
###Code
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
###Output
_____no_output_____
###Markdown
Fit the Keras model on `X_train_indices` and `Y_train_oh`. We will use `epochs = 50` and `batch_size = 32`.
###Code
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
###Output
Epoch 1/50
132/132 [==============================] - 0s - loss: 1.6083 - acc: 0.1970
Epoch 2/50
132/132 [==============================] - 0s - loss: 1.5322 - acc: 0.2955
Epoch 3/50
132/132 [==============================] - 0s - loss: 1.5008 - acc: 0.3258
Epoch 4/50
132/132 [==============================] - 0s - loss: 1.4384 - acc: 0.3561
Epoch 5/50
132/132 [==============================] - 0s - loss: 1.3469 - acc: 0.4545
Epoch 6/50
132/132 [==============================] - 0s - loss: 1.2331 - acc: 0.5076
Epoch 7/50
132/132 [==============================] - 0s - loss: 1.1758 - acc: 0.4470
Epoch 8/50
132/132 [==============================] - 0s - loss: 1.0539 - acc: 0.5758
Epoch 9/50
132/132 [==============================] - 0s - loss: 0.8765 - acc: 0.7121
Epoch 10/50
132/132 [==============================] - 0s - loss: 0.8228 - acc: 0.6970
Epoch 11/50
132/132 [==============================] - 0s - loss: 0.7027 - acc: 0.7500
Epoch 12/50
132/132 [==============================] - 0s - loss: 0.6004 - acc: 0.8030
Epoch 13/50
132/132 [==============================] - 0s - loss: 0.4932 - acc: 0.8333
Epoch 14/50
132/132 [==============================] - 0s - loss: 0.5094 - acc: 0.8333 - ETA: 0s - loss: 0.5157 - acc: 0.828
Epoch 15/50
132/132 [==============================] - 0s - loss: 0.4786 - acc: 0.8258
Epoch 16/50
132/132 [==============================] - 0s - loss: 0.3540 - acc: 0.8636
Epoch 17/50
132/132 [==============================] - 0s - loss: 0.3902 - acc: 0.8636
Epoch 18/50
132/132 [==============================] - 0s - loss: 0.6484 - acc: 0.8106
Epoch 19/50
132/132 [==============================] - 0s - loss: 0.5179 - acc: 0.8182
Epoch 20/50
132/132 [==============================] - 0s - loss: 0.3960 - acc: 0.8409
Epoch 21/50
132/132 [==============================] - 0s - loss: 0.4723 - acc: 0.8182
Epoch 22/50
132/132 [==============================] - 0s - loss: 0.3892 - acc: 0.8636
Epoch 23/50
132/132 [==============================] - 0s - loss: 0.3795 - acc: 0.8561
Epoch 24/50
132/132 [==============================] - 0s - loss: 0.3056 - acc: 0.9091
Epoch 25/50
132/132 [==============================] - 0s - loss: 0.3489 - acc: 0.8864
Epoch 26/50
132/132 [==============================] - 0s - loss: 0.2422 - acc: 0.9394
Epoch 27/50
132/132 [==============================] - 0s - loss: 0.3179 - acc: 0.8864
Epoch 28/50
132/132 [==============================] - 0s - loss: 0.2402 - acc: 0.9318
Epoch 29/50
132/132 [==============================] - 0s - loss: 0.3943 - acc: 0.8712
Epoch 30/50
132/132 [==============================] - 0s - loss: 0.2677 - acc: 0.9091
Epoch 31/50
132/132 [==============================] - 0s - loss: 0.2955 - acc: 0.8864
Epoch 32/50
132/132 [==============================] - 0s - loss: 0.2040 - acc: 0.9318
Epoch 33/50
132/132 [==============================] - 0s - loss: 0.2124 - acc: 0.9470
Epoch 34/50
132/132 [==============================] - 0s - loss: 0.1566 - acc: 0.9621
Epoch 35/50
132/132 [==============================] - 0s - loss: 0.1635 - acc: 0.9621
Epoch 36/50
132/132 [==============================] - 0s - loss: 0.1874 - acc: 0.9394
Epoch 37/50
132/132 [==============================] - 0s - loss: 0.1776 - acc: 0.9470
Epoch 38/50
132/132 [==============================] - 0s - loss: 0.2140 - acc: 0.9394
Epoch 39/50
132/132 [==============================] - 0s - loss: 0.1389 - acc: 0.9621
Epoch 40/50
132/132 [==============================] - 0s - loss: 0.1530 - acc: 0.9545
Epoch 41/50
132/132 [==============================] - 0s - loss: 0.0870 - acc: 0.9848
Epoch 42/50
132/132 [==============================] - 0s - loss: 0.0799 - acc: 0.9773
Epoch 43/50
132/132 [==============================] - 0s - loss: 0.0801 - acc: 0.9848
Epoch 44/50
132/132 [==============================] - 0s - loss: 0.0492 - acc: 0.9924
Epoch 45/50
132/132 [==============================] - 0s - loss: 0.0787 - acc: 0.9848
Epoch 46/50
132/132 [==============================] - 0s - loss: 0.1068 - acc: 0.9773
Epoch 47/50
132/132 [==============================] - 0s - loss: 0.1492 - acc: 0.9470 - ETA: 0s - loss: 0.1518 - acc: 0.945
Epoch 48/50
132/132 [==============================] - 0s - loss: 0.3031 - acc: 0.9242
Epoch 49/50
132/132 [==============================] - 0s - loss: 0.1150 - acc: 0.9773
Epoch 50/50
132/132 [==============================] - 0s - loss: 0.1831 - acc: 0.9394
###Markdown
Your model should reach around **90% to 100% accuracy** on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
###Code
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
###Output
32/56 [================>.............] - ETA: 0s
Test accuracy = 0.821428562914
###Markdown
You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
###Code
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
###Output
Expected emoji:😄 prediction: she got me a nice present ❤️
Expected emoji:😞 prediction: work is hard 😄
Expected emoji:😞 prediction: This girl is messing with me ❤️
Expected emoji:🍴 prediction: any suggestions for dinner 😄
Expected emoji:❤️ prediction: I love taking breaks 😞
Expected emoji:😄 prediction: you brighten my day ❤️
Expected emoji:😄 prediction: will you be my valentine ❤️
Expected emoji:🍴 prediction: See you at the restaurant 😄
Expected emoji:😞 prediction: go away ⚾
Expected emoji:🍴 prediction: I did not have breakfast ❤️
###Markdown
Now you can try it on your own example. Write your own sentence below.
###Code
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
###Output
not feeling happy 😞
|
MNIST/mnist_cnn.ipynb | ###Markdown
**CNN for the MNIST Dataset** **Import libraries**
###Code
import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.tensorflow_backend.set_image_dim_ordering('th')
seed = 7
numpy.random.seed(seed)
###Output
_____no_output_____
###Markdown
**Load the Dataset**
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
**reshape to be [samples][channels][width][height]**
###Code
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
###Output
_____no_output_____
###Markdown
**normalize inputs from 0-255 to 0-1**
###Code
X_train = X_train / 255
X_test = X_test / 255
###Output
_____no_output_____
###Markdown
**one hot encode outputs**
###Code
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
###Output
_____no_output_____
###Markdown
**define a simple CNN model**
###Code
def baseline_model():
# create model
model = Sequential()
model.add(Convolution2D(32, 5, 5, input_shape=(1, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
**Build and Fit the model**
###Code
model = baseline_model()
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200, verbose=1)
###Output
_____no_output_____
###Markdown
**Final evaluation of the model**
###Code
scores = model.evaluate(X_test, y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
###Output
_____no_output_____ |
Fundamentals of Data Analysis - Assignments .ipynb | ###Markdown
Fundamentals of Data Analysis - AssignmentsThis file contains the output of each assignment as part of the Fundamentals of Data Analysis module. OverviewThis module will contain four different assignments to be completed* [Assignment 1 Counts](t1)* [Assignment 2 Diceroll](t2)* [Assignment 3 Coin Flip](t3)* [Assignment 4 Simpson's Paradox](t4)*** Assignment 1 Counts Write a Python function called counts that takes a list as input and returns a dictionary of unique items in the list as keys and the number of times each item appears as values. So, the input ['A', 'A', 'B', 'C', 'A'] should have output {'A': 3, 'B': 1, 'C': 1} References Assignment 1[1] Tutorialspoint, Counting Frequencies in a list using a dictionary in Python; https://www.tutorialspoint.com/counting-the-frequencies-in-a-list-using-dictionary-in-python[2] Kite, How to count a frequency in Python; https://www.kite.com/python/answers/how-to-count-item-frequency-in-python[3] W3schools, Python List count() Method https://www.w3schools.com/python/ref_list_count.asp[4] GMIT lecture Video, For loops, Ian McLoughlin; https://web.microsoftstream.com/video/8492c53c-a684-4da9-a2c5-bce1d5c367a9[5] Adding item to a dictionary; https://www.w3schools.com/python/python_dictionaries.asp Lists for Function Test
###Code
#List 1 - upper case letters
list1 = ['A','A','B','C','A']
#List 2 - lower case letters
list2 = ['a','a','b','c','a']
#list 3 - mix of lower and upper case
list3 = ['A','a','B','c','C','A']
#List 4 - mix of upper case letters, lower case letters, string symbols, integers, floats, words
list4 = [1,2,3,4.56,"22","22","@@","@@","@@",'###','hello', 'hello',1255,1255.0]
#List 5 - integers
list5 = [1,1,1,1,3,4,5,6,7,7,7,7,7,7,7,9,9,9,9,9,9,9,9,9]
#List 6 - floats
list6 = [0.5654,0,5654,.11,1.23,1.23,1.23]
#List 7 - Names
list7 = ['Conor','Conor','Leo','Leo','Leo','Nancy','Rose','Connie','Aideen','Aideen']
#List 8 - Joined Lists
list8 = list1 + list5
###Output
_____no_output_____
###Markdown
Counts Function
###Code
#https://stackoverflow.com/questions/1801668/convert-a-python-list-with-strings-all-to-lowercase-or-uppercase
#https://www.python-course.eu/python3_lambda.php
#count function version 2 - case sensitivity
def counts(l):
"""
A function to count the items in a list and returns a Dictionary with Keys and Counts
"""
#var st (string) - lambda function to change all items to a string
#required for integer and float values with a list
st = list(map(lambda x: str(x),l))
#var cs (case senstitive) - Lambda function to change all items to lowercase
cs = list(map(lambda x: x.lower(),st))
#variable to create an empty dictionary
frequency = {}
#a for loop to loop through list items
for list_item in cs:
#loop through list and add items to dict with a count
        #list item is the dictionary key, cs.count(list_item) is the dictionary value
#.title to capitalize the Key value of dictionary
frequency[list_item.title()] = cs.count(list_item)
#print output
return frequency
###Output
_____no_output_____
###Markdown
Function testTest the function with different types of lists
###Code
#Test1
print("Test 1: Uppercase Letters")
print(counts(list1))
print(" ")
#Test2
print("Test 2: Lowercase Letters")
print(counts(list2))
print(" ")
#Test3
print("Test 3: mix of lower and upper case:")
print(counts(list3))
print(" ")
#Test4
print("Test 4: mix of upper case leters, lower case letter, string symbols, integers, floats, words")
print(counts(list4))
print(" ")
#Test5
print("Test 5:integers ")
print(counts(list5))
print(" ")
#Test6
print("Test 6: floating numbers")
print(counts(list6))
print("")
#Test7
print("Test 7: words, names etc.")
print(counts(list7))
print(" ")
print('Test 8: Joined Lists')
#Test8
print(counts(list8))
###Output
Test 1: Uppercase Letters
{'A': 3, 'B': 1, 'C': 1}
Test 2: Lowercase Letters
{'A': 3, 'B': 1, 'C': 1}
Test 3: mix of lower and upper case:
{'A': 3, 'B': 1, 'C': 2}
Test 4: mix of upper case letters, lower case letters, string symbols, integers, floats, words
{'1': 1, '2': 1, '3': 1, '4.56': 1, '22': 2, '@@': 3, '###': 1, 'Hello': 2, '1255': 1, '1255.0': 1}
Test 5:integers
{'1': 4, '3': 1, '4': 1, '5': 1, '6': 1, '7': 7, '9': 9}
Test 6: floating numbers
{'0.5654': 1, '0': 1, '5654': 1, '0.11': 1, '1.23': 3}
Test 7: words, names etc.
{'Conor': 2, 'Leo': 3, 'Nancy': 1, 'Rose': 1, 'Connie': 1, 'Aideen': 2}
Test 8: Joined Lists
{'A': 3, 'B': 1, 'C': 1, '1': 4, '3': 1, '4': 1, '5': 1, '6': 1, '7': 7, '9': 9}
###Markdown
*** Assignment 2 Diceroll Write a Python function called dicerolls that simulates rolling dice. Your function should take two parameters: the number of dice k and the number of times to roll the dice n. The function should simulate randomly rolling k dice n times, keeping track of each total face value. It should then return a dictionary with the number of times each possible total face value occurred. So, calling the function as diceroll(k=2, n=1000) should return a dictionary like: {'2': 19, '3': 50, '4': 82} References Assignment 2[6] Numpy documentation, numpy.random.generator; https://numpy.org/doc/stable/reference/random/generator.htmlnumpy.random.Generator https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.integers.htmlnumpy.random.Generator.integers[7] Numpy documentation, numpy.random.integers; https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.integers.htmlnumpy.random.Generator.integers[8] Stackoverflow, dice simulation; https://stackoverflow.com/questions/33069476/simulating-rolling-2-dice-in-python[9] Stackoverflow, sorting dictionaries in python; https://stackoverflow.com/questions/20577840/python-dictionary-sorting-in-descending-order-based-on-values[10] Stackoverflow, plotting with dictionary data; https://stackoverflow.com/questions/53431971/plotting-histogram-on-python-with-dictionary[11] W3schools, nested loops; https://www.w3schools.com/python/gloss_python_for_nested.asp[12] Quora, finding possible outcomes of each dice; https://www.quora.com/If-4-dice-are-rolled-what-is-the-probability-of-getting-a-sum-of-5 Diceroll function
###Code
import numpy as np
import matplotlib.pyplot as plt
#import operator
rng = np.random.default_rng()
#initial blank function
def diceroll(k,n):
"""
    A function to simulate rolling k dice n times and return a dictionary of counts for each total face value
"""
#variable to create an empty dictionary
resultdict = {}
#list for dice numbers rolled
result = []
#dice counter for number of dice
dice = 0
#for loop through number of throws
for i in range(n):
#for loop for number of dice used
for i in range(k):
            #random number from numpy integers function, += used in loop
            #integers returns a random integer between 1 and 6; endpoint=True makes 6 inclusive
dice += rng.integers(1,6,endpoint=True)
        #append the random total to the result list
result.append(dice)
#create dictionary key and values
resultdict[dice] = result.count(dice)
#reset of count of dice to loop through again
dice = 0
#s = dict(sorted(resultdict.items(),reverse=False))
#return a dictionary with keys sorted in ascending value
return dict(resultdict.items())#,reverse=False)) sorted
###Output
_____no_output_____
###Markdown
Plotting the Function
###Code
#variables for each simulation
# one dice a thousand times
rollone = diceroll(1,1000)
#two dice a thousand times
rolltwo = diceroll(2,1000)
#three dice a thousand times
rollthree = diceroll(3,1000)
#four dice a thousand times
rollfour = diceroll(4,1000)
# extracting dictionary key and value for plotting
labels1, values1 = zip(*rollone.items())
labels2, values2 = zip(*rolltwo.items())
labels3, values3 = zip(*rollthree.items())
labels4, values4 = zip(*rollfour.items())
###Output
_____no_output_____
###Markdown
Bar charts to illustrate the use of different number of dice used in the function
###Code
#for plot display sizing
# ref https://stackoverflow.com/questions/36367986/how-to-make-inline-plots-in-jupyter-notebook-larger
plt.rcParams['figure.figsize'] = [18, 14]
# plot style
#ref Dr Ian Mcloughlin lectures
plt.style.use('ggplot')
#subplot title
plt.suptitle("Plots of the different number of Dice Rolled",fontsize=24 )
#subplots 2 rows, 2 columns
plt.subplot(2,2,1)
#barplot
plt.bar(labels1,values1,color='blue',alpha=0.5)
#title
plt.title("Plot 1 1: One Dice Rolled 1000 Times")
#ylabel
plt.ylabel('Sum of Values')
#xlabel
plt.xlabel('Dice Totals')
plt.subplot(2,2,2)
#barplot
plt.bar(labels2,values2,color='blue',alpha=0.5)
#title
plt.title("Plot 2: Two Dice Rolled 1000 Times")
#ylabel
plt.ylabel('Sum of Values')
#xlabel
plt.xlabel('Dice Totals')
plt.subplot(2,2,3)
#barplot
plt.bar(labels3,values3,color='blue',alpha=0.5)
#title
plt.title("Plot 3: Three Dice Rolled 1000 Times")
#ylabel
plt.ylabel('Sum of Values')
#xlabel
plt.xlabel('Dice Totals')
plt.subplot(2,2,4)
#barplot
plt.bar(labels4,values4,color='blue',alpha=0.5)
#title
plt.title("Plot 4: Four Dice Rolled 1000 Times")
#ylabel
plt.ylabel('Sum of Values')
#xlabel
plt.xlabel('Dice Totals');
###Output
_____no_output_____
###Markdown
Possible outcomes depending on number of dice used We can observe from the above plot how the number of dice used changes the distribution.For one dice used there are 6 possible outcomes, for two there are 36, for 4 there are 1296!!!If we take rolling a total value of 2, in plot 1 there is a 1 in 6 chance of getting two. Using two dice this is 1 in 36. Using more than two dice you can't get a value of two.The more dice you use, the more possible combinations there are to get different totals. For two dice, there are more combinations of rolling a total of 7 than any other total value.1+6, 2+5, 3+4, 4+3, 5+2, 6+1 are all the different combinations that produce a total value of 7. We can see this clearly illustrated in plot 2.In the code below we can see the different possible outcomes for each total depending on how many dice are thrown. The number of outcomes determines the % probability.$P(E) = \frac{n(E)}{n(S)}$For example: For Two Dice the probability of rolling a total of 7* n(E) = 6* n(S) = 36 (6**2)* P(E) = 6/36 which is approx *17%* We can create a dictionary using our Counts function to see the different number of possible outcomes for creating different totals depending on how many dice are used.
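As a quick check of the arithmetic above, the two-dice probabilities can also be computed directly by enumerating all 36 equally likely outcomes (a minimal sketch, independent of the counts-based code below):

```python
from itertools import product

# all 36 equally likely (die1, die2) outcomes for two fair dice
totals = [a + b for a, b in product(range(1, 7), repeat=2)]

# P(total) = favourable outcomes / 36
probs = {t: totals.count(t) / len(totals) for t in range(2, 13)}

print(probs[7])  # 6/36, roughly 0.17
```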
###Code
#creating a dictionary using our Counts function to see the different number of possible outcomes for creating different totals
#depending on how many dice are used
#empty lists to append each dice number range
onedicepo = []
twodicepo = []
threedicepo = []
fourdicepo = []
#nested for loops for 4 different dice
for d1 in range(1,7,1):
#append first range in the first list
onedicepo.append(d1)
for d2 in range(1,7,1):
#variable combines two dice together
twodice = d1 + d2
twodicepo.append(twodice)
for d3 in range(1,7,1):
threedice = d1+d2+d3
threedicepo.append(threedice)
for d4 in range(1,7,1):
fourdice = d1+d2+d3+d4
fourdicepo.append(fourdice)
#Utilising the counts function we created in Assignment 1
print("Possible Outcomes of Totals for 1 Dice")
print(counts(onedicepo))
print(" ")
print("Possible Outcomes of Totals for 2 Dice")
print(counts(twodicepo))
print(" ")
print("Possible Outcomes of Totals for 3 Dice")
print(counts(threedicepo))
print(" ")
print("Possible Outcomes of Totals for 4 Dice")
print(counts(fourdicepo))
# extracting dictionary key and value for plotting
labels_five, values_five = zip(*counts(twodicepo).items())
#subplots title
plt.suptitle("Comparison of Two Dice Rolled 1000 Times V Possible Outcomes",fontsize=18 )
plt.subplot(2,1,1)
#barplot color blue, transparency .50
plt.bar(labels2,values2,color='b',alpha=0.5)
#title
plt.title("Two Dice Rolled 1000 Times")
#ylabel
plt.ylabel('Sum of Values')
#xlabel
plt.xlabel('Dice Totals')
plt.subplot(2,1,2)
#barplot, color green, transparency .50
plt.bar(labels_five,values_five,color='green',alpha=0.5)
#title
plt.title("Possible Outcomes of Two Dice Totals")
#ylabel
plt.ylabel('Possible Outcomes')
#xlabel
plt.xlabel('Dice Totals');
###Output
_____no_output_____
###Markdown
We can observe from the above plots for two dice rolled 1000 times and the possible outcome totals for two dice that their distribution is very similar!*** Assignment 3 Coin Flip Write some python code that simulates flipping a coin 100 times. Then run this code 1,000 times, keeping track of the number of heads in each of the 1,000 simulations. Select an appropriate plot to depict the resulting list of 1,000 numbers, showing that it roughly follows a bell-shaped curve. References Assignment 3[13] Numpy random choice; https://www.sharpsightlabs.com/blog/numpy-random-choice/ [14] Numpy Documentation, Simple random data, numpy.choice; https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.choice.htmlnumpy.random.Generator.choice[15] Numpy Documentation, Distributions, binomial; https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.binomial.htmlnumpy.random.Generator.binomial[16] Seaborn, Distplots; https://seaborn.pydata.org/generated/seaborn.distplot.html[17] Seaborn, visualising distributions; https://seaborn.pydata.org/tutorial/distributions.html Task 3 - Coin FlipFor this task we will use numpy's binomial distribution function, part of the numpy random sub-package.A binomial distribution can be thought of as simply the probability of a SUCCESS or FAILURE outcome in an experiment or survey that is repeated multiple times. The binomial is a type of distribution that has two possible outcomes (the prefix "bi" means two, or twice)[12](r12).The probability density for the Binomial Distribution is:$ P(N) = \binom{n}{N}p^N(1-p)^{n-N} $For example, if a fair coin is flipped once, the result must be either Heads or Tails (True or False). If one die is rolled once the result has to be either 1, 2, 3, 4, 5 or 6, and the probability of any one number is 1/6. If a new medicine is introduced to cure a disease, it either works or not.The Binomial Distribution is a discrete version of the Normal Distribution. The more trials used in the Binomial Distribution, the more it resembles a Normal Distribution.The general prerequisites of a binomial distribution are:* The number of observations *n* is fixed.* Two potential outcomes, it either happens (Success) or it doesn't (Failure)* Each observation is independent from the next.* Probability of *success* p is the same for each outcome Binomial function*binomial(n, p, size=None)**Draw samples from a binomial distribution* [13](r13) Parameters of the function* n: must be greater than or equal to 0.* p: float value between 0 - 1 inclusive* size: test size of running the sample* will return an array the size of the input for the *size* parameter Flip a coin 100 Times
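Before simulating, it can be reassuring to check one point of the formula above against numpy's generator (a minimal sketch; it creates its own generator so the snippet stands alone, and the ~0.08 figure is the approximate analytic value for exactly 50 heads in 100 fair flips):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 100, 0.5, 50
# analytic probability of exactly k heads: C(n, k) * p^k * (1-p)^(n-k)
exact = math.comb(n, k) * p**k * (1 - p)**(n - k)

# empirical estimate from 100,000 binomial draws
draws = rng.binomial(n, p, size=100_000)
empirical = (draws == k).mean()

print(exact, empirical)  # both roughly 0.08
```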
###Code
#import seaborn package for plotting coin toss results
import seaborn as sns
#p value in function parameter
success = .5
#size is array size
size = 1000
#different variables for the amount of flips observed
ten_flips = rng.binomial(10,success,size)
one_thousand_flips = rng.binomial(1000,success,size)
ten_thousand_flips = rng.binomial(10000,success,size)
one_hundred_flips = rng.binomial(100,success,size)
print(counts(one_thousand_flips))
#title of subplots
plt.suptitle('Coin Flip Comparison',fontsize=22)
plt.subplot(2,2,1)
#dist plot with KDE set as False
sns.distplot(ten_flips,kde=False,label= f"10 Flips $p$={success}",color='blue')
#legend
plt.legend()
#title
plt.title('10 Coin Flips',fontsize=18)
plt.subplot(2,2,2)
#distplot with KDE set as False
sns.distplot(one_hundred_flips,kde=False,label= f"100 Flips $p$={success}",color='orange')
#legend
plt.legend()
#title
plt.title('100 Coin Flips',fontsize=18)
plt.subplot(2,2,3)
#distplot wih KDE set as False
sns.distplot(one_thousand_flips,kde=False,label= f"1000 Flips $p$={success}",color='green' )
#legend
plt.legend()
#title
plt.title(f'1000 Coin Flips',fontsize=18)
plt.subplot(2,2,4)
#distplot with KDE set as False
sns.distplot(ten_thousand_flips,kde=False,label= f"10000 Flips $p$={success}",color='red' )
#legend
plt.legend()
#title
plt.title(f'10000 Coin Flips',fontsize=18);
#plot title
plt.title('Coin Flip Density Comparison',fontsize=16)
#Kernel Density (kde) plots for 10, 100 and 1000 flips
sns.distplot(ten_flips,hist=False,kde_kws = {'shade': True, 'linewidth': 3},label= f"10 Flips $p$={success}")
sns.distplot(one_hundred_flips,hist=False,kde_kws = {'shade': True, 'linewidth': 3},label= f"100 Flips $p$={success}")
sns.distplot(one_thousand_flips,hist=False,kde_kws = {'shade': True, 'linewidth': 3},label= f"1000 Flips $p$={success}",color='green');
###Output
_____no_output_____
###Markdown
As we can observe from the above plots, the more times the coin is flipped the more the data becomes normally distributed. The chances of a small number of heads or a large number of heads are very small, while the chances of getting an average amount of heads are in the middle.*** Assignment 4 Simpson's Paradox Simpson's paradox is a well-known statistical paradox where a trend evident in a number of groups reverses when the groups are combined into one big data set. Use numpy to create four data sets, each with an x array and a corresponding y array, to demonstrate Simpson's paradox. You might create your x arrays using numpy.linspace and create the y array for each x using notation like y = a * x + b where you choose the a and b for each x, y pair to demonstrate the paradox. References Assignment 4[18] Britannica.com, Simpson's paradox; https://www.britannica.com/topic/Simpsons-paradox[19] Brilliant.org, simpson's paradox; https://brilliant.org/wiki/simpsons-paradox/[20] Towardsdatascience.com, How to prove two opposite arguments using one dataset; https://towardsdatascience.com/simpsons-paradox-how-to-prove-two-opposite-arguments-using-one-dataset-1c9c917f5ff9[21] Seaborn, lmplot; https://seaborn.pydata.org/generated/seaborn.lmplot.html[22] Seaborn, relplot, https://seaborn.pydata.org/generated/seaborn.relplot.html [23] Seaborn, relplot, https://seaborn.pydata.org/generated/seaborn.relplot.html[24] Youtube, minutephysics, Simpson's Paradox; https://www.youtube.com/watch?v=ebEkn-BiW5k[25] Youtube, Dr. Trefor Bazett, How SIMPSON'S PARADOX explains weird COVID19 statistics; https://www.youtube.com/watch?v=t-Ci3FosqZs[26] numpy, polyfit function; https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html Task 4 Simpson's Paradox*Simpson's paradox (or Yule-Simpson effect), in statistics, is a phenomenon where one particular trend shown in groups of data is reversed when the groups are combined together. In order to interpret data, correctly understanding and identifying this paradox is of critical importance.* [19]Four different datasets will be created to illustrate the paradox. These four datasets will then be combined.Each dataset will be affected by the same random noise and the slope increase will be the same for each group, but the linspace values and intercept values will differ.For each dataset, numpy's *polyfit* function will be applied to illustrate the *best fit* line for each group of data.This will then all be plotted to visually illustrate Simpson's paradox. The slope of each dataset will also be printed to show that the combined dataset will have a negative slope. Creating 4 variables
###Code
#noise variable using normal distribution function
#mean of 10
#stdev of 15 for more noise
noise = rng.normal(10,15,100)
#numbers from 0 to 5 split up into 100 segments
x1 = np.linspace(0,5,100)
#slope is x increased by 10 for every point with an intercept of 100 plus normally distributed noise
y1 = 10* x1 + 100+noise
#ordinary least square polyfit on the dataset
coeffs1 = np.polyfit(x1,y1,1)
#numbers from 5 to 10 split up into 100 segments
x2 = np.linspace(5,10,100)
#slope is x increased by 10 for every point with an intercept of 75 plus normally distributed noise
y2 = 10* x1 + 75+noise
#ordinary least square polyfit on the dataset
coeffs2 = np.polyfit(x2,y2,1)
#numbers from 10 to 15 split up into 100 segments
x3 = np.linspace(10,15,100)
#slope is x increased by 10 for every point with an intercept of 50 plus normally distributed noise
y3 = 10* x1 + 50+noise
#ordinary least square polyfit on the dataset
coeffs3 = np.polyfit(x3,y3,1)
#numbers from 15 to 20 split up into 100 segments
x4 = np.linspace(15,20,100)
#slope is x increased by 10 for every point with an intercept of 25 plus normally distributed noise
y4 = 10* x1 + 25+noise
#ordinary least square polyfit on the dataset
coeffs4 = np.polyfit(x4,y4,1)
###Output
_____no_output_____
###Markdown
Combining the datasetsThe four different datasets from above are concatenated together to make one overall dataset. The *best fit* line will be created using numpy's polyfit function and we will plot the combined dataset and the *best fit* line.
###Code
#combines varaible
#numpy concatenate function
x= np.concatenate([x1,x2,x3,x4])
y = np.concatenate([y1,y2,y3,y4])
#coefficient of combined data
combine_coeffs = np.polyfit(x,y,1)
print('The slope of the combined dataset is:',combine_coeffs[0])
###Output
The slope of the combined dataset is: -4.032717380112218
###Markdown
Plotting Combined DatasetIn the plot below we can see that the slope is negative and the *best fit* line trends downwards.
###Code
#plotting scattered data
plt.plot(x,y,'.',color='black',label='Combined Dataset')
#plotting coeffeicent values
plt.plot(x,combine_coeffs[0] *x + combine_coeffs[1],'--',color='black',linewidth=4, label = 'Best Fit line: Combined Dataset')
plt.title("Combined Dataset",fontsize=18)
plt.legend();
###Output
_____no_output_____
###Markdown
However if we show the datasets individually we can illustrate Simpson's paradox. Hence, Simpson's paradox occurs when the trend in a group of data reverses when the groups are combined.In the plot below, each group has a different trend to the combined dataset.
###Code
#plotting scattered data
plt.plot(x1,y1,'.',label='Dataset 1',color='b')
plt.plot(x2,y2,'.',label='Dataset 2',color='orange')
plt.plot(x3,y3,'.',label='Dataset 3',color='green')
plt.plot(x4,y4,'.',label='Dataset 4',color='red')
#plotting coeffeicent values
plt.plot(x1,coeffs1[0] *x1 + coeffs1[1],color='blue',label = 'Best Fit line: Dataset 1')
plt.plot(x2,coeffs2[0] *x2 + coeffs2[1],color='orange',label = 'Best Fit line: Dataset 2')
plt.plot(x3,coeffs3[0] *x3 + coeffs3[1],color='green',label = 'Best Fit line: Dataset 3')
plt.plot(x4,coeffs4[0] *x4 + coeffs4[1],color='red', label = 'Best Fit line: Dataset 4')
plt.plot(x,combine_coeffs[0] *x + combine_coeffs[1],'--',color='black',linewidth=4, label = 'Best Fit line: Combined Dataset')
#title
plt.title("Example of Simpson's Paradox",fontsize=18)
#legend
plt.legend();
###Output
_____no_output_____
###Markdown
The slope for each group of data is positive but when combined becomes negative. This also shows the presence of Simpson's paradox.
###Code
print('Slope for each variable')
print('Slope Dataset 1:',coeffs1[0])
print('Slope Dataset 2:',coeffs2[0])
print('Slope Dataset 3:',coeffs3[0])
print('Slope Dataset 4:',coeffs4[0])
print('Slope Combined Dataset:',combine_coeffs[0])
###Output
Slope for each variable
Slope Dataset 1: 10.189210248930946
Slope Dataset 2: 10.189210248930943
Slope Dataset 3: 10.18921024893094
Slope Dataset 4: 10.189210248930946
Slope Combined Dataset: -4.032717380112218
|
turkiye-student-evaluation_generic/Turkiye-Student-Evaluation-Data-Set.ipynb | ###Markdown
Turkiye-Student-Evaluation-Data-Set Problem Statement:- In this project we are basically going to perform clustering on the given data and these clusters will signify different categories of students based on the marks, content, course and other features. The clustering algorithm used in the project is K-Means Clustering. The aim is to cluster the data on the basis of the given features, which will ultimately cluster together the student's with similar performance. Attribute Information:-The dataset has 5820 instances with 33 attributes. The description of each column is given below:instr: Instructor's identifier; values taken from {1,2,3}class: Course code (descriptor); values taken from {1-13}repeat: Number of times the student is taking this course; values taken from {0,1,2,3,...}attendance: Code of the level of attendance; values from {0, 1, 2, 3, 4}difficulty: Level of difficulty of the course as perceived by the student; values taken from {1,2,3,4,5}Q1: The semester course content, teaching method and evaluation system were provided at the start.Q2: The course aims and objectives were clearly stated at the beginning of the period.Q3: The course was worth the amount of credit assigned to it.Q4: The course was taught according to the syllabus announced on the first day of class.Q5: The class discussions, homework assignments, applications and studies were satisfactory.Q6: The textbook and other courses resources were sufficient and up to date.Q7: The course allowed field work, applications, laboratory, discussion and other studies.Q8: The quizzes, assignments, projects and exams contributed to helping the learning.Q9: I greatly enjoyed the class and was eager to actively participate during the lectures.Q10: My initial expectations about the course were met at the end of the period or year.Q11: The course was relevant and beneficial to my professional development.Q12: The course helped me look at life and the world with a new perspective.Q13: The Instructor's knowledge was relevant and up to date.Q14: The Instructor came prepared for classes.Q15: The Instructor taught in accordance with the announced lesson plan.Q16: The Instructor was committed to the course and was understandable.Q17: The Instructor arrived on time for classes.Q18: The Instructor has a smooth and easy to follow delivery/speech.Q19: The Instructor made effective use of class hours.Q20: The Instructor explained the course and was eager to be helpful to students.Q21: The Instructor demonstrated a positive approach to students.Q22: The Instructor was open and respectful of the views of students about the course.Q23: The Instructor encouraged participation in the course.Q24: The Instructor gave relevant homework assignments/projects, and helped/guided students.Q25: The Instructor responded to questions about the course inside and outside of the course.Q26: The Instructor's evaluation system (midterm and final questions, projects, assignments, etc.) effectively measured the course objectives.Q27: The Instructor provided solutions to exams and discussed them with students.Q28: The Instructor treated all students in a right and objective manner.Q1-Q28 are all Likert-type, meaning that the values are taken from {1,2,3,4,5} We will be implementing the following steps to achieve the final result:-1. Importing the necessary Libraries.2. Importing the dataset.3. Exploratory Data Analysis4. Performing feature engineering i.e. modifying existing variables and creating new ones for analysis.5. Building The model.6. 
Visualising the results. Step-1: Importing the Libraries
###Code
# import numpy as np for processing data
# import pandas as pd for importing the data and working with data
# import matplotlib.pyplot as plt for visualisation
# import seaborn as sns for data visualization
# from sklearn.preprocessing import StandardScaler for data preprocessing
# from sklearn.preprocessing import normalize for normalizing your data
# from sklearn.decomposition import PCA for dimensionality reduction
# from sklearn.cluster import KMeans for implementing the clustering algorithm
###Output
_____no_output_____
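For reference, the placeholder comments above could be filled in roughly as follows (a minimal sketch limited to the libraries listed in the cell):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler, normalize
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
```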
###Markdown
Step-2: Importing the dataset
###Code
# Use pd.read_csv('filename.csv') to import the necessary file
###Output
_____no_output_____
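A possible way to load the file — the filename below is a placeholder and should be replaced with the actual CSV shipped with the dataset; the variable name `dataset` matches the later cells that refer to `dataset.corr()`:

```python
# hypothetical filename; adjust the path to wherever the CSV was downloaded
dataset = pd.read_csv('turkiye-student-evaluation_generic.csv')
```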
###Markdown
Seeing the dataset:
###Code
# using data.head() we can see the dataset
###Output
_____no_output_____
###Markdown
Step-3: Exploratory Data AnalysisIt is common for data scientists to spend a majority of their time exploring and cleaning data, but approaching this as an opportunity to invest in your model (instead of viewing it as just another chore on your to-do list) will yield big dividends later on in the data science process.Performing thorough exploratory data analysis (EDA) and cleaning the dataset are not only essential steps, but also a great opportunity to lay the foundation for a strong machine learning model. Seeing the shape and size of the dataset
###Code
# Using df.shape() and df.size() will give you the shape and size of the dataset
###Output
_____no_output_____
###Markdown
Describing the dataset
###Code
# Using df.describe() will describe the dataset
###Output
_____no_output_____
###Markdown
Seeing the non null values
###Code
# Using df.info() we can see number of non null values present
###Output
_____no_output_____
###Markdown
Seeing the list of columns
###Code
# Using df.columns we can see the list of all the columns in the dataset
###Output
_____no_output_____
###Markdown
Seeing the null values in the dataset
###Code
# Using df.isnull().sum will give you the null values
###Output
_____no_output_____
###Markdown
Visualising the data using seaborn libraryThere are different types of plots like bar plot, box plot, scatter plot etc. A scatter plot is very useful when we are analyzing the relationship between 2 features on the x and y axes. In the seaborn library we have the pairplot function, which is very useful to scatter plot all the features at once instead of plotting them individually.
###Code
# Using sns.pairplot(df) we can visualize relationship between the features
###Output
_____no_output_____
###Markdown
Visualizing the Heat Map
###Code
# Using sns.heatmap(dataset.corr(),annot=True) we plot the heat map
###Output
_____no_output_____
###Markdown
Step-4: Performing Feature EngineeringWe will be performing the following 3 steps:1.Standard Scaler2.Normalization3.Principal Component Analysis Standard Scaler:Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance).For instance many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger that others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expectedStandardize features by removing the mean and scaling to unit variance.The standard score of a sample x is calculated as:z = (x - u) / swhere u is the mean of the training samples, and s is the standard deviation of the training samples.
###Code
# to implement the standard scaler first create an object for StandardScaler
# now perform scaler.fit() to fit the data
# now perform scaler.transform() to get the scaled data
###Output
_____no_output_____
###Markdown
Normalization:Normalization is used to scale the data of an attribute so that it falls in a smaller range, such as -1.0 to 1.0 or 0.0 to 1.0. It is generally useful for classification algorithms.Normalization is generally required when we are dealing with attributes on different scales, otherwise it may lead to a dilution in the effectiveness of an equally important attribute (on a lower scale) because other attributes have values on a larger scale.In simple words, when multiple attributes are there but attributes have values on different scales, this may lead to poor data models while performing data mining operations. So they are normalized to bring all the attributes on the same scale.
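Continuing the sketch above, scikit-learn's `normalize` rescales each sample to unit norm (assuming `scaled_data` from the previous step):

```python
# each row is rescaled to unit L2 norm
normalized_data = normalize(scaled_data)
```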
###Code
# now implement the normalization to normalize the data
###Output
_____no_output_____
###Markdown
Principal Component AnalysisPrincipal Components Analysis is an unsupervised learning class of statistical techniques used to explain data in high dimension using a smaller number of variables called the principal components.Assume we have a set X made up of n measurements each represented by a set of p features, X1, X2, … , Xp. If we want to plot this data in a 2-dimensional plane, we can plot n measurements using two features at a time. If the number of features is more than 3 or 4 then plotting this in two dimensions becomes a challenge, as the number of plots would be p(p-1)/2, which would be hard to plot.We would like to visualize this data in two dimensions without losing the information contained in the data. This is what PCA allows us to do.
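A minimal sketch of reducing the normalized data to two components for plotting (the names `pca_data` and `pca_df` are illustrative; `normalized_data` comes from the step above):

```python
# project onto the first two principal components
pca = PCA(n_components=2)
pca_data = pca.fit_transform(normalized_data)

# wrap in a DataFrame so it is easy to inspect and plot
pca_df = pd.DataFrame(pca_data, columns=['P1', 'P2'])
```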
###Code
# so to implement PCA first we need to create an object for PCA and also need to mention that how many dimensions we need finally
# now do pca.fit(data) to fit the data
# now do pca.transform(data) to transform the higher-dimensionality data to lower dimensions
# now after implementing pca, use pd.DataFrame(data) to convert the new data into a Data Frame else
###Output
_____no_output_____
###Markdown
Step-5: Building The modelThe algorithm works as follows:1. First we initialize k points, called means, randomly.2. We categorize each item to its closest mean and we update the mean's coordinates, which are the averages of the items categorized in that mean so far.3. We repeat the process for a given number of iterations and at the end, we have our clusters.So, basically we will be following two steps:-1. Implementing the Elbow method, which will return the optimal number of clusters to be formed.2. We will implement the K-Means algorithm to create the clusters. Implementing Elbow MethodIn cluster analysis, the elbow method is a heuristic used in determining the number of clusters in a data set. The method consists of plotting the explained variation as a function of the number of clusters, and picking the elbow of the curve as the number of clusters to use. The same method can be used to choose the number of parameters in other data-driven models, such as the number of principal components to describe a data set.A fundamental step for any unsupervised algorithm is to determine the optimal number of clusters into which the data may be clustered. The Elbow Method is one of the most popular methods to determine this optimal value of k.To determine the optimal number of clusters, we have to select the value of k at the "elbow", i.e. the point after which the distortion/inertia starts decreasing in a linear fashion.
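The placeholder cell below outlines the loop; a minimal sketch of the elbow computation (assuming `pca_df` from the PCA sketch above) could look like this:

```python
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, random_state=42)
    kmeans.fit(pca_df)
    wcss.append(kmeans.inertia_)

# the "elbow" of this curve suggests the number of clusters to use
plt.plot(range(1, 11), wcss, marker='o')
plt.xlabel('Number of clusters k')
plt.ylabel('WCSS (inertia)')
plt.show()
```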
###Code
# to implement the elbow method first create an empty list name it as wcss
# now initiate a for loop ranging between (1,11) and implement k-means clustering for every i number of clusters
# keep appending the empty list with the kmeans.inertia_ values
# now plot the graph between the range(1,11) and the wcss
# from the plot we determine the optimal value of k
###Output
_____no_output_____
###Markdown
Implementing K-Means:K-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. It is popular for cluster analysis in data mining. k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids.The above algorithm in pseudocode: Initialize k means with random values. For a given number of iterations: iterate through the items. For the elbow step above, use KMeans with a different number of clusters each time and append the kmeans.inertia_ values to the list.
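The placeholder cells below describe fitting the model and, in Step-6, plotting it; a minimal sketch of both is shown here, assuming `pca_df` from earlier and using k=3 only as a stand-in for whatever value the elbow plot suggests:

```python
# fit the final model with the k chosen from the elbow plot (3 is only an assumption here)
kmeans = KMeans(n_clusters=3, random_state=42)
labels = kmeans.fit_predict(pca_df)

# scatter the two principal components coloured by cluster, plus the centroids
plt.scatter(pca_df['P1'], pca_df['P2'], c=labels, cmap='viridis', s=10)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            c='red', marker='x', s=100)
plt.show()
```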
###Code
# now with the optimal k value implement the K-Means Clustering Algorithm
# now using kmeans.fit_predict to predict that which data belong to which cluster
###Output
_____no_output_____
###Markdown
Step-6: Visualising the result
###Code
# to visualise the final result use plt.scatter() with respective arguments to view the created clusters
# also plot the centroids of the respective clusters using kmeans.cluster_centers_
# finally you will be able to look at the result using plt.show()
###Output
_____no_output_____ |
Classification/K_nearest_neighbors.ipynb | ###Markdown
K-Nearest Neighbors (K-NN) Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Social_Network_Ads.csv')
x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state =0)
print(x_test)
###Output
[[ 30 87000]
[ 38 50000]
[ 35 75000]
[ 30 79000]
[ 35 50000]
[ 27 20000]
[ 31 15000]
[ 36 144000]
[ 18 68000]
[ 47 43000]
[ 30 49000]
[ 28 55000]
[ 37 55000]
[ 39 77000]
[ 20 86000]
[ 32 117000]
[ 37 77000]
[ 19 85000]
[ 55 130000]
[ 35 22000]
[ 35 47000]
[ 47 144000]
[ 41 51000]
[ 47 105000]
[ 23 28000]
[ 49 141000]
[ 28 87000]
[ 29 80000]
[ 37 62000]
[ 32 86000]
[ 21 88000]
[ 37 79000]
[ 57 60000]
[ 37 53000]
[ 24 58000]
[ 18 52000]
[ 22 81000]
[ 34 43000]
[ 31 34000]
[ 49 36000]
[ 27 88000]
[ 41 52000]
[ 27 84000]
[ 35 20000]
[ 43 112000]
[ 27 58000]
[ 37 80000]
[ 52 90000]
[ 26 30000]
[ 49 86000]
[ 57 122000]
[ 34 25000]
[ 35 57000]
[ 34 115000]
[ 59 88000]
[ 45 32000]
[ 29 83000]
[ 26 80000]
[ 49 28000]
[ 23 20000]
[ 32 18000]
[ 60 42000]
[ 19 76000]
[ 36 99000]
[ 19 26000]
[ 60 83000]
[ 24 89000]
[ 27 58000]
[ 40 47000]
[ 42 70000]
[ 32 150000]
[ 35 77000]
[ 22 63000]
[ 45 22000]
[ 27 89000]
[ 18 82000]
[ 42 79000]
[ 40 60000]
[ 53 34000]
[ 47 107000]
[ 58 144000]
[ 59 83000]
[ 24 55000]
[ 26 35000]
[ 58 38000]
[ 42 80000]
[ 40 75000]
[ 59 130000]
[ 46 41000]
[ 41 60000]
[ 42 64000]
[ 37 146000]
[ 23 48000]
[ 25 33000]
[ 24 84000]
[ 27 96000]
[ 23 63000]
[ 48 33000]
[ 48 90000]
[ 42 104000]]
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
print(x_train)
###Output
[[ 0.58164944 -0.88670699]
[-0.60673761 1.46173768]
[-0.01254409 -0.5677824 ]
[-0.60673761 1.89663484]
[ 1.37390747 -1.40858358]
[ 1.47293972 0.99784738]
[ 0.08648817 -0.79972756]
[-0.01254409 -0.24885782]
[-0.21060859 -0.5677824 ]
[-0.21060859 -0.19087153]
[-0.30964085 -1.29261101]
[-0.30964085 -0.5677824 ]
[ 0.38358493 0.09905991]
[ 0.8787462 -0.59677555]
[ 2.06713324 -1.17663843]
[ 1.07681071 -0.13288524]
[ 0.68068169 1.78066227]
[-0.70576986 0.56295021]
[ 0.77971394 0.35999821]
[ 0.8787462 -0.53878926]
[-1.20093113 -1.58254245]
[ 2.1661655 0.93986109]
[-0.01254409 1.22979253]
[ 0.18552042 1.08482681]
[ 0.38358493 -0.48080297]
[-0.30964085 -0.30684411]
[ 0.97777845 -0.8287207 ]
[ 0.97777845 1.8676417 ]
[-0.01254409 1.25878567]
[-0.90383437 2.27354572]
[-1.20093113 -1.58254245]
[ 2.1661655 -0.79972756]
[-1.39899564 -1.46656987]
[ 0.38358493 2.30253886]
[ 0.77971394 0.76590222]
[-1.00286662 -0.30684411]
[ 0.08648817 0.76590222]
[-1.00286662 0.56295021]
[ 0.28455268 0.07006676]
[ 0.68068169 -1.26361786]
[-0.50770535 -0.01691267]
[-1.79512465 0.35999821]
[-0.70576986 0.12805305]
[ 0.38358493 0.30201192]
[-0.30964085 0.07006676]
[-0.50770535 2.30253886]
[ 0.18552042 0.04107362]
[ 1.27487521 2.21555943]
[ 0.77971394 0.27301877]
[-0.30964085 0.1570462 ]
[-0.01254409 -0.53878926]
[-0.21060859 0.1570462 ]
[-0.11157634 0.24402563]
[-0.01254409 -0.24885782]
[ 2.1661655 1.11381995]
[-1.79512465 0.35999821]
[ 1.86906873 0.12805305]
[ 0.38358493 -0.13288524]
[-1.20093113 0.30201192]
[ 0.77971394 1.37475825]
[-0.30964085 -0.24885782]
[-1.6960924 -0.04590581]
[-1.00286662 -0.74174127]
[ 0.28455268 0.50496393]
[-0.11157634 -1.06066585]
[-1.10189888 0.59194336]
[ 0.08648817 -0.79972756]
[-1.00286662 1.54871711]
[-0.70576986 1.40375139]
[-1.29996338 0.50496393]
[-0.30964085 0.04107362]
[-0.11157634 0.01208048]
[-0.30964085 -0.88670699]
[ 0.8787462 -1.3505973 ]
[-0.30964085 2.24455257]
[ 0.97777845 1.98361427]
[-1.20093113 0.47597078]
[-1.29996338 0.27301877]
[ 1.37390747 1.98361427]
[ 1.27487521 -1.3505973 ]
[-0.30964085 -0.27785096]
[-0.50770535 1.25878567]
[-0.80480212 1.08482681]
[ 0.97777845 -1.06066585]
[ 0.28455268 0.30201192]
[ 0.97777845 0.76590222]
[-0.70576986 -1.49556302]
[-0.70576986 0.04107362]
[ 0.48261718 1.72267598]
[ 2.06713324 0.18603934]
[-1.99318916 -0.74174127]
[-0.21060859 1.40375139]
[ 0.38358493 0.59194336]
[ 0.8787462 -1.14764529]
[-1.20093113 -0.77073441]
[ 0.18552042 0.24402563]
[ 0.77971394 -0.30684411]
[ 2.06713324 -0.79972756]
[ 0.77971394 0.12805305]
[-0.30964085 0.6209365 ]
[-1.00286662 -0.30684411]
[ 0.18552042 -0.3648304 ]
[ 2.06713324 2.12857999]
[ 1.86906873 -1.26361786]
[ 1.37390747 -0.91570013]
[ 0.8787462 1.25878567]
[ 1.47293972 2.12857999]
[-0.30964085 -1.23462472]
[ 1.96810099 0.91086794]
[ 0.68068169 -0.71274813]
[-1.49802789 0.35999821]
[ 0.77971394 -1.3505973 ]
[ 0.38358493 -0.13288524]
[-1.00286662 0.41798449]
[-0.01254409 -0.30684411]
[-1.20093113 0.41798449]
[-0.90383437 -1.20563157]
[-0.11157634 0.04107362]
[-1.59706014 -0.42281668]
[ 0.97777845 -1.00267957]
[ 1.07681071 -1.20563157]
[-0.01254409 -0.13288524]
[-1.10189888 -1.52455616]
[ 0.77971394 -1.20563157]
[ 0.97777845 2.07059371]
[-1.20093113 -1.52455616]
[-0.30964085 0.79489537]
[ 0.08648817 -0.30684411]
[-1.39899564 -1.23462472]
[-0.60673761 -1.49556302]
[ 0.77971394 0.53395707]
[-0.30964085 -0.33583725]
[ 1.77003648 -0.27785096]
[ 0.8787462 -1.03167271]
[ 0.18552042 0.07006676]
[-0.60673761 0.8818748 ]
[-1.89415691 -1.40858358]
[-1.29996338 0.59194336]
[-0.30964085 0.53395707]
[-1.00286662 -1.089659 ]
[ 1.17584296 -1.43757673]
[ 0.18552042 -0.30684411]
[ 1.17584296 -0.74174127]
[-0.30964085 0.07006676]
[ 0.18552042 2.09958685]
[ 0.77971394 -1.089659 ]
[ 0.08648817 0.04107362]
[-1.79512465 0.12805305]
[-0.90383437 0.1570462 ]
[-0.70576986 0.18603934]
[ 0.8787462 -1.29261101]
[ 0.18552042 -0.24885782]
[-0.4086731 1.22979253]
[-0.01254409 0.30201192]
[ 0.38358493 0.1570462 ]
[ 0.8787462 -0.65476184]
[ 0.08648817 0.1570462 ]
[-1.89415691 -1.29261101]
[-0.11157634 0.30201192]
[-0.21060859 -0.27785096]
[ 0.28455268 -0.50979612]
[-0.21060859 1.6067034 ]
[ 0.97777845 -1.17663843]
[-0.21060859 1.63569655]
[ 1.27487521 1.8676417 ]
[-1.10189888 -0.3648304 ]
[-0.01254409 0.04107362]
[ 0.08648817 -0.24885782]
[-1.59706014 -1.23462472]
[-0.50770535 -0.27785096]
[ 0.97777845 0.12805305]
[ 1.96810099 -1.3505973 ]
[ 1.47293972 0.07006676]
[-0.60673761 1.37475825]
[ 1.57197197 0.01208048]
[-0.80480212 0.30201192]
[ 1.96810099 0.73690908]
[-1.20093113 -0.50979612]
[ 0.68068169 0.27301877]
[-1.39899564 -0.42281668]
[ 0.18552042 0.1570462 ]
[-0.50770535 -1.20563157]
[ 0.58164944 2.01260742]
[-1.59706014 -1.49556302]
[-0.50770535 -0.53878926]
[ 0.48261718 1.83864855]
[-1.39899564 -1.089659 ]
[ 0.77971394 -1.37959044]
[-0.30964085 -0.42281668]
[ 1.57197197 0.99784738]
[ 0.97777845 1.43274454]
[-0.30964085 -0.48080297]
[-0.11157634 2.15757314]
[-1.49802789 -0.1038921 ]
[-0.11157634 1.95462113]
[-0.70576986 -0.33583725]
[-0.50770535 -0.8287207 ]
[ 0.68068169 -1.37959044]
[-0.80480212 -1.58254245]
[-1.89415691 -1.46656987]
[ 1.07681071 0.12805305]
[ 0.08648817 1.51972397]
[-0.30964085 0.09905991]
[ 0.08648817 0.04107362]
[-1.39899564 -1.3505973 ]
[ 0.28455268 0.07006676]
[-0.90383437 0.38899135]
[ 1.57197197 -1.26361786]
[-0.30964085 -0.74174127]
[-0.11157634 0.1570462 ]
[-0.90383437 -0.65476184]
[-0.70576986 -0.04590581]
[ 0.38358493 -0.45180983]
[-0.80480212 1.89663484]
[ 1.37390747 1.28777882]
[ 1.17584296 -0.97368642]
[ 1.77003648 1.83864855]
[-0.90383437 -0.24885782]
[-0.80480212 0.56295021]
[-1.20093113 -1.5535493 ]
[-0.50770535 -1.11865214]
[ 0.28455268 0.07006676]
[-0.21060859 -1.06066585]
[ 1.67100423 1.6067034 ]
[ 0.97777845 1.78066227]
[ 0.28455268 0.04107362]
[-0.80480212 -0.21986468]
[-0.11157634 0.07006676]
[ 0.28455268 -0.19087153]
[ 1.96810099 -0.65476184]
[-0.80480212 1.3457651 ]
[-1.79512465 -0.59677555]
[-0.11157634 0.12805305]
[ 0.28455268 -0.30684411]
[ 1.07681071 0.56295021]
[-1.00286662 0.27301877]
[ 1.47293972 0.35999821]
[ 0.18552042 -0.3648304 ]
[ 2.1661655 -1.03167271]
[-0.30964085 1.11381995]
[-1.6960924 0.07006676]
[-0.01254409 0.04107362]
[ 0.08648817 1.05583366]
[-0.11157634 -0.3648304 ]
[-1.20093113 0.07006676]
[-0.30964085 -1.3505973 ]
[ 1.57197197 1.11381995]
[-0.80480212 -1.52455616]
[ 0.08648817 1.8676417 ]
[-0.90383437 -0.77073441]
[-0.50770535 -0.77073441]
[-0.30964085 -0.91570013]
[ 0.28455268 -0.71274813]
[ 0.28455268 0.07006676]
[ 0.08648817 1.8676417 ]
[-1.10189888 1.95462113]
[-1.6960924 -1.5535493 ]
[-1.20093113 -1.089659 ]
[-0.70576986 -0.1038921 ]
[ 0.08648817 0.09905991]
[ 0.28455268 0.27301877]
[ 0.8787462 -0.5677824 ]
[ 0.28455268 -1.14764529]
[-0.11157634 0.67892279]
[ 2.1661655 -0.68375498]
[-1.29996338 -1.37959044]
[-1.00286662 -0.94469328]
[-0.01254409 -0.42281668]
[-0.21060859 -0.45180983]
[-1.79512465 -0.97368642]
[ 1.77003648 0.99784738]
[ 0.18552042 -0.3648304 ]
[ 0.38358493 1.11381995]
[-1.79512465 -1.3505973 ]
[ 0.18552042 -0.13288524]
[ 0.8787462 -1.43757673]
[-1.99318916 0.47597078]
[-0.30964085 0.27301877]
[ 1.86906873 -1.06066585]
[-0.4086731 0.07006676]
[ 1.07681071 -0.88670699]
[-1.10189888 -1.11865214]
[-1.89415691 0.01208048]
[ 0.08648817 0.27301877]
[-1.20093113 0.33100506]
[-1.29996338 0.30201192]
[-1.00286662 0.44697764]
[ 1.67100423 -0.88670699]
[ 1.17584296 0.53395707]
[ 1.07681071 0.53395707]
[ 1.37390747 2.331532 ]
[-0.30964085 -0.13288524]
[ 0.38358493 -0.45180983]
[-0.4086731 -0.77073441]
[-0.11157634 -0.50979612]
[ 0.97777845 -1.14764529]
[-0.90383437 -0.77073441]
[-0.21060859 -0.50979612]
[-1.10189888 -0.45180983]
[-1.20093113 1.40375139]]
###Markdown
Training the K-NN model on the Training set
###Code
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p =2)
classifier.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting a new result
###Code
print(classifier.predict(sc.transform([[30, 87000]])))
###Output
[0]
###Markdown
Predicting the Test set results
###Code
y_pred = classifier.predict(x_test)
print(np.concatenate((y_pred.reshape(len(y_pred), 1), y_test.reshape(len(y_test), 1)), 1))
###Output
[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[1 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[0 0]
[0 0]
[1 1]
[0 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[1 1]]
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[64 4]
[ 3 29]]
###Markdown
Visualising the Training set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(x_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 1),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 1))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('K-NN (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
Visualising the Test set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(x_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 1),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 1))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('K-NN (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
|
dev/08_vision_core.ipynb | ###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
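# Example usage (sketch): load_image(TEST_IMAGE, mode='L') would return a single-channel ('L') PIL image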
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, sz=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), sz=sz)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
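# COCO stores boxes as [x, y, width, height]; convert to [x_min, y_min, x_max, y_max]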
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, sz=None)->None: return cls(tensor(x).view(-1, 4).float(), sz=sz)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with y axis being before x.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
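# e.g. a 30x20 (width x height) RGB PIL image becomes a uint8 tensor of shape (3, 20, 30)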
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
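For example, with the default `do_scale=True`, the point `(9, 17)` in a 28x35 image maps to `(9*2/28 - 1, 17*2/35 - 1)`, i.e. roughly `(-0.36, -0.03)`; this is exactly the value checked in the `pnt_tdl` test further down (written there as `9/14-1, 17/17.5-1`).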
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, sz=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, sz=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = getattr(x, '_meta', {}).get('sz', None)
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `sz=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
pnt_tdl.after_item.c
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, sz=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), sz=sz)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, sz=None)->None: return cls(tensor(x).view(-1, 4).float(), sz=sz)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with y axis being before x.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, sz=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, sz=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = getattr(x, '_meta', {}).get('sz', None)
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `sz=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
pnt_tdl.after_item.c
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
Converted xse_resnext.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
default_batch_tfms = IntToFloatTensor
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, sz=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), sz=sz)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, sz=None)->None: return cls(tensor(x).view(-1, 4).float(), sz=sz)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with y axis being before x.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
@classmethod
def create(cls, x): return cls(x)
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `ImageResizer` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
class ImageResizer(Transform):
order=10
"Resize image to `size` using `resample"
def __init__(self, size, resample=Image.BILINEAR):
if not is_listy(size): size=(size,size)
self.size,self.resample = (size[1],size[0]),resample
def encodes(self, o:PILImage): return o.resize(size=self.size, resample=self.resample)
def encodes(self, o:PILMask): return o.resize(size=self.size, resample=Image.NEAREST)
###Output
_____no_output_____
###Markdown
`size` can either be one integer (in which case images are resized to a square) or a tuple `height,width`.> Note: This is the usual convention for arrays or in PyTorch, but it's not the usual convention for PIL Image, which uses the opposite order (`width,height`).
###Code
f = ImageResizer(14)
test_eq(f(mnist_img).size, (14,14))
test_eq(f(mask).size, (14,14))
f = ImageResizer((32,28))
test_eq(f(mnist_img).size, (28,32))#PIL has width first
test_eq(array(f(mnist_img)).shape, (32,28))#But numpy has height first and that is our convention
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
PILMask.create.loss_func = CrossEntropyLossFlat(axis=1)
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, sz=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, sz=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order,loss_func = 1,MSELossFlat()
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = getattr(x, '_meta', {}).get('sz', None)
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `sz=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
TensorPoint.default_item_tfms = PointScaler
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
pnt_tdl.after_item.c
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabels(MultiCategory):
create = MultiCategorize(add_na=True)
default_type_tfms = None
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
BBoxLabels.default_item_tfms = BBoxLabeler
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
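# For contrast with the patched `shape` above: PIL reports size as (width,height).
test_eq(im.size, (30,20))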
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
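# `reshape` takes (h,w) while PIL's `size` stays (w,h); a small check of that swap:
test_eq(im.reshape(12,10).size, (10,12))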
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
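# Gloss on the `max_px=300` case above (pure arithmetic): 600 px scaled by sqrt(300/600) gives
# round(20*0.707)=14 by round(30*0.707)=21, i.e. the 294 pixels tested above.
test_eq(im.resize_max(max_px=300).shape, (14,21))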
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision, along with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
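# `create` also accepts arrays (see the ndarray branch above); a hedged check with a blank uint8 array:
im_arr = PILImage.create(np.zeros((20,30,3), dtype=np.uint8))
test_eq(type(im_arr), PILImage)
test_eq(im_arr.shape, (20,30))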
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)  # attach a default loss for mask targets (assumption: picked up from the type transform)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, sz=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), sz=sz)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or by internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
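# The two returned lists line up one-to-one: `images[i]` goes with the (bboxes, labels) pair in `lbl_bbox[i]`.
test_eq(len(images), len(lbl_bbox))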
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, sz=None)->None: return cls(tensor(x).view(-1, 4).float(), sz=sz)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements, and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points, with the y axis coming before the x axis.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or as tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, sz=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, sz=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = getattr(x, '_meta', {}).get('sz', None)
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `sz=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds them in it. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
pnt_tdl.after_item.c
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
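# Hedged round-trip check using only the _scale_pnts/_unscale_pnts helpers defined above:
# scaling a point to (-1,1) coordinates and unscaling it should give back the original point.
_p = TensorPoint.create([[9.,17.]])
test_close(_unscale_pnts(_scale_pnts(_p, [28,35]), [28,35]), _p)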
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
a,b = L([5,2]).map(lambda x: math.floor(x * 4/3))
a,b
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision, along with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
default_batch_tfms = ByteToFloatTensor
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = fns[0]; mnist_fn
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, sz=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), sz=sz)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or by internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, sz=None)->None: return cls(tensor(x).view(-1, 4).float(), sz=sz)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements, and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points, with the y axis coming before the x axis.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#bg': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
@classmethod
def create(cls, x): return cls(x)
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or as tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `ImageResizer` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
class ImageResizer(Transform):
order=10
"Resize image to `size` using `resample"
def __init__(self, size, resample=Image.BILINEAR):
if not is_listy(size): size=(size,size)
self.size,self.resample = (size[1],size[0]),resample
def encodes(self, o:PILImage): return o.resize(size=self.size, resample=self.resample)
def encodes(self, o:PILMask): return o.resize(size=self.size, resample=Image.NEAREST)
###Output
_____no_output_____
###Markdown
`size` can either be one integer (in which case images are resized to a square) or a tuple `height,width`.> Note: This is the usual convention for numpy arrays or PyTorch tensors, but it's not the usual convention for PIL Images, which use the opposite (width,height) order.
###Code
f = ImageResizer(14)
test_eq(f(mnist_img).size, (14,14))
test_eq(f(mask).size, (14,14))
f = ImageResizer((32,28))
test_eq(f(mnist_img).size, (28,32))#PIL has width first
test_eq(array(f(mnist_img)).shape, (32,28))#But numpy has height first and that is our convention
# export
def image2byte(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = torch.ByteTensor(torch.ByteStorage.from_buffer(img.tobytes()))
w,h = img.size
return res.view(h,w,-1).permute(2,0,1)
#export
@ToTensor
def encodes(self, o:PILImage): return TensorImage(image2byte(o))
@ToTensor
def encodes(self, o:PILImageBW): return TensorImageBW(image2byte(o))
@ToTensor
def encodes(self, o:PILMask): return TensorMask(image2byte(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
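test_eq(type(img), TensorImageBW)  # the pipeline should produce the black-and-white tensor type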
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
PILMask.create.loss_func = CrossEntropyLossFlat(axis=1)
##export
#def _scale_pnts(x, y, do_scale=True, y_first=False):
# if y_first: y = y.flip(1)
# sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
# return y * 2/tensor(sz).float() - 1 if do_scale else y
#
#def _unscale_pnts(x, y):
# sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
# return (y+1) * tensor(sz).float()/2
## export
##TODO: Transform on a whole tuple lose types, see if we can simplify that?
#class PointScaler(ItemTransform):
# "Scale a tensor representing points"
# def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
# def encodes(self, o): return (o[0],TensorPoint(_scale_pnts(*o, self.do_scale, self.y_first)))
# def decodes(self, o): return (o[0],TensorPoint(_unscale_pnts(*o)))
#
#TensorPoint.default_item_tfms = PointScaler
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, sz=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, sz=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
loss_func = MSELossFlat()
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = getattr(x, '_meta', {}).get('sz', None)
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `sz=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
TensorPoint.default_item_tfms = PointScaler
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds them in it. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
pnt_tdl.after_item.c
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
# export
#class BBoxScaler(PointScaler):
# "Scale a tensor representing bounding boxes"
# def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
# def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
#
# def encodes(self, x:(BBox,TensorBBox)):
# pnts = x.bbox.view(-1,2)
# scaled_bb = _scale_pnts(pnts, self._get_sz(pnts), self.do_scale, self.y_first)
# return TensorBBox((scaled_bb.view(-1,4),x.lbl))
#
# def decodes(self, x:(BBox,TensorBBox)):
# scaled_bb = _unscale_pnts(x.bbox.view(-1,2), self._get_sz(x.bbox.view(-1,2)))
# return TensorBBox((scaled_bb.view(-1,4), x.lbl))
# export
#class BBoxCategorize(Transform):
# "Reversible transform of category string to `vocab` id"
# order,state_args=1,'vocab'
# def __init__(self, vocab=None):
# self.vocab = vocab
# self.o2i = None if vocab is None else {v:k for k,v in enumerate(vocab)}
#
# def setups(self, dsrc):
# if not dsrc: return
# vals = set()
# for bb in dsrc: vals = vals.union(set(bb.lbl))
# self.vocab,self.otoi = uniqueify(list(vals), sort=True, bidir=True, start='#bg')
#
# def encodes(self, o:BBox):
# return TensorBBox.create((o.bbox,tensor([self.otoi[o_] for o_ in o.lbl if o_ in self.otoi])))
# def decodes(self, o:TensorBBox):
# return BBox((o.bbox,[self.vocab[i_] for i_ in o.lbl]))
#
#BBox.default_type_tfms,BBox.default_item_tfms = BBoxCategorize,BBoxScaler
#export
#TODO tests
#def bb_pad(samples, pad_idx=0):
# "Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`."
# max_len = max([len(s[1][1]) for s in samples])
# def _f(img,bbox,lbl):
# bbox = torch.cat([bbox,bbox.new_zeros(max_len-bbox.shape[0], 4)])
# lbl = torch.cat([lbl, lbl .new_zeros(max_len-lbl .shape[0])+pad_idx])
# return img,TensorBBox((bbox,lbl))
# return [_f(x,*y) for x,y in samples]
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
# export
#TODO: merge with padding
def clip_remove_empty(bbox, label):
"Clip bounding boxes with image border and label background the empty ones."
bbox = torch.clamp(bbox, -1, 1)
empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) < 0.)
return (bbox[~empty], label[~empty])
bb = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
bb,lbl = clip_remove_empty(bb, tensor([1,2,3,2]))
test_eq(bb, tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(lbl, tensor([1,2,2]))
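# One more hedged check of clip_remove_empty: a box whose left edge ends up right of its right edge
# after clamping has negative area, so the box and its label are both dropped.
_bb,_lbl = clip_remove_empty(tensor([[1.5, 0., 0.5, 1.]]), tensor([3]))
test_eq(len(_lbl), 0)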
#export
#TODO tests
def bb_pad(samples, pad_idx=0):
"Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`."
samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
max_len = max([len(s[2]) for s in samples])
def _f(img,bbox,lbl):
bbox = torch.cat([bbox,bbox.new_zeros(max_len-bbox.shape[0], 4)])
lbl = torch.cat([lbl, lbl .new_zeros(max_len-lbl .shape[0])+pad_idx])
return img,bbox,lbl
return [_f(*s) for s in samples]
img1,img2 = TensorImage(torch.randn(16,16,3)),TensorImage(torch.randn(16,16,3))
bb1 = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
lbl1 = tensor([1, 2, 3, 2])
bb2 = tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]])
lbl2 = tensor([2, 2])
samples = [(img1, bb1, lbl1), (img2, bb2, lbl2)]
res = bb_pad(samples)
non_empty = tensor([True,True,False,True])
test_eq(res[0][0], img1)
test_eq(res[0][1], tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(res[0][2], tensor([1,2,2]))
test_eq(res[1][0], img2)
test_eq(res[1][1], tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5], [0,0,0,0]]))
test_eq(res[1][2], tensor([2,2,0]))
#export
TensorBBox.dbunch_kwargs = {'before_batch': bb_pad}
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
###Output
_____no_output_____
###Markdown
Show methods
###Code
#export
def _get_grid(n, rows=None, cols=None, add_vert=0, figsize=None, double=False):
    "Return a flattened array of `n` matplotlib axes arranged in a grid, turning off any extra axes"
rows = rows or int(np.ceil(math.sqrt(n)))
cols = cols or int(np.ceil(n/rows))
if double: cols*=2 ; n*=2
figsize = (cols*3, rows*3+add_vert) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
axs = axs.flatten()
for ax in axs[n:]: ax.set_axis_off()
return axs
#export
@typedispatch
def show_batch(x:TensorImage, y, its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, figsize=figsize)
ctxs = default_show_batch(x, y, its, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y, its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
ctxs = default_show_results(x, y, its, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:TensorCategory, its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
for i in range(2):
ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(its.itemgot(i),ctxs,range(max_n))]
ctxs = [r.show(ctx=c, color='green' if b==r else 'red', **kwargs)
for b,r,c,_ in zip(its.itemgot(1),its.itemgot(2),ctxs,range(max_n))]
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:(TensorImageBase, TensorPoint, TensorBBox), its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize, double=True)
for i in range(2):
ctxs[::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(its.itemgot(i),ctxs[::2],range(max_n))]
for i in [0,2]:
ctxs[1::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(its.itemgot(i),ctxs[1::2],range(max_n))]
return ctxs
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Useful stats
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
###Output
_____no_output_____
###Markdown
Helpers
###Code
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
a,b = L([5,2]).map(lambda x: math.floor(x * 4/3))
a,b
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision, along with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
default_batch_tfms = ByteToFloatTensor
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = fns[0]; mnist_fn
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float())
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
ctx.scatter(self[:, 0], self[:, 1], **{**self._show_args, **kwargs})
return ctx
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or by internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class BBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#bg': _draw_rect(ctx, b, hw=False, text=l)
return ctx
@classmethod
def create(cls, x): return cls(x)
bbox,lbl = add_props(lambda i,self: self[i])
# export
class TensorBBox(Tuple):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x): return cls(tensor(x[0]).view(-1, 4).float(), x[1])
bbox,lbl = add_props(lambda i,self: self[i])
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements, and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points, with the y axis coming before x.
###Code
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = BBox(bbox)
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or as tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `ImageResizer` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
class ImageResizer(Transform):
order=10
"Resize image to `size` using `resample"
def __init__(self, size, resample=Image.BILINEAR):
if not is_listy(size): size=(size,size)
self.size,self.resample = (size[1],size[0]),resample
def encodes(self, o:PILImage): return o.resize(size=self.size, resample=self.resample)
def encodes(self, o:PILMask): return o.resize(size=self.size, resample=Image.NEAREST)
###Output
_____no_output_____
###Markdown
`size` can either be one integer (in which case images are resized to a square) or a tuple `height,width`.> Note: This is the usual convention for numpy arrays or PyTorch tensors, but it's not the usual convention for PIL Images, which use the opposite order, `width,height`.
###Code
f = ImageResizer(14)
test_eq(f(mnist_img).size, (14,14))
test_eq(f(mask).size, (14,14))
f = ImageResizer((32,28))
test_eq(f(mnist_img).size, (28,32))#PIL has width first
test_eq(array(f(mnist_img)).shape, (32,28))#But numpy has height first and that is our convention
# export
def image2byte(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = torch.ByteTensor(torch.ByteStorage.from_buffer(img.tobytes()))
w,h = img.size
return res.view(h,w,-1).permute(2,0,1)
#export
@ToTensor
def encodes(self, o:PILImage): return TensorImage(image2byte(o))
@ToTensor
def encodes(self, o:PILImageBW): return TensorImageBW(image2byte(o))
@ToTensor
def encodes(self, o:PILMask): return TensorMask(image2byte(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
#export
def _scale_pnts(x, y, do_scale=True,y_first=False):
if y_first: y = y.flip(1)
sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return y * 2/tensor(sz).float() - 1 if do_scale else y
def _unscale_pnts(x, y):
sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return (y+1) * tensor(sz).float()/2
# export
#TODO: Transform on a whole tuple lose types, see if we can simplify that?
class PointScaler(ItemTransform):
"Scale a tensor representing points"
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def encodes(self, o): return (o[0],TensorPoint(_scale_pnts(*o, self.do_scale, self.y_first)))
def decodes(self, o): return (o[0],TensorPoint(_unscale_pnts(*o)))
TensorPoint.default_item_tfms = PointScaler
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
# export
class BBoxScaler(PointScaler):
"Scale a tensor representing bounding boxes"
def encodes(self, o):
x,y = o
scaled_bb = _scale_pnts(x, y.bbox.view(-1,2), self.do_scale, self.y_first)
return (x,TensorBBox((scaled_bb.view(-1,4),y.lbl)))
def decodes(self, o):
x,y = o
scaled_bb = _unscale_pnts(x, y.bbox.view(-1,2))
return (x, TensorBBox((scaled_bb.view(-1,4), y.lbl)))
# export
class BBoxCategorize(Transform):
"Reversible transform of category string to `vocab` id"
order,state_args=1,'vocab'
def __init__(self, vocab=None):
self.vocab = vocab
self.o2i = None if vocab is None else {v:k for k,v in enumerate(vocab)}
def setups(self, dsrc):
if not dsrc: return
vals = set()
for bb in dsrc: vals = vals.union(set(bb.lbl))
self.vocab,self.otoi = uniqueify(list(vals), sort=True, bidir=True, start='#bg')
def encodes(self, o:BBox):
return TensorBBox.create((o.bbox,tensor([self.otoi[o_] for o_ in o.lbl if o_ in self.otoi])))
def decodes(self, o:TensorBBox):
return BBox((o.bbox,[self.vocab[i_] for i_ in o.lbl]))
BBox.default_type_tfms,BBox.default_item_tfms = BBoxCategorize,BBoxScaler
#export
#TODO tests
def bb_pad(samples, pad_idx=0):
"Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`."
max_len = max([len(s[1][1]) for s in samples])
def _f(img,bbox,lbl):
bbox = torch.cat([bbox,bbox.new_zeros(max_len-bbox.shape[0], 4)])
lbl = torch.cat([lbl, lbl .new_zeros(max_len-lbl .shape[0])+pad_idx])
return img,TensorBBox((bbox,lbl))
return [_f(x,*y) for x,y in samples]
def _coco_lbl(x): return BBox(bbox)
tcat = BBoxCategorize()
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, tcat]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxScaler(), ToTensor()])
x,y = coco_tdl.one_batch()
y0 = y[0][0],y[1][0]
#Scaling and flipping properly done
test_close(y0[0], -1+tensor(bbox[0])/64)
test_eq(y0[1], tensor([1,1,1]))
a,b = coco_tdl.decode_batch((x,y))[0]
test_close(b[0], tensor(bbox[0]).float())
test_eq(b[1], bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), BBox)
coco_tdl.show_batch();
###Output
_____no_output_____
###Markdown
Show methods
###Code
#export
def _get_grid(n, rows=None, cols=None, add_vert=0, figsize=None, double=False):
rows = rows or int(np.ceil(math.sqrt(n)))
cols = cols or int(np.ceil(n/rows))
if double: cols*=2 ; n*=2
figsize = (cols*3, rows*3+add_vert) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
axs = axs.flatten()
for ax in axs[n:]: ax.set_axis_off()
return axs
#export
@typedispatch
def show_batch(x:TensorImage, y, its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, figsize=figsize)
ctxs = default_show_batch(x, y, its, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y, its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
ctxs = default_show_results(x, y, its, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:TensorCategory, its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
for i in range(2):
ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(its.itemgot(i),ctxs,range(max_n))]
ctxs = [r.show(ctx=c, color='green' if b==r else 'red', **kwargs)
for b,r,c,_ in zip(its.itemgot(1),its.itemgot(2),ctxs,range(max_n))]
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:TensorImageBase, its, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = _get_grid(min(len(its), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize, double=True)
for i in range(2):
ctxs[::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(its.itemgot(i),ctxs[::2],range(max_n))]
for i in [0,2]:
ctxs[1::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(its.itemgot(i),ctxs[1::2],range(max_n))]
return ctxs
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transform that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
default_batch_tfms = IntToFloatTensor
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, sz=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), sz=sz)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, sz=None)->None: return cls(tensor(x).view(-1, 4).float(), sz=sz)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements, and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points, with the y axis coming before x.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
@classmethod
def create(cls, x): return cls(x)
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or as tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `ImageResizer` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
class ImageResizer(Transform):
order=10
"Resize image to `size` using `resample"
def __init__(self, size, resample=Image.BILINEAR):
if not is_listy(size): size=(size,size)
self.size,self.resample = (size[1],size[0]),resample
def encodes(self, o:PILImage): return o.resize(size=self.size, resample=self.resample)
def encodes(self, o:PILMask): return o.resize(size=self.size, resample=Image.NEAREST)
###Output
_____no_output_____
###Markdown
`size` can either be one integer (in which case images are resized to a square) or a tuple `height,width`.> Note: This is the usual convention for numpy arrays or PyTorch tensors, but it's not the usual convention for PIL Images, which use the opposite order, `width,height`.
###Code
f = ImageResizer(14)
test_eq(f(mnist_img).size, (14,14))
test_eq(f(mask).size, (14,14))
f = ImageResizer((32,28))
test_eq(f(mnist_img).size, (28,32))#PIL has width first
test_eq(array(f(mnist_img)).shape, (32,28))#But numpy has height first and that is our convention
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
PILMask.create.loss_func = CrossEntropyLossFlat(axis=1)
##export
#def _scale_pnts(x, y, do_scale=True, y_first=False):
# if y_first: y = y.flip(1)
# sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
# return y * 2/tensor(sz).float() - 1 if do_scale else y
#
#def _unscale_pnts(x, y):
# sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
# return (y+1) * tensor(sz).float()/2
## export
##TODO: Transform on a whole tuple lose types, see if we can simplify that?
#class PointScaler(ItemTransform):
# "Scale a tensor representing points"
# def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
# def encodes(self, o): return (o[0],TensorPoint(_scale_pnts(*o, self.do_scale, self.y_first)))
# def decodes(self, o): return (o[0],TensorPoint(_unscale_pnts(*o)))
#
#TensorPoint.default_item_tfms = PointScaler
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, sz=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, sz=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order,loss_func = 1,MSELossFlat()
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = getattr(x, '_meta', {}).get('sz', None)
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `sz=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
TensorPoint.default_item_tfms = PointScaler
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
pnt_tdl.after_item.c
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
# export
#class BBoxScaler(PointScaler):
# "Scale a tensor representing bounding boxes"
# def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
# def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
#
# def encodes(self, x:(BBox,TensorBBox)):
# pnts = x.bbox.view(-1,2)
# scaled_bb = _scale_pnts(pnts, self._get_sz(pnts), self.do_scale, self.y_first)
# return TensorBBox((scaled_bb.view(-1,4),x.lbl))
#
# def decodes(self, x:(BBox,TensorBBox)):
# scaled_bb = _unscale_pnts(x.bbox.view(-1,2), self._get_sz(x.bbox.view(-1,2)))
# return TensorBBox((scaled_bb.view(-1,4), x.lbl))
# export
#class BBoxCategorize(Transform):
# "Reversible transform of category string to `vocab` id"
# order,state_args=1,'vocab'
# def __init__(self, vocab=None):
# self.vocab = vocab
# self.o2i = None if vocab is None else {v:k for k,v in enumerate(vocab)}
#
# def setups(self, dsrc):
# if not dsrc: return
# vals = set()
# for bb in dsrc: vals = vals.union(set(bb.lbl))
# self.vocab,self.otoi = uniqueify(list(vals), sort=True, bidir=True, start='#bg')
#
# def encodes(self, o:BBox):
# return TensorBBox.create((o.bbox,tensor([self.otoi[o_] for o_ in o.lbl if o_ in self.otoi])))
# def decodes(self, o:TensorBBox):
# return BBox((o.bbox,[self.vocab[i_] for i_ in o.lbl]))
#
#BBox.default_type_tfms,BBox.default_item_tfms = BBoxCategorize,BBoxScaler
#export
#TODO tests
#def bb_pad(samples, pad_idx=0):
# "Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`."
# max_len = max([len(s[1][1]) for s in samples])
# def _f(img,bbox,lbl):
# bbox = torch.cat([bbox,bbox.new_zeros(max_len-bbox.shape[0], 4)])
# lbl = torch.cat([lbl, lbl .new_zeros(max_len-lbl .shape[0])+pad_idx])
# return img,TensorBBox((bbox,lbl))
# return [_f(x,*y) for x,y in samples]
#export
class BBoxLabels(MultiCategory):
create = MultiCategorize(add_na=True)
default_type_tfms = None
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
BBoxLabels.default_item_tfms = BBoxLabeler
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
# export
def clip_remove_empty(bbox, label):
"Clip bounding boxes with image border and label background the empty ones."
bbox = torch.clamp(bbox, -1, 1)
empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) < 0.)
return (bbox[~empty], label[~empty])
bb = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
bb,lbl = clip_remove_empty(bb, tensor([1,2,3,2]))
test_eq(bb, tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(lbl, tensor([1,2,2]))
#export
def bb_pad(samples, pad_idx=0):
"Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`."
samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
max_len = max([len(s[2]) for s in samples])
def _f(img,bbox,lbl):
bbox = torch.cat([bbox,bbox.new_zeros(max_len-bbox.shape[0], 4)])
lbl = torch.cat([lbl, lbl .new_zeros(max_len-lbl .shape[0])+pad_idx])
return img,bbox,lbl
return [_f(*s) for s in samples]
img1,img2 = TensorImage(torch.randn(16,16,3)),TensorImage(torch.randn(16,16,3))
bb1 = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
lbl1 = tensor([1, 2, 3, 2])
bb2 = tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]])
lbl2 = tensor([2, 2])
samples = [(img1, bb1, lbl1), (img2, bb2, lbl2)]
res = bb_pad(samples)
non_empty = tensor([True,True,False,True])
test_eq(res[0][0], img1)
test_eq(res[0][1], tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(res[0][2], tensor([1,2,2]))
test_eq(res[1][0], img2)
test_eq(res[1][1], tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5], [0,0,0,0]]))
test_eq(res[1][2], tensor([2,2,0]))
#export
TensorBBox.dbunch_kwargs = {'before_batch': bb_pad}
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
###Output
_____no_output_____
###Markdown
Show methods
###Code
#export
def get_grid(n, rows=None, cols=None, add_vert=0, figsize=None, double=False, title=None):
rows = rows or int(np.ceil(math.sqrt(n)))
cols = cols or int(np.ceil(n/rows))
if double: cols*=2 ; n*=2
figsize = (cols*3, rows*3+add_vert) if figsize is None else figsize
fig,axs = subplots(rows, cols, figsize=figsize)
axs = axs.flatten()
for ax in axs[n:]: ax.set_axis_off()
if title is not None: fig.suptitle(title, weight='bold', size=14)
return axs
#export
@typedispatch
def show_batch(x:TensorImage, y, samples, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, figsize=figsize)
ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y, samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:TensorCategory, samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize)
for i in range(2):
ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
ctxs = [r.show(ctx=c, color='green' if b==r else 'red', **kwargs)
for b,r,c,_ in zip(samples.itemgot(1),outs.itemgot(0),ctxs,range(max_n))]
return ctxs
#export
@typedispatch
def show_results(x:TensorImage, y:(TensorImageBase, TensorPoint, TensorBBox), samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize, double=True)
for i in range(2):
ctxs[::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs[::2],range(max_n))]
for x in [samples,outs]:
ctxs[1::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(x.itemgot(0),ctxs[1::2],range(max_n))]
return ctxs
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_transform.ipynb.
Converted 02_script.ipynb.
Converted 03_torch_core.ipynb.
Converted 03a_layers.ipynb.
Converted 04_dataloader.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
This cell doesn't have an export destination and was ignored:
e
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
|
module02/lesson15/10_finding_pairs/pairs_candidates_solution.ipynb | ###Markdown
Checking if a pair of stocks is cointegrated Imports
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.stattools import adfuller
import matplotlib.pyplot as plt
import quiz_tests
# Set plotting options
%matplotlib inline
plt.rc('figure', figsize=(16, 9))
# fix random generator so it's easier to reproduce results
np.random.seed(2018)
# use returns to create a price series
drift = 100
r1 = np.random.normal(0, 1, 1000)
s1 = pd.Series(np.cumsum(r1), name='s1') + drift
#make second series
offset = 10
noise = np.random.normal(0, 1, 1000)
s2 = s1 + offset + noise
s2.name = 's2'
## hedge ratio
lr = LinearRegression()
lr.fit(s1.values.reshape(-1,1),s2.values.reshape(-1,1))
hedge_ratio = lr.coef_[0][0]
#spread
spread = s2 - s1 * hedge_ratio
###Output
_____no_output_____
###Markdown
Question Do you think we'll need the intercept when calculating the spread? Why or why not? Since the intercept is a constant, it's not necessary to include it in the spread; it just shifts the spread up by a constant. We use the spread to check when it deviates from its historical average, so what matters going forward is how the spread differs from this average. Quiz Check if the spread is stationary using the Augmented Dickey-Fuller TestThe [adfuller](http://www.statsmodels.org/dev/generated/statsmodels.tsa.stattools.adfuller.html) function is part of the statsmodels library.```adfuller(x, maxlag=None, regression='c', autolag='AIC', store=False, regresults=False) returns: adf (float) – Test statistic, pvalue (float) – p-value, ...```
###Code
def is_spread_stationary(spread, p_level=0.05):
"""
spread: obtained from linear combination of two series with a hedge ratio
p_level: level of significance required to reject null hypothesis of non-stationarity
returns:
True if spread can be considered stationary
False otherwise
"""
adf_result = adfuller(spread)
pvalue = adf_result[1]
print(f"pvalue {pvalue:.4f}")
if pvalue <= p_level:
print(f"pvalue is <= {p_level}, assume spread is stationary")
return True
else:
print(f"pvalue is > {p_level}, assume spread is not stationary")
return False
quiz_tests.test_is_spread_stationary(is_spread_stationary)
# Try out your function
print(f"Are the two series candidates for pairs trading? {is_spread_stationary(spread)}")
###Output
pvalue 0.0000
pvalue is <= 0.05, assume spread is stationary
Are the two series candidates for pairs trading? True
|
NoteBooks/Intro_MRI_Bloch_Solvers.ipynb | ###Markdown
Bloch Equation SolversThis notebook will investigate methods to solve the Bloch equations. Recall the Bloch equations are:$$\frac{\partial M}{\partial t} = \gamma M \times B $$where $\gamma$ is the gyromagnetic ratio, $M$ is the magnetization, and $B$ is the magnetic field. It is most common to solve everything in the rotating frame, where the entire coordinate system rotates at the Larmor frequency ($\gamma B_0$), such that we only care about the frequency difference from that. There are two types of solvers:* Standard differential equation solvers. These solvers use Runge-Kutta or some other solver to integrate the changes. These solvers are great for RF pulses or short blocks of time with large changes.* Nutation solvers. These solvers break the problem into periods of rotation and periods of relaxation. This enables much more computationally efficient solutions. This notebook will show some ways to set this up, starting with the standard differential equation solver.
###Code
# This is a comment, Python will ignore this line
# Import libraries (load libraries which provide some functions)
import numpy as np # array library
import matplotlib.pyplot as plt # for plotting
import math
import cmath
# Hit the play button to run this cell
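# Quick worked example of the Larmor relation f = gamma_bar * B0 mentioned above.
# (B0 = 1.5 T is an assumed example value only; it is not used elsewhere in this notebook.)
gamma_bar = 42.58e6   # gyromagnetic ratio / (2*pi) [Hz/T]
B0 = 1.5              # example main field strength [T]
print(f"Larmor frequency at {B0} T: {gamma_bar*B0/1e6:.1f} MHz")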
###Output
_____no_output_____
###Markdown
1. Standard Solvers Bloch Equation SetupIn these solvers, we will use the matrix form of the Bloch equations\begin{align}\frac{\partial}{\partial t} \begin{bmatrix} M_x \\ M_y \\ M_z \end{bmatrix} = \begin{bmatrix} -1/T2 & \gamma B_z & -\gamma B_y \\ -\gamma B_z & -1/T2 & \gamma B_x \\ \gamma B_y & -\gamma B_x & -1/T1 \end{bmatrix} \begin{bmatrix}M_x \\ M_y \\ M_z \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ M_0/T1 \end{bmatrix}\end{align}This is effectively a rotation about each axis at the rate $\gamma B$, a decay of $M_x$ and $M_y$ at a rate $M_{xy}/T2$, and a recovery of $M_z$ at a rate $(M_0-M_z)/T1$. $M_0$ is assumed to be 1 for simplicity. SolverThe solver will integrate the above equation. This could be done with a simple stepwise solver:\begin{align}M_x(t+\Delta t) = M_x(t) + \Delta t \, \frac{\partial M_x(t)}{\partial t}\end{align}where $\Delta t$ is the step size, i.e. the temporal resolution of the discrete array. However, this method is not fully stable, especially since the derivative of $M_x$ depends on itself. You can very easily find odd conditions, such as the magnetization growing. Instead, more complex methods are needed, which allow for larger step sizes. This is a general issue with numerical solvers and not unique to MRI. The below code is a standard Runge-Kutta 4 (RK4) solver for a differential equation. It's essentially numerical integration but is much more stable. It achieves this stability by evaluating the derivative at intermediate points of each step. For more info see [Wikipedia](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods)
###Code
from scipy import interpolate
def bloch_solver( B, time, freq, T1=2000, T2=2000, M0=[0,0,1], GAM=42.58e6*2*math.pi):
# This is simple Rk4 solution to the Bloch Equations.
#
# Inputs:
# B(array) -- Magentic Field [N x 3] (T)
# time(array) -- Time of each point in waveforms (s)
# T1 -- Longitudinal relaxation times (s)
# T2 -- Transverse relaxation times (s)
# Freq Offset -- Off center frequency in Hz
# M0 -- Initial state of magnetization (not equilibrium magnetization)
# Outputs:
# MOutput -- Magnetization for each position in time
# Convert frequency to rads/s
act_freq = 2*math.pi*freq
#Convert to roation rates (gamma*B)
assert B.shape[1] == 3
Bx = GAM*B[:,0]
By = GAM*B[:,1]
Bz = GAM*B[:,2] + act_freq
#Create a spline for interpolation
spline_Bx = interpolate.splrep(time, Bx)
spline_By = interpolate.splrep(time, By)
spline_Bz = interpolate.splrep(time, Bz)
#Initialize
Mag = np.array(M0).reshape(3,1)
# Output storage
MOutput = np.zeros_like(B)
#Runge-Kutta PDE Solution
dt = time[2] - time[1]
for count, t1 in enumerate(time):
m1 = Mag
bx = interpolate.splev(t1, spline_Bx)
by = interpolate.splev(t1, spline_By)
bz = interpolate.splev(t1, spline_Bz)
k1 = np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]])@ m1 + np.array([[0],[0],[1/T1]])
t2 = t1 + dt/2
bx = interpolate.splev(t2, spline_Bx)
by = interpolate.splev(t2, spline_By)
bz = interpolate.splev(t2, spline_Bz)
m2 = Mag + k1*dt/2
k2 = np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]])@ m2 + np.array([[0],[0],[1/T1]])
t3 = t1 + dt/2
bx = interpolate.splev(t3, spline_Bx)
by = interpolate.splev(t3, spline_By)
bz = interpolate.splev(t3, spline_Bz)
m3 = Mag + k2*dt/2
k3 = np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]])@m3 + np.array([[0],[0],[1/T1]])
t4 = t1 + dt
bx = interpolate.splev(t4, spline_Bx)
by = interpolate.splev(t4, spline_By)
bz = interpolate.splev(t4, spline_Bz)
m4 = Mag + k3*dt
k4 = np.array([[ -1/T2, bz, -by],
[ -bz ,-1/T2, bx],
[ by , -bx, -1/T1]])@m4 + np.array([[0],[0],[1/T1]])
# Runge-Kutta averages the above terms
Mag = Mag + dt/6*(k1 + 2*k2 + 2*k3 + k4);
# Save to an array
MOutput[count,0]= Mag[0]
MOutput[count,1]= Mag[1]
MOutput[count,2]= Mag[2]
return MOutput
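# For contrast, a minimal forward-Euler step (the simple stepwise update described above) could
# look like the sketch below. It is illustrative only and is not used by the RK4 solver.
def bloch_euler_step(M, bx, by, bz, dt, T1, T2):
    "One (potentially unstable) Euler step of the matrix-form Bloch equation; bx, by, bz already include gamma."
    A = np.array([[-1/T2,    bz,   -by],
                  [  -bz, -1/T2,    bx],
                  [   by,   -bx, -1/T1]])
    return M + dt*(A @ M + np.array([[0],[0],[1/T1]]))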
###Output
_____no_output_____
###Markdown
Excitation ExperimentThis is the simplest experiment: a single excitation pulse followed by recovery. We assume the magnetization starts at $M_0 \equiv 1$. We aim to apply an RF pulse to rotate the magnetization into the transverse plane. The amount we rotate the magnetization is the flip angle ($\alpha$). This is equal to:\begin{equation}\alpha = \int_{0}^{T_{RF}} \gamma B_1(t) \, dt\end{equation}This assumes the RF pulse is applied along a single axis; more complex relationships exist for some pulses we won't cover (e.g. adiabatic pulses). The code below defines a pulse envelope and a phase for the pulse. The phase sets the direction of the $B_1$ field (e.g. along x, y, or a mixture).
###Code
# Simulation settings
dt = 2e-6 # systems run around 2 us resolution in the rotating frame
Tmax = 30e-3 #total time to simulate
gamma_bar = 42.58e6
gamma = math.pi*2.0*gamma_bar
# RF Settings
Trf = 2e-3 # period of the pulse
flip = 90 #degrees
excite_phase = 0 # excite phase in degrees
# Define the time span
time = np.arange(-1e-3,Tmax,dt)
# define the RF pulse shape
RF = np.zeros_like(time, dtype=complex)  # complex dtype (np.complex is deprecated/removed in recent numpy)
idx = (time<Trf) & (time>=0)
RF[idx] = 1 # Rect function
RF[idx] = np.exp( -16*(time[idx] - Trf/2)**2 / (Trf**2)) # Gaussian
# Now lets scale the RF pulse amplitude to be correct rotation for on-resonance spins
RF = RF*(flip/360) / np.sum(gamma_bar*RF*dt)
#Rotate to the phase of excitation
RF = RF*np.exp(2j*math.pi*excite_phase/360)
# The RF is complex
B = np.zeros( (len(time),3))
B[:,0] = np.real(RF)
B[:,1] = np.imag(RF)
# Plot
plt.figure(figsize=(8,8))
plt.title('Applied magnetic fields vs. time')
plt.plot(time,B[:,0],label='B_x')
plt.plot(time,B[:,1],label='B_y')
plt.plot(time,B[:,2],label='B_z')
plt.xlabel('Time [s]')
plt.ylabel('B1 [T]')
plt.legend()
plt.show()
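# Optional sanity check: numerically integrate gamma_bar*|B1(t)| over the pulse;
# by the flip-angle integral above this should come out close to `flip` (90 degrees here).
print('Flip angle from integral [deg]:', 360*np.sum(gamma_bar*np.abs(RF))*dt)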
###Output
_____no_output_____
###Markdown
Simulate using the solver defined aboveSome things to change:* Change T2 to some other values (typical range 1e-3 to 1 s); does the behavior match what you expect?* Change the frequency offset to other values. Why might the Larmor frequency and the RF frequency be slightly different?* Change T1 to some other values (typical range 100e-3 to 5 s); does the behavior match what you expect?
###Code
# B is defined above
# freq = offset frequency from assumed Larmor frequency
# time is defined above
# T1 is the longitudinal relaxation rate
# T2 is the transverse longitudunal rate
Mout = bloch_solver( B, time, freq=100, T1=2000, T2=20e-3)
# Plot
plt.figure(figsize=(8,8))
plt.plot(time,Mout[:,0],label=r'$M_x$')
plt.plot(time,Mout[:,1],label=r'$M_y$')
plt.plot(time,np.sqrt(Mout[:,1]**2+Mout[:,0]**2),'--',label=r'$M_{xy}$')
plt.plot(time,Mout[:,2],label=r'$M_z$')
plt.xlabel('Time [s]')
plt.ylabel('M [a.u.]')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2. Nutation Solvers Problem SetupNutation solvers break the Bloch equations into events. Events would include RF excitations, times of free recovery, and magnetic field changes from gradients (more on this last one later). The premise of this is that we know the solutions for the Bloch equations analytically. Such solvers can also include traditional solvers for events with unknown equations. Some we know include:* **Transverse magnetization decay without frequency shift**\begin{equation}M_{xy}(t) = M_{xy}(t=0)e^{-t/T2}\end{equation}* **Longitudinal magnetization recovery**\begin{equation}M_{z}(t) = M_0 + ( M_{z}(t=0)-M_0)e^{-t/T1}\end{equation}* **RF pulse at exactly the Larmor frequency**\begin{equation}M' = R_z(-\theta)R_x(\alpha)R_z(\theta)\,M\end{equation}where $M'$ is the magnetization immediately after the pulse, $R_z$ is a rotation about $z$, $R_x$ is a rotation about $x$, $\alpha$ is the flip angle, and $\theta$ is the phase of the RF pulse.
###Code
class Event:
def __init__(self, excite_flip=0, excite_phase=0, recovery_time=0, spoil=False):
self.excite_flip = excite_flip
self.excite_phase = excite_phase
self.recovery_time = recovery_time
self.spoil = spoil
def bloch_nutation_solver( event_list, M0, T1, T2, freq ):
# Inputs:
# event_list -- Special structure with entries
# .excite_flip flip angle of rotation
# .excite_phase phase of excite degrees
# .recovery_time time after excite to recover
# .spoil (if 'true' this set the Mxy to zero at the recovery)
# T1 -- Longitudinal relaxation times (s)
# T2 -- Transverse relaxation times (s)
# Freq Offset-- Off center frequency in Hz
# M0 -- Initial state of magnetization (not equilibrium magnetization)
# Outputs:
# time -- Magnetization for each position in time
# BOutput -- Magnetic field for each position in time (interpolated)
# Initialize
count = 0;
time = [0,];
Mout = [M0,]
M=M0;
# Go through the event_list
for event in event_list:
theta = event.excite_phase * math.pi / 180
alpha = event.excite_flip * math.pi / 180
T = event.recovery_time
spoil = event.spoil
# Excite
Rz = np.array([[math.cos(theta), math.sin(theta), 0],
[-math.sin(theta), math.cos(theta), 0],
[0, 0, 1]])
Rx = np.array([[1,0, 0],
[0, math.cos(alpha), math.sin(alpha)],
[0, -math.sin(alpha), math.cos(alpha)]])
M = np.linalg.inv(Rz)@Rx@Rz@M
# Relaxation (Transverse)
if spoil:
Mxy = 0
else:
Mxy = M[0] + 1j*M[1]
Mxy = Mxy*cmath.exp( 2j*math.pi*freq*T)*math.exp(-T/T2)
# Relaxation (Longitudinal)
Mz = M[2]
Mz = 1 + (Mz - 1)*math.exp(-T/T1);
# Put back into [Mx; My; Mz] vector
M = np.array([[Mxy[0].real], [Mxy[0].imag], [Mz[0]]])
# Store for output
time.append(time[-1]+T)
Mout.append(M)
Mout = np.array(Mout)
return time, Mout
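# Tiny worked example of the closed-form pieces listed above (all values here are illustrative assumptions):
T1_ex, T2_ex, dt_ex = 1.0, 0.1, 0.1           # seconds
Mxy_ex, Mz_ex = 1.0, 0.0                      # state right after an ideal 90-degree pulse
print('Mxy after dt_ex:', Mxy_ex*math.exp(-dt_ex/T2_ex))            # transverse decay
print('Mz  after dt_ex:', 1 + (Mz_ex - 1)*math.exp(-dt_ex/T1_ex))   # longitudinal recovery (M0 = 1)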
###Output
_____no_output_____
###Markdown
ExampleHere we simulate a situation in which we apply two RF pulses separated by a gap:$90 ^{\circ} $ - Delay - $180 ^{\circ}$ - Free recoverySome experiments to consider:* Change the flip angles. * Change the delays (modify the range(100) to a desired delay in ms).* Change T1 and T2
###Code
# Blank list of events
event_list = []
# Excite 90 degree
event_list.append( Event(excite_flip=90, excite_phase=0))
#Recover (100ms,could be one event but plotting possible here)
for pos in range(100):
event_list.append( Event(recovery_time=1e-3))
# Excite 180 degree
event_list.append( Event(excite_flip=180, excite_phase=90))
#Recover (500ms,could be one event but plotting possible here)
for pos in range(500):
event_list.append( Event(recovery_time=1e-3))
# Simulate
M0 = np.array([[0],[0],[1.0]])
T1 = 1
T2 = 1
freq = 3
time,Mout = bloch_nutation_solver( event_list, M0, T1, T2, freq );
plt.figure(figsize=(8,8))
plt.plot(time,Mout[:,0], label=r'$M_x$')
plt.plot(time,Mout[:,1], label=r'$M_y$')
plt.plot(time,np.sqrt(Mout[:,1]**2+Mout[:,0]**2), '--', label=r'$M_{xy}$')
plt.plot(time,Mout[:,2], label=r'$M_z$')
plt.legend()
plt.show()
###Output
_____no_output_____ |
07_Horner_Plot.ipynb | ###Markdown
The Horner Plot methodHeat **always** flows from hotter to colder parts. During a drilling process, the relatively cool drilling fluid *cools* the surroundings in the vicinity of the borehole. If temperatures are measured shortly after the drilling ended, they therefore likely underestimate the true rock temperature at that depth. Over time, the measured temperatures will increase, as temperature in the borehole re-equilibrates with the temperature of the surrounding rockmass. As it is usually not possible to wait for the temperatures to re-equilibrate, there exist correction methods for those measured *Bottom Hole Temperatures* (often abbreviated as *BHT*).The Horner plot method is one correction method. It uses the behaviour of *in situ* temperature (and pressure) when disturbed by drilling. It plots the following linear equation: $$ T = T_\infty + \frac{Q}{4\pi\lambda} \times \ln(1 + \frac{t_c}{\Delta t}) $$where $T$ is the bottom-hole temperature [°C], $T_\infty$ is the 'virgin rock temperature' or *in-situ* temperature [°C], $Q$ the heat flow per unit length [W/m], $\lambda$ the thermal conductivity of the rock [W/(m K)], $t_c$ the time between end of drilling and end of mud circulation and $\Delta t$ the time between end of mud circulation and measurement. This equation resembles a linear equation$$ y = b + m \times x $$where $\ln(1 + \frac{t_c}{\Delta t})$ equals $x$ and $T$ equals $y$. Now if we have temperature measurements at different times $\Delta t$, we can assess the *in-situ* temperature by linear regression: the intercept $b$ corresponds to $T_\infty$, since $x \to 0$ as $\Delta t \to \infty$. Assume we have three temperatures measured at three different times at a depth of 1500 m: * $\Delta t_1$ = 10 h, T = 53 °C * $\Delta t_2$ = 15.5 h, T = 56.5 °C * $\Delta t_3$ = 24.5 h, T = 59.8 °C
###Code
# import some libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Set up the variables
tc = 3.
Dt = np.array([10., 15.5, 24.5])#.reshape(3,1)
T = np.array([53., 56.5, 59.8])#.reshape(3,1) # the y values in the linear equation
# calculate x values for the linear equation
x = np.log(1 + (tc/Dt))
###Output
_____no_output_____
###Markdown
Now that we know x ($ln(1 + \frac{t_c}{\Delta t})$) and y ($T$) values, we can do a linear regression to get $m$ and $b$ of the linear equation
###Code
# linear regression
m,b = np.polyfit(x,T,1)
# set up a regression line
x_reg = np.linspace(0,0.3,200)
T_reg = m * x_reg + b
# plot the results
fig = plt.figure(figsize=(12,5))
dots, = plt.plot(x,T,'o', color='#660033', alpha=0.8)
line, = plt.plot(x_reg,T_reg, '-', linewidth=2, color='#33ADAD')
plt.xlabel(r'ln(1 + ($t_c/\Delta t$))', fontsize=16)
plt.ylabel(r'Temperature [$^\circ$C]', fontsize=16)
plt.tick_params(axis='both', which='major', labelsize=13)
plt.legend([dots,line], ["Measured temperatures", "linear regression model"], loc=1, fontsize=18)
###Output
_____no_output_____
###Markdown
We know that the slope $m$ equals $\frac{Q}{4\pi \lambda}$. If the mean thermal conductivity of the rocks equals 2.24 W m$^{-1}$ K$^{-1}$, we can calculate Q.
###Code
lam = 2.24  # mean thermal conductivity lambda in W/(m K)
Q = 4*np.pi*lam*m
print("The heat flow per unit length Q is {} W/m (negative sign means flow into the borehole).".format(Q))
###Output
The heat flow per unit length Q is -1293.98727063 W/m (negative sign means flow into the borehole).
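###Markdown
The intercept $b$ of the regression is the Horner-plot estimate of the virgin rock temperature $T_\infty$, because the term $ln(1 + \frac{t_c}{\Delta t})$ tends to zero as $\Delta t \rightarrow \infty$. A minimal sketch, reusing the slope and intercept already computed above:
###Code
# The intercept of the Horner plot is the estimated in-situ (virgin rock) temperature
T_inf = b
print("Estimated virgin rock temperature at 1500 m depth: {:.1f} degC".format(T_inf))
###Output
_____no_output_____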
|
Week-1/Etivity1.2.ipynb | ###Markdown
**Artificial Intelligence - MSc**ET5003 - MACHINE LEARNING APPLICATIONS Instructor: Enrique Naredo ET5003_Etivity-1 Introduction 16154541 - Darren White Explanation of the problemThe problem presented is to use Bayesian multinomial logistic regression to classify images from the MNIST database of handwritten digits. Dataset The MNIST database is a set of images of handwritten digits. The database contains 60,000 training images and 10,000 testing images. The database is based on the NIST database, which was compiled from American Census Bureau employees and high school students. The NIST training set comes from the Census Bureau employees and the NIST test set was taken from high school students. As a result of the difference between the two groups it was posited that this split was not well suited to machine learning experiments.The MNIST database is therefore compiled as follows:* 50% of the training data is taken from the NIST training set.* 50% of the training data is taken from the NIST test set.* 50% of the testing data is taken from the NIST training set.* 50% of the testing data is taken from the NIST test set.The MNIST database is maintained by Yann LeCun (Courant Institute, NYU), Corinna Cortes (Google Labs, New York) and Christopher J.C. Burges (Microsoft Research, Redmond). Method Multinomial Logistic Regression (MLR) is used to classify the images in the MNIST database. MLR is an extension of Binary Logistic Regression (BLR) in which numerous binary models are deployed simultaneously. Multinomial logistic regression is used to classify categorical outcomes rather than continuous outcomes. Multinomial models do not assume normality, linearity or homoscedasticity. This makes it a strong modelling choice as real-world data often displays these imperfections. Code Imports
###Code
# Suppressing Warnings:
import warnings
warnings.filterwarnings("ignore")
! pip install opencv-python
! pip install scikit-image
! pip install arviz
# Used to perform logistic regression, scoring, shuffle & split of dataset
# to training & test
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
# Used to fetch MNIST dataset, works locally & on Google Colab
from sklearn.datasets import fetch_openml
# Used to generate probabilistic multinomial model
import pymc3 as pm
# Used to view plots of posterior
import arviz as az
# Used for numerical operations, generating tensors for PyMC3 usage
import theano as tt
# Used for numerical operations
import numpy as np
# Used to generate random numbers to draw samples from dataset
import random
# Used in plotting images & graphs
import matplotlib.pyplot as plt
from IPython.display import HTML
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load Data
###Code
mnist = fetch_openml('mnist_784', cache=False)
X = mnist.data.astype('float32')
y = mnist.target.astype('int64')
X /= 255.0
X.min(), X.max()
###Output
_____no_output_____
###Markdown
The use of `sklearn.datasets` through `fetch_openml()` to gather the MNIST dataset allows the notebook to run on both Google Colab and locally without change to the code. Preprocessing We split the MNIST data into a train and test set with 75% as training data and 25% as test data.
###Code
# assigning features and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
assert(X_train.shape[0] + X_test.shape[0] == mnist.data.shape[0])
X_train.shape, y_train.shape
def plot_example(X: np.array, y: np.array, n: int=5, plot_title: str=None) -> None:
"""Plots the first 'n' images and their labels in a row.
Args:
X (numpy array): Image data with each row of array contining an image
as a flat vector
y (numpy array): Image labels
n (int): Number of images to display
plot_title (str): Title of the plot
Returns:
None
"""
fig, axs = plt.subplots(1, n)
fig.suptitle(plot_title, fontsize=20)
axs = axs.ravel()
for i, (img, y) in enumerate(zip(X[:n].reshape(n, 28, 28), y[:n])):
axs[i].axis('off')
axs[i].imshow(img, cmap="Greys_r")
axs[i].set_title(y)
fig.tight_layout()
fig.subplots_adjust(top=0.88)
plt.show()
plot_example(X=X_train, y=y_train, n=6, plot_title="Training Data")
###Output
_____no_output_____
###Markdown
Building a Number Classifier from the MNIST Database A Bayesian Multinomial Logistic Regression (BMLR) shall be built to classify the handwritten numbers in the MNIST Database.Multinomial logistic regression is a classification technique that is used to predict the category of an input or the probability of its membership to a category. This is calculated based on multiple independent variables that are either binary or continuous. Multinomial logistic regression allows for the dependent variable to be part of more than two categories (Czepiel, n.d.)(Carpita, et al., 2014).To build the classifier we must first understand its basic construction. The formula for BMLR is: $Pr(Y_{ik}) = Pr(Y_i = k\mid x_i; \beta_1 , \beta_2 , ..., \beta_m) = \frac{\displaystyle\exp(\beta_{0k} + x_i \beta_k)}{\displaystyle\sum_{j=1}^{m}\exp(\beta_{0j} + x_i\beta_j)}$ with $k = 1,2,...$ where $\beta_k$ is a row vector of regression coefficients of $x$ for the $k$th category of $y$. Since multinomial logistic regression is an expansion of binary logistic regression we will first define a binary model. Logistic regression assumes that for a single data point $(x,y)$: $P(Y = 1 \mid X = x) = \sigma(z)$ where $z = \theta_0 + \displaystyle\sum_{i = 1}^{m} \theta_i x_i$ where $\theta$ is a vector of parameters of length $m$; the values of these parameters are found from $n$ training examples. This is equivalent to: $P(Y = 1 \mid X = x) = \sigma(\theta^Tx)$ Maximum likelihood estimation (MLE) is used to choose the parameter values of the logistic regression. To do this we calculate the log-likelihood and find the values of $\theta$ that maximise it. Since the predictions being made are binary we can define each label as a Bernoulli random variable. The probability of one data point can thus be written as: $P(Y = y\mid X=x) = \sigma(\theta^Tx)^y \cdot [1 - \sigma(\theta^Tx)]^{(1-y)} $ The likelihood of all of the data is defined as the likelihood of the independent training labels: $L(\theta) = \displaystyle\prod_{i =1}^n P(Y = y^{(i)} \mid X = x^{(i)})$ Using the likelihood of a Bernoulli we get $L(\theta) = \displaystyle\prod_{i =1}^n P(Y = y^{(i)} \mid X = x^{(i)}) = \displaystyle\prod_{i=1}^n\sigma(\theta^Tx^{(i)})^{y^{(i)}} \cdot [1-\sigma(\theta^Tx^{(i)})]^{(1-y^{(i)})}$ Therefore the log-likelihood of the logistic regression is: $LL(\theta) = \displaystyle\sum_{i=1}^{n} y^{(i)}\log[\sigma(\theta^Tx^{(i)})] + (1 - y^{(i)}) \log[1 - \sigma(\theta^Tx^{(i)})]$ By taking the partial derivative with respect to each parameter we can find the values of $\theta$ that maximise the log-likelihood. The partial derivative of $LL(\theta)$ is: $\frac{\partial LL(\theta)}{\partial\theta_j} = \displaystyle\sum_{i=1}^n [y^{(i)} - \sigma(\theta^Tx^{(i)})]x^{(i)}_j$ Using this, various optimisation techniques can be deployed to identify the maximum likelihood. A typical binary logistic regression might use gradient descent. However multinomial classifiers will likely use more sophisticated techniques (Monroe, 2017).
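As a quick, self-contained illustration of the formulas above (all numbers below are made-up toy values, not taken from the MNIST data):
###Code
# Toy sketch of the binary log-likelihood and the multinomial softmax described above.
# All values here are illustrative assumptions, not MNIST data.
import numpy as np
theta_toy = np.array([0.5, -0.25, 0.1])   # assumed parameters [theta_0, theta_1, theta_2]
x_toy = np.array([1.0, 2.0, -1.0])        # x_0 = 1 carries the intercept theta_0
y_toy = 1                                 # assumed binary label
p_toy = 1.0/(1.0 + np.exp(-(theta_toy @ x_toy)))   # sigma(theta^T x)
log_likelihood = y_toy*np.log(p_toy) + (1 - y_toy)*np.log(1 - p_toy)
print("P(Y=1|x) =", p_toy, " log-likelihood =", log_likelihood)
# Multinomial case: the softmax turns one score per class into probabilities that sum to one
scores_toy = np.array([1.2, 0.3, -0.5])   # assumed per-class scores beta_0k + x*beta_k
probs_toy = np.exp(scores_toy)/np.exp(scores_toy).sum()
print("softmax probabilities =", probs_toy, " sum =", probs_toy.sum())
###Output
_____no_output_____
###Markdown
Classifier Dataset Summary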
###Code
# Number of training examples
n_train = len(X_train)
# Number of testing examples.
n_test = len(X_test)
# Shape of an MNIST image
image_shape =X_train[0].shape
# unique classes/labels in the dataset.
alltotal = set(y_train )
# number of classes
n_classes = len(alltotal )
# print information
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
## plot histogram
fig, ax = plt.subplots()
# array with evenly spaced classes
ind = np.arange(n_classes)
# histogram
n, bins, patches = ax.hist(y_train, n_classes, ec='black')
# horizontal axis label
ax.set_xlabel('classes')
# vertical axis label
ax.set_ylabel('counts')
# plot title
ax.set_title(r'Histogram of MNIST images')
# show plot
plt.figure(figsize=(10,8))
plt.show()
###Output
_____no_output_____
###Markdown
We can see from the histogram that we have a relatively balanced dataset, which should create good conditions for classification.
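To quantify this, the exact per-class counts can be printed as well (a small added check using the same `y_train` labels as the histogram):
###Code
# Exact number of training examples per digit class
class_labels, class_counts = np.unique(y_train, return_counts=True)
print(dict(zip(class_labels, class_counts)))
###Output
_____no_output_____
###Markdown
Data Preparation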
###Code
# Seed the run for repeatability
np.random.seed(0)
# Classes we will retain
n_classes = 3
classes = [3, 7, 9]
# The number of instances we'll keep for each of our 3 digits:
N_per_class = 500
X = []
labels = []
for d in classes:
imgs = X_train[np.where(y_train==d)[0],:]
X.append(imgs[np.random.permutation(imgs.shape[0]),:][0:N_per_class,:])
labels.append(np.ones(N_per_class)*d)
X_train2 = np.vstack(X).astype(np.float64)
y_train2 = np.hstack(labels)
###Output
_____no_output_____
###Markdown
We reduce the number of classes to 3 and, rather than randomly selecting them for each notebook run, we have explicitly selected them. This allows us to discuss findings as a group based on the same data. We select all image indices of each desired class from `X_train`, randomly arrange them and append the first `N_per_class` of them to the `X` list.
###Code
print(X_train2.shape,y_train2.shape)
# plot digits
def plot_digits(instances, images_per_row=5, **options):
"""Plots images in rows
Args:
instances (numpy array): Numpy array of image data
images_per_row (int): Number of images to print on each row
Returns:
None
"""
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap='gist_yarg', **options)
plt.axis("off")
# Show random instances from each Digit:
plt.figure(figsize=(8,8))
# Selecting a few label indices from each of the 3 classes to show:
n_sample = 9
label_indices = []
for i in range(n_classes):
label_indices += random.sample(range(i*N_per_class, (i+1)*N_per_class), n_sample)
print(label_indices)
# Plotting 'original' image
plot_digits(X_train2[label_indices,:], images_per_row=9)
plt.title("Original Image Samples", fontsize=14)
### we split the dataset in training and validation
X_tr, X_val, y_tr, y_val = train_test_split(X_train2, y_train2, test_size=0.2, random_state=0)
X_tr, y_tr = shuffle(X_tr, y_tr)
print(X_tr.shape)
print(X_val.shape)
print(y_tr.shape)
print(y_val.shape)
# transform images into vectors
X_trv = X_tr.flatten().reshape(X_tr.shape[0],X_tr.shape[1])
X_valv = X_val.flatten().reshape(X_val.shape[0],X_tr.shape[1])
print(X_trv.shape)
print(X_valv.shape)
print(y_tr.shape)
print(y_val.shape)
###Output
(1200, 784)
(300, 784)
(1200,)
(300,)
###Markdown
Given that the MNIST dataset is already in flat vector form, i.e. each image is already a one-dimensional vector, the `flatten` step is not required. However we retain this step for future reference when using other datasets that may require flattening.
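For a dataset that does store each image as a 2-D array, the same idea would look like the following sketch (hypothetical data, not needed for MNIST):
###Code
# Hypothetical example: flattening a batch of 2-D images into one row vector per image
imgs_2d = np.random.rand(5, 28, 28)            # five fake 28x28 images (assumed data)
imgs_flat = imgs_2d.reshape(imgs_2d.shape[0], -1)
print(imgs_2d.shape, "->", imgs_flat.shape)    # (5, 28, 28) -> (5, 784)
###Output
_____no_output_____
###Markdown
Algorithm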
###Code
#General-recipe ML logistic regression
clf = LogisticRegression(random_state=0, max_iter=2000, C=100, solver='lbfgs', multi_class='multinomial').fit(X_trv, y_tr)
y_pred_logi = clf.predict(X_valv)
y_pred_logi_prob = clf.predict_proba(X_valv)
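# keep the highest per-class probability as the model's confidence for each validation image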
prob_classmax = np.max(y_pred_logi_prob,axis=1)
print("Accuracy =", accuracy_score(y_pred_logi, y_val))
###Output
Accuracy = 0.92
###Markdown
The accuracy achieved (0.92) is relatively good. We'll review the highest probabilities for correctly and incorrectly classified images.
###Code
# probability of general-recipe logistic regression in correct instances
highest_prob_matches = np.sort(prob_classmax[y_val==y_pred_logi])
print(f"Probabilities of best scoring matches:\n{highest_prob_matches}")
# probability of general-recipe logistic regression in wrong instances
highest_prob_mismatches = np.sort(prob_classmax[y_val!=y_pred_logi])
print(f"Probabilities of best scoring mismatches:\n{highest_prob_mismatches}")
mismatch_indices_gt_99 = np.intersect1d(np.where(y_val!=y_pred_logi), np.where(prob_classmax > 0.99))
print(f"Mismatch count above 99% probability : {len(mismatch_indices_gt_99)}")
# Display mismatches above 99% probability
display_cnt = len(mismatch_indices_gt_99)
X_valv_mismatches = []
y_val_mismatches = []
y_pred_mismatches = []
compare = 'Comparison of actual vs predicted: \n'
for idx in mismatch_indices_gt_99:
X_valv_mismatches.append(X_valv[idx])
y_val_mismatches.append(y_val[idx])
y_pred_mismatches.append(y_pred_logi[idx])
compare += (f"y_pred:{y_pred_logi[idx]} y_val:{y_val[idx]}" +\
f", Pr({classes[0]}):{y_pred_logi_prob[idx][0]:.8f}" +\
f", Pr({classes[1]}):{y_pred_logi_prob[idx][1]:.8f}" +\
f", Pr({classes[2]}):{y_pred_logi_prob[idx][2]:.8f}\n")
X_valv_mismatches = np.array(X_valv_mismatches)
y_val_mismatches = np.array(y_val_mismatches)
y_pred_mismatches = np.array(y_pred_mismatches)
print(compare)
plot_example(X=X_valv_mismatches, y=y_pred_mismatches,
n=display_cnt, plot_title="Mismatches >99% probability (predictions labelled)")
###Output
Comparison of actual vs predicted:
y_pred:7.0 y_val:9.0, Pr(3):0.00000160, Pr(7):0.99999838, Pr(9):0.00000002
y_pred:7.0 y_val:9.0, Pr(3):0.00007013, Pr(7):0.99638036, Pr(9):0.00354951
y_pred:9.0 y_val:7.0, Pr(3):0.00051846, Pr(7):0.00205414, Pr(9):0.99742740
y_pred:9.0 y_val:7.0, Pr(3):0.00212730, Pr(7):0.00167353, Pr(9):0.99619916
y_pred:7.0 y_val:9.0, Pr(3):0.00000096, Pr(7):0.99945519, Pr(9):0.00054385
y_pred:9.0 y_val:7.0, Pr(3):0.00000448, Pr(7):0.00098430, Pr(9):0.99901122
y_pred:7.0 y_val:9.0, Pr(3):0.00000096, Pr(7):0.99999904, Pr(9):0.00000000
###Markdown
We observe seven wrong predictions by logistic regression with a confidence above 99%. On reviewing the images it is difficult to understand why these have been labelled incorrectly with such high confidence. The probability values for the correct labels are correspondingly very low, given that the total probability must sum to 1. Probabilistic ML
###Code
X_trv.shape[1]
import sklearn.preprocessing
## We use LabelBinarizer to transfor classes into counts
# neg_label=0, pos_label=1
y_2_bin = sklearn.preprocessing.LabelBinarizer().fit_transform(y_tr.reshape(-1,1))
nf = X_trv.shape[1]
# number of classes
nc = len(classes)
# floatX = float32
floatX = tt.config.floatX
init_b = np.random.randn(nf, nc-1).astype(floatX)
init_a = np.random.randn(nc-1).astype(floatX)
with pm.Model() as multi_logistic:
# Prior
β = pm.Normal('beta', 0, sigma=100, shape=(nf, nc-1), testval=init_b)
α = pm.Normal('alpha', 0, sigma=100, shape=(nc-1,), testval=init_a)
# we need to consider nc-1 features because the model is not identifiable
# the softmax turns a vector into a probability that sums up to one
# therefore we add zeros to go back to dimension nc
# so that softmax returns a vector of dimension nc
β1 = tt.tensor.concatenate([np.zeros((nf,1)),β ],axis=1)
α1 = tt.tensor.concatenate([[0],α ],)
# Likelihood
mu = pm.math.matrix_dot(X_trv,β1) + α1
# It doesn't work if the problem is binary
p = tt.tensor.nnet.nnet.softmax(mu)
observed = pm.Multinomial('likelihood', p=p, n=1, observed=y_2_bin)
###Output
_____no_output_____
###Markdown
We set our priors as normal distributions with mean of 0 and $\sigma$ of 100. For $\alpha$ we specify a vector size of the class count minus one, i.e. $3-1=2$. For $\beta$ we specify a matrix size of the input pixel count times the class count minus one, i.e. $784 \times 2$.
###Code
y_2_bin
with multi_logistic:
#approx = pm.fit(300000, method='advi') # takes longer
approx = pm.fit(3000, method='advi')
plt.figure(figsize=(10,8))
plt.ylabel('Loss')
plt.xlabel('Iteration')
plt.plot(approx.hist)
plt.title('Loss vs Iteration', fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
The loss is seen to decrease as we iterate further on the model.
###Code
# View graph of the posterior alpha & beta values
dd = 300
posterior = approx.sample(draws=dd)
az.plot_trace(posterior);
# View summary table of the posterior
with multi_logistic:
display(az.summary(posterior, round_to=2))
###Output
arviz - WARNING - Shape validation failed: input_shape: (1, 300), minimum_shape: (chains=2, draws=4)
###Markdown
The summary table and plots show our two alpha values for our multinomial three-class problem. The 784 beta values correspond to the input feature set size of $28 \times 28 = 784$ pixels per image. The right-hand side of the plot shows the posterior samples drawn from the fitted approximation, plotted for the beta and alpha values.
###Code
## The softmax function transforms each element of a collection by computing the exponential
# of each element divided by the sum of the exponentials of all the elements.
from scipy.special import softmax
#select an image in the test set
i = 10
#i = random.randint(0, dd)
#select a sample in the posterior
s = 100
#s = random.randint(0, dd)
beta = np.hstack([np.zeros((nf,1)), posterior['beta'][s,:] ])
alpha = np.hstack([[0], posterior['alpha'][s,:] ])
image = X_valv[i,:].reshape(28,28)
plt.figure(figsize=(2,2))
plt.imshow(image,cmap="Greys_r")
np.set_printoptions(suppress=True)
print("test image #" + str(i))
print("posterior sample #" + str(s))
print("true class=", y_val[i])
print("classes: " + str(classes))
print("estimated prob=",softmax((np.array([X_valv[i,:].dot(beta) + alpha])))[0,:])
# Bayesian prediction
# return the class that has the highest posterior probability
y_pred_Bayesian=[]
for i in range(X_valv.shape[0]):
val=np.zeros((1,len(classes)))
for s in range(posterior['beta'].shape[0]):
beta = np.hstack([np.zeros((nf,1)), posterior['beta'][s,:] ])
alpha = np.hstack([[0], posterior['alpha'][s,:] ])
val = val + softmax((np.array([X_valv[i,:].dot(beta) + alpha])))
mean_probability = val/posterior['beta'].shape[0]
y_pred_Bayesian.append( np.argmax(mean_probability))
print(y_pred_Bayesian)
# recall the classes we are using
print(classes)
# prediction array (using classes)
nn = 10 # just an example
np.array(classes)[y_pred_Bayesian[0:nn]]
# using validation: y_val
print("Accuracy=", accuracy_score(np.array(classes)[y_pred_Bayesian], y_val))
###Output
Accuracy= 0.9166666666666666
###Markdown
Selecting Differences
###Code
y_predB=[]
for i in range(X_valv.shape[0]):
#print(i)
val=[]
for s in range(posterior['beta'].shape[0]):
beta = np.hstack([np.zeros((nf,1)), posterior['beta'][s,:] ])
alpha = np.hstack([[0], posterior['alpha'][s,:] ])
val.append(softmax((np.array([X_valv[i,:].dot(beta) + alpha])))[0,:])
#mean probability
valmean = np.mean(val,axis=0)
#class with maximum mean probability
classmax = np.argmax(valmean)
#ranks
ranks = np.array(val.copy())
ranks = ranks *0 #init
colmax = np.argmax(np.array(val),axis=1)
ranks[np.arange(0,len(colmax)),colmax]=1
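# 'ranks' is now one-hot per posterior sample; the std of the winning-class column measures
# how much the posterior samples disagree on the predicted class (high std = difficult instance)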
y_predB.append( [classmax, valmean[classmax], np.std(ranks,axis=0)[classmax]])
y_predB= np.array(y_predB)
# prediction array
mm = 10
y_predB[0:mm,:]
#sorting in descending order
difficult = np.argsort(-y_predB[:,2])
y_predB[difficult[0:mm],:]
#probability of general-recipe logistic regression in wrong instances
prob_classmax[y_pred_logi != y_val]
y_predB[y_pred_logi != y_val,:]
## Difficult & easy instances
easy = np.argsort(y_predB[:,2])
print("Accuracy in easy instances =", accuracy_score(y_pred_logi[easy[0:100]], y_val[easy[0:100]]))
difficult = np.argsort(-y_predB[:,2])
print("Accuracy in difficult instances =", accuracy_score(y_pred_logi[difficult[0:100]], y_val[difficult[0:100]]))
# show the 10 'easiest' images (lowest disagreement across posterior samples)
fig, axs = plt.subplots(2,5, figsize=(15, 6))
fig.subplots_adjust(hspace = .2, wspace=.001)
axs = axs.ravel()
for i in range(10):
index = easy[i]
image = X_valv[index,:].reshape(28,28)
axs[i].axis('off')
axs[i].imshow(image,cmap="Greys_r")
# show the 10 most 'difficult' images (highest disagreement across posterior samples)
fig, axs = plt.subplots(2,5, figsize=(15, 6))
fig.subplots_adjust(hspace = .2, wspace=.001)
axs = axs.ravel()
for i in range(10):
index = difficult[i]
image = X_valv[index,:].reshape(28,28)
axs[i].axis('off')
axs[i].imshow(image,cmap="Greys_r")
###Output
_____no_output_____
###Markdown
Predicted answers - easy
###Code
plot_example(X=X_valv[easy], y=y_pred_logi[easy], n=6, plot_title="Predicted easy examples")
###Output
_____no_output_____
###Markdown
Actual answers - easy
###Code
plot_example(X=X_valv[easy], y=y_val[easy], n=6, plot_title="Actual easy examples")
###Output
_____no_output_____
###Markdown
Predicted answers - difficult
###Code
plot_example(X=X_valv[difficult], y=y_pred_logi[difficult], n=6, plot_title="Predicted Answers - difficult" )
###Output
_____no_output_____
###Markdown
Actual answers - difficult
###Code
plot_example(X=X_valv[difficult], y=y_val[difficult], n=6, plot_title="Actual Answers - Difficult")
###Output
_____no_output_____ |
tutorials/noise/7_accreditation.ipynb | ###Markdown
Accreditation protocol Accreditation Protocol (AP) is a protocol devised to characterize the reliability of noisy quantum devices.Given a noisy quantum device implementing a "target" quantum circuit, AP certifies an upper-bound on the variation distance between the probability distribution of the outputs returned by the device and the ideal probability distribution.This method is based on Ferracin et al, "Accrediting outputs of noisy intermediate-scale quantum devices", https://arxiv.org/abs/1811.09709.This notebook gives an example of how to use the ignis.verification.accreditation module. This particular example shows how to accredit the outputs of a 4-qubit quantum circuit of depth 5. All the circuits are run using the noisy Aer simulator.
###Code
#Import general libraries (needed for functions)
import numpy as np
from numpy import random
import qiskit
#Import Qiskit classes
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, execute
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error
#Import the accreditation functions.
from qiskit.ignis.verification.accreditation import AccreditationFitter,AccreditationCircuits
###Output
_____no_output_____
###Markdown
Input to the protocol AP can accredit the outputs of a __target circuit__ that 1) Takes as input $n$ qubits in the state $|{0}>$ 2) Ends with single-qubit measurements in the Pauli-$Z$ basis 3) Is made of $m$ "bands", each band containing a round of single-qubit gates and a round of controlled-$Z$ gates.The accreditation is made by employing __trap circuits__, circuits that can be efficiently simulated on a classical computer and whose outputs are used to witness the correct functionality of the device.Let's now draw a target quantum circuit!We start with a simple circuit to generate and measure 4-qubit GHZ states.
###Code
# Create a Quantum Register with n_qb qubits.
q_reg = QuantumRegister(4, 'q')
# Create a Classical Register with n_qb bits.
c_reg = ClassicalRegister(4, 's')
# Create a Quantum Circuit acting on the q register
target_circuit = QuantumCircuit(q_reg, c_reg)
target_circuit.h(0)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.cz(0,1)
target_circuit.cz(0,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.cz(0,3)
target_circuit.cz(1,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.measure(q_reg, c_reg)
target_circuit.draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Generating accreditation circuits The function $accreditation\_circuits$ generates all the circuits required by AP, target and traps. It automatically appends random Pauli gates to the circuits (if the implementation is noisy, these random Pauli gates reduce the noise to Pauli errors!). It also returns the list $postp\_list$ of strings required to post-process the outputs, as well as the number $v\_zero$ indicating the circuit implementing the target.This is the target circuit with randomly chosen Pauli gates:
###Code
accsys = AccreditationCircuits(target_circuit)
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
This is what a trap circuit looks like:
###Code
circ_list[(v_zero+1)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
One can use the optional `two_qubit_gate` argument to use cx instead of cz gates and can arbitrarily change the coupling map, in order to compile to the desired device topology (which in this case might lead to more layers than expected).
###Code
accsys.target_circuit(target_circuit, two_qubit_gate='cx', coupling_map=[[0,1],[1,2],[2,3]] )
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Simulate the ideal circuits Let's implement AP.We use $accreditation\_circuits$ to generate target and trap circuits.Then, we use the function $single\_protocol\_run$ to implement all these circuits, keeping the output of the target only if all of the traps return the correct output.
###Code
simulator = qiskit.Aer.get_backend('qasm_simulator')
test_1 = AccreditationFitter()
# Create target and trap circuits with random Pauli gates
accsys = AccreditationCircuits(target_circuit)
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_1.single_protocol_run(result, postp_list, v_zero)
print("Outputs of the target: ",test_1.outputs," , AP",test_1.flag,"these outputs!")
###Output
Outputs of the target: ['0100'] , AP accepted these outputs!
###Markdown
In the absence of noise, all traps return the expected output, therefore we always accept the output of the target.To obtain an upper-bound on the variation distance on the outputs of the target circuit, we need to implement AP $d$ times, each time with ___v___ different trap circuits.
###Code
# Number of runs
d = 20
test_2 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
# Implement all these circuits
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_2.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_2.flag)
print('\nAfter',test_2.num_runs,'runs, AP has accepted',test_2.N_acc,'outputs!')
print('\nList of accepted outputs:\n', test_2.outputs)
###Output
Protocol run number 1 , outputs of the target accepted
Protocol run number 2 , outputs of the target accepted
Protocol run number 3 , outputs of the target accepted
Protocol run number 4 , outputs of the target accepted
Protocol run number 5 , outputs of the target accepted
Protocol run number 6 , outputs of the target accepted
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target accepted
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target accepted
Protocol run number 11 , outputs of the target accepted
Protocol run number 12 , outputs of the target accepted
Protocol run number 13 , outputs of the target accepted
Protocol run number 14 , outputs of the target accepted
Protocol run number 15 , outputs of the target accepted
Protocol run number 16 , outputs of the target accepted
Protocol run number 17 , outputs of the target accepted
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target accepted
Protocol run number 20 , outputs of the target accepted
After 20 runs, AP has accepted 20 outputs!
List of accepted outputs:
['0100', '0010', '1011', '1001', '0110', '1111', '1101', '0000', '1101', '0000', '0110', '1011', '0110', '1101', '0000', '0110', '1101', '0010', '1001', '1101', '0000']
###Markdown
The function $bound\_variation\_distance$ calculates the upper-bound on the variation distance (VD) using$$VD\leq \frac{\varepsilon}{N_{\textrm{acc}}/d-\theta}\textrm{ ,}$$where $\theta\in[0,1]$ is a positive number and$$\varepsilon= \frac{1.7}{v+1}$$is the maximum probability of accepting an incorrect state for the target.The function $bound\_variation\_distance$ also calculates the confidence in the bound as $$1-2\textrm{exp}\big(-2\theta d^2\big)$$
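As a quick sanity check of these formulas, we can plug in the values from the run above ($v=10$ traps, $d=20$ runs, all 20 runs accepted) together with $\theta=0.05$; the numbers below simply re-evaluate the two expressions and should match what the fitter reports in the next cell:
###Code
# Worked example of the bound and confidence formulas stated above
# (values taken from this notebook's run; theta = 0.05 is an assumed choice)
import numpy as np
v_check, d_check, theta_check, n_acc_check = 10, 20, 0.05, 20
epsilon_check = 1.7/(v_check + 1)
bound_check = epsilon_check/(n_acc_check/d_check - theta_check)
confidence_check = 1 - 2*np.exp(-2*theta_check*d_check**2)
print(bound_check, confidence_check)
###Output
_____no_output_____
###Markdown
The fitter computes the same quantities for us: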
###Code
theta = 5/100
test_2.bound_variation_distance(theta)
print("AP accepted",test_2.N_acc,"out of",test_2.num_runs,"times.")
print("With confidence",test_2.confidence,"AP certifies that VD is upper-bounded by",test_2.bound)
###Output
AP accepted 20 out of 20 times.
With confidence 1.0 AP certifies that VD is upper-bounded by 0.16267942583732053
###Markdown
Defining the noise model We define a noise model for the simulator. We add depolarizing error probabilities to the controlled-$Z$ and single-qubit gates.
###Code
noise_model = NoiseModel()
p1q = 0.003
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u1')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u3')
p2q = 0.03
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2q, 2), 'cx')
basis_gates = ['u1','u2','u3','cx']
###Output
_____no_output_____
###Markdown
We then implement noisy circuits and pass their outputs to $single\_protocol\_run$.
###Code
test_3 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_3.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_3.flag)
print("\nAP accepted",test_3.N_acc,"out of",test_3.num_runs,"times.")
print('\nList of accepted outputs:\n', test_3.outputs)
theta = 5/100
test_3.bound_variation_distance(theta)
print("\nWith confidence",test_3.confidence,"AP certifies that VD is upper-bounded by",test_3.bound)
###Output
Protocol run number 1 , outputs of the target rejected
Protocol run number 2 , outputs of the target rejected
Protocol run number 3 , outputs of the target rejected
Protocol run number 4 , outputs of the target rejected
Protocol run number 5 , outputs of the target rejected
Protocol run number 6 , outputs of the target rejected
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target rejected
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target rejected
Protocol run number 11 , outputs of the target rejected
Protocol run number 12 , outputs of the target rejected
Protocol run number 13 , outputs of the target accepted
Protocol run number 14 , outputs of the target rejected
Protocol run number 15 , outputs of the target rejected
Protocol run number 16 , outputs of the target rejected
Protocol run number 17 , outputs of the target rejected
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target rejected
Protocol run number 20 , outputs of the target rejected
AP accepted 4 out of 20 times.
List of accepted outputs:
['0100', '0010', '1011', '1001', '0110', '1111', '1101', '0000', '1101', '0000', '0110', '1011', '0110', '1101', '0000', '0110', '1101', '0010', '1001', '1101', '0000', '0100', '0110', '1111', '0100']
With confidence 1.0 AP certifies that VD is upper-bounded by 1
###Markdown
Changing the number of trap circuits per protocol run changes the upper-bound on the VD, but not the confidence.What number of trap circuits will ensure the minimal upper-bound for your target circuit?
###Code
min_traps = 4
max_traps = 10
for num_trap_circs in range(min_traps,max_traps):
test_4 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(num_trap_circs)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_4.single_protocol_run(result, postp_list, v_zero)
print("\nWith", num_trap_circs,
"traps, AP accepted", test_4.N_acc,
"out of", test_4.num_runs, "times.")
test_4.bound_variation_distance(theta)
print("With confidence", test_4.confidence,
"AP with", num_trap_circs,
"traps certifies that VD is upper-bounded by", test_4.bound)
###Output
With 4 traps, AP accepted 16 out of 20 times.
With confidence 1.0 AP with 4 traps certifies that VD is upper-bounded by 0.45333333333333337
With 5 traps, AP accepted 7 out of 20 times.
With confidence 1.0 AP with 5 traps certifies that VD is upper-bounded by 0.9444444444444444
With 6 traps, AP accepted 11 out of 20 times.
With confidence 1.0 AP with 6 traps certifies that VD is upper-bounded by 0.48571428571428577
With 7 traps, AP accepted 9 out of 20 times.
With confidence 1.0 AP with 7 traps certifies that VD is upper-bounded by 0.53125
With 8 traps, AP accepted 7 out of 20 times.
With confidence 1.0 AP with 8 traps certifies that VD is upper-bounded by 0.6296296296296298
With 9 traps, AP accepted 11 out of 20 times.
With confidence 1.0 AP with 9 traps certifies that VD is upper-bounded by 0.33999999999999986
###Markdown
Accreditation protocol Accreditation Protocol (AP) is a protocol devised to characterize the reliability of noisy quantum devices.Given a noisy quantum device implementing a "target" quantum circuit, AP certifies an upper-bound on the variation distance between the probability distribution of the outputs returned by the device and the ideal probability distribution.This method is based on Ferracin et al, "Accrediting outputs of noisy intermediate-scale quantum devices", https://arxiv.org/abs/1811.09709.This notebook gives an example for how to use the ignis.characterization.accreditation module. This particular example shows how to accredit the outputs of a 4-qubit quantum circuit of depth 5. All the circuits are run using the noisy Aer simulator.
###Code
#Import general libraries (needed for functions)
import numpy as np
from numpy import random
import qiskit
#Import Qiskit classes
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, execute
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error
#Import the accreditation functions.
from qiskit.ignis.verification.accreditation import AccreditationFitter,AccreditationCircuits
###Output
_____no_output_____
###Markdown
Input to the protocol AP can accredit the outputs of a __target circuit__ that1) Takes as input $n$ qubits in the state $|{0}>$2) Ends with single-qubit measurements in the Pauli-$Z$ basis3) Is made of $m$ "bands", each band containing a round of single-qubit gates and a round of controlled-$Z$ gates.The accreditation is made by employing __trap circuits__, circuits that can be efficiently simulated on a classical computer and that whose outputs are used to witness the correct functionality of the device.Let's now draw a target quantum circuit!We start with a simple circuit to generate and measure 4-qubits GHZ states.
###Code
# Create a Quantum Register with n_qb qubits.
q_reg = QuantumRegister(4, 'q')
# Create a Classical Register with n_qb bits.
c_reg = ClassicalRegister(4, 's')
# Create a Quantum Circuit acting on the q register
target_circuit = QuantumCircuit(q_reg, c_reg)
target_circuit.h(0)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.cz(0,1)
target_circuit.cz(0,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.cz(0,3)
target_circuit.cz(1,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.measure(q_reg, c_reg)
target_circuit.draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Generating accreditation circuits The function $accreditation\_circuits$ generates all the circuits required by AP, target and traps. It automatically appends random Pauli gates to the circuits (if the implementation is noisy, these random Pauli gates reduce the noise to Pauli errors ! ) It also returns the list $postp\_list$ of strings required to post-process the outputs, as well as the number $v\_zero$ indicating the circuit implementing the target.This is the target circuit with randomly chosen Pauli gates:
###Code
accsys = AccreditationCircuits(target_circuit)
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
This is how a trap looks like:
###Code
circ_list[(v_zero+1)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
One can use the optional twoqubitgate arguement to switch use cx instead of cz gates and can arbitrarily change the coupling map, in order to compile to the desired device topology (which in this case might lead to more layers than expected).
###Code
accsys.target_circuit(target_circuit, two_qubit_gate='cx', coupling_map=[[0,1],[1,2],[2,3]] )
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Simulate the ideal circuits Let's implement AP.We use $accreditation\_circuits$ to generate target and trap circuits.Then, we use the function $single\_protocol\_run$ to implement all these circuits, keeping the output of the target only if all of the traps return the correct output.
###Code
simulator = qiskit.Aer.get_backend('qasm_simulator')
test_1 = AccreditationFitter()
# Create target and trap circuits with random Pauli gates
accsys = AccreditationCircuits(target_circuit)
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_1.single_protocol_run(result, postp_list, v_zero)
print("Outputs of the target: ",test_1.outputs," , AP",test_1.flag,"these outputs!")
###Output
Outputs of the target: ['0100'] , AP accepted these outputs!
###Markdown
In the absence of noise, all traps return the expected output, therefore we always accept the output of the target.To obtain an upper-bound on the variation distance on the outputs of the target circuit, we need to implement AP $d$ times, each time with ___v___ different trap circuits.
###Code
# Number of runs
d = 20
test_2 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
# Implement all these circuits
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_2.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_2.flag)
print('\nAfter',test_2.num_runs,'runs, AP has accepted',test_2.N_acc,'outputs!')
print('\nList of accepted outputs:\n', test_2.outputs)
###Output
Protocol run number 1 , outputs of the target accepted
Protocol run number 2 , outputs of the target accepted
Protocol run number 3 , outputs of the target accepted
Protocol run number 4 , outputs of the target accepted
Protocol run number 5 , outputs of the target accepted
Protocol run number 6 , outputs of the target accepted
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target accepted
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target accepted
Protocol run number 11 , outputs of the target accepted
Protocol run number 12 , outputs of the target accepted
Protocol run number 13 , outputs of the target accepted
Protocol run number 14 , outputs of the target accepted
Protocol run number 15 , outputs of the target accepted
Protocol run number 16 , outputs of the target accepted
Protocol run number 17 , outputs of the target accepted
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target accepted
Protocol run number 20 , outputs of the target accepted
After 20 runs, AP has accepted 20 outputs!
List of accepted outputs:
['0100', '0010', '1011', '1001', '0110', '1111', '1101', '0000', '1101', '0000', '0110', '1011', '0110', '1101', '0000', '0110', '1101', '0010', '1001', '1101', '0000']
###Markdown
The function $bound\_variation\_distance$ calculates the upper-bound on the variation distance (VD) using$$VD\leq \frac{\varepsilon}{N_{\textrm{acc}}/d-\theta}\textrm{ ,}$$where $\theta\in[0,1]$ is a positive number and$$\varepsilon= \frac{1.7}{v+1}$$is the maximum probability of accepting an incorrect state for the target.The function $bound\_variation\_distance$ also calculates the confidence in the bound as $$1-2\textrm{exp}\big(-2\theta d^2\big)$$
###Code
theta = 5/100
test_2.bound_variation_distance(theta)
print("AP accepted",test_2.N_acc,"out of",test_2.num_runs,"times.")
print("With confidence",test_2.confidence,"AP certifies that VD is upper-bounded by",test_2.bound)
###Output
AP accepted 20 out of 20 times.
With confidence 1.0 AP certifies that VD is upper-bounded by 0.16267942583732053
###Markdown
Defining the noise model We define a noise model for the simulator. We add depolarizing error probabilities to the controlled-$Z$ and single-qubit gates.
###Code
noise_model = NoiseModel()
p1q = 0.003
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u1')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u3')
p2q = 0.03
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2q, 2), 'cx')
basis_gates = ['u1','u2','u3','cx']
###Output
_____no_output_____
###Markdown
We then implement noisy circuits and pass their outputs to $single\_protocol\_run$.
###Code
test_3 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_3.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_3.flag)
print("\nAP accepted",test_3.N_acc,"out of",test_3.num_runs,"times.")
print('\nList of accepted outputs:\n', test_3.outputs)
theta = 5/100
test_3.bound_variation_distance(theta)
print("\nWith confidence",test_3.confidence,"AP certifies that VD is upper-bounded by",test_3.bound)
###Output
Protocol run number 1 , outputs of the target rejected
Protocol run number 2 , outputs of the target rejected
Protocol run number 3 , outputs of the target rejected
Protocol run number 4 , outputs of the target rejected
Protocol run number 5 , outputs of the target rejected
Protocol run number 6 , outputs of the target rejected
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target rejected
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target rejected
Protocol run number 11 , outputs of the target rejected
Protocol run number 12 , outputs of the target rejected
Protocol run number 13 , outputs of the target accepted
Protocol run number 14 , outputs of the target rejected
Protocol run number 15 , outputs of the target rejected
Protocol run number 16 , outputs of the target rejected
Protocol run number 17 , outputs of the target rejected
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target rejected
Protocol run number 20 , outputs of the target rejected
AP accepted 4 out of 20 times.
List of accepted outputs:
['0100', '0010', '1011', '1001', '0110', '1111', '1101', '0000', '1101', '0000', '0110', '1011', '0110', '1101', '0000', '0110', '1101', '0010', '1001', '1101', '0000', '0100', '0110', '1111', '0100']
With confidence 1.0 AP certifies that VD is upper-bounded by 1
###Markdown
Changing the number of trap circuits per protocol run changes the upper-bound on the VD, but not the confidence.What number of trap circuits will ensure the minimal upper-bound for your target circuit?
###Code
min_traps = 4
max_traps = 10
for num_trap_circs in range(min_traps,max_traps):
test_4 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(num_trap_circs)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_4.single_protocol_run(result, postp_list, v_zero)
print("\nWith", num_trap_circs,
"traps, AP accepted", test_4.N_acc,
"out of", test_4.num_runs, "times.")
test_4.bound_variation_distance(theta)
print("With confidence", test_4.confidence,
"AP with", num_trap_circs,
"traps certifies that VD is upper-bounded by", test_4.bound)
###Output
With 4 traps, AP accepted 16 out of 20 times.
With confidence 1.0 AP with 4 traps certifies that VD is upper-bounded by 0.45333333333333337
With 5 traps, AP accepted 7 out of 20 times.
With confidence 1.0 AP with 5 traps certifies that VD is upper-bounded by 0.9444444444444444
With 6 traps, AP accepted 11 out of 20 times.
With confidence 1.0 AP with 6 traps certifies that VD is upper-bounded by 0.48571428571428577
With 7 traps, AP accepted 9 out of 20 times.
With confidence 1.0 AP with 7 traps certifies that VD is upper-bounded by 0.53125
With 8 traps, AP accepted 7 out of 20 times.
With confidence 1.0 AP with 8 traps certifies that VD is upper-bounded by 0.6296296296296298
With 9 traps, AP accepted 11 out of 20 times.
With confidence 1.0 AP with 9 traps certifies that VD is upper-bounded by 0.33999999999999986
###Markdown
Accreditation protocol **Accreditation Protocol (AP)** is a protocol devised to characterize the reliability of noisy quantum devices.Given a noisy quantum device implementing a "target" quantum circuit, AP certifies an upper-bound on the variation distance between the probability distribution of the outputs returned by the device and the ideal probability distribution.This method is based on Ferracin et al, "Accrediting outputs of noisy intermediate-scale quantum devices", https://arxiv.org/abs/1811.09709.This notebook gives an example for how to use the ``ignis.characterization.accreditation`` module. This particular example shows how to accredit the outputs of a 4-qubit quantum circuit of depth 5. All the circuits are run using the noisy Aer simulator.
###Code
#Import general libraries (needed for functions)
import numpy as np
from numpy import random
import qiskit
#Import Qiskit classes
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, execute
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error
#Import the accreditation functions.
from qiskit.ignis.verification.accreditation import AccreditationFitter,AccreditationCircuits
###Output
_____no_output_____
###Markdown
Input to the protocol AP can accredit the outputs of a __target circuit__ that1) Takes as input $n$ qubits in the state $|{0}>$2) Ends with single-qubit measurements in the Pauli-$Z$ basis3) Is made of $m$ "bands", each band containing a round of single-qubit gates and a round of controlled-$Z$ gates.The accreditation is made by employing __trap circuits__, circuits that can be efficiently simulated on a classical computer and that whose outputs are used to witness the correct functionality of the device.Let's now draw a target quantum circuit!We start with a simple circuit to generate and measure 4-qubits GHZ states.
###Code
# Create a Quantum Register with n_qb qubits.
q_reg = QuantumRegister(4, 'q')
# Create a Classical Register with n_qb bits.
c_reg = ClassicalRegister(4, 's')
# Create a Quantum Circuit acting on the q register
target_circuit = QuantumCircuit(q_reg, c_reg)
target_circuit.h(0)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.cz(0,1)
target_circuit.cz(0,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.cz(0,3)
target_circuit.cz(1,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.measure(q_reg, c_reg)
target_circuit.draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Generating accreditation circuits The function $accreditation\_circuits$ generates all the circuits required by AP, target and traps. It automatically appends random Pauli gates to the circuits (if the implementation is noisy, these random Pauli gates reduce the noise to Pauli errors ! ) It also returns the list $postp\_list$ of strings required to post-process the outputs, as well as the number $v\_zero$ indicating the circuit implementing the target.This is the target circuit with randomly chosen Pauli gates:
###Code
accsys = AccreditationCircuits(target_circuit)
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
This is how a trap looks like:
###Code
circ_list[(v_zero+1)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
One can use the optional twoqubitgate argument to switch use cx instead of cz gates and can arbitrarily change the coupling map, in order to compile to the desired device topology (which in this case might lead to more layers than expected).
###Code
accsys.target_circuit(target_circuit, two_qubit_gate='cx', coupling_map=[[0,1],[1,2],[2,3]] )
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Simulate the ideal circuits Let's implement AP.We use $accreditation\_circuits$ to generate target and trap circuits.Then, we use the function $single\_protocol\_run$ to implement all these circuits, keeping the output of the target only if all of the traps return the correct output.
###Code
simulator = qiskit.Aer.get_backend('qasm_simulator')
test_1 = AccreditationFitter()
# Create target and trap circuits with random Pauli gates
accsys = AccreditationCircuits(target_circuit)
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_1.single_protocol_run(result, postp_list, v_zero)
print("Outputs of the target: ",test_1.outputs," , AP",test_1.flag,"these outputs!")
###Output
Outputs of the target: ['0100'] , AP accepted these outputs!
###Markdown
In the absence of noise, all traps return the expected output, therefore we always accept the output of the target.To obtain an upper-bound on the variation distance on the outputs of the target circuit, we need to implement AP $d$ times, each time with ___v___ different trap circuits.
###Code
# Number of runs
d = 20
test_2 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
# Implement all these circuits
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_2.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_2.flag)
print('\nAfter',test_2.num_runs,'runs, AP has accepted',test_2.N_acc,'outputs!')
print('\nList of accepted outputs:\n', test_2.outputs)
###Output
Protocol run number 1 , outputs of the target accepted
Protocol run number 2 , outputs of the target accepted
Protocol run number 3 , outputs of the target accepted
Protocol run number 4 , outputs of the target accepted
Protocol run number 5 , outputs of the target accepted
Protocol run number 6 , outputs of the target accepted
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target accepted
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target accepted
Protocol run number 11 , outputs of the target accepted
Protocol run number 12 , outputs of the target accepted
Protocol run number 13 , outputs of the target accepted
Protocol run number 14 , outputs of the target accepted
Protocol run number 15 , outputs of the target accepted
Protocol run number 16 , outputs of the target accepted
Protocol run number 17 , outputs of the target accepted
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target accepted
Protocol run number 20 , outputs of the target accepted
After 20 runs, AP has accepted 20 outputs!
List of accepted outputs:
['0100', '0010', '1011', '1001', '0110', '1111', '1101', '0000', '1101', '0000', '0110', '1011', '0110', '1101', '0000', '0110', '1101', '0010', '1001', '1101', '0000']
###Markdown
The function $bound\_variation\_distance$ calculates the upper-bound on the variation distance (VD) using$$VD\leq \frac{\varepsilon}{N_{\textrm{acc}}/d-\theta}\textrm{ ,}$$where $\theta\in[0,1]$ is a positive number and$$\varepsilon= \frac{1.7}{v+1}$$is the maximum probability of accepting an incorrect state for the target.The function $bound\_variation\_distance$ also calculates the confidence in the bound as $$1-2\textrm{exp}\big(-2\theta d^2\big)$$
###Code
theta = 5/100
test_2.bound_variation_distance(theta)
print("AP accepted",test_2.N_acc,"out of",test_2.num_runs,"times.")
print("With confidence",test_2.confidence,"AP certifies that VD is upper-bounded by",test_2.bound)
###Output
AP accepted 20 out of 20 times.
With confidence 1.0 AP certifies that VD is upper-bounded by 0.16267942583732053
###Markdown
Defining the noise model We define a noise model for the simulator. We add depolarizing error probabilities to the controlled-$Z$ and single-qubit gates.
###Code
noise_model = NoiseModel()
p1q = 0.003
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u1')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u3')
p2q = 0.03
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2q, 2), 'cx')
basis_gates = ['u1','u2','u3','cx']
###Output
_____no_output_____
###Markdown
We then implement noisy circuits and pass their outputs to $single\_protocol\_run$.
###Code
test_3 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_3.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_3.flag)
print("\nAP accepted",test_3.N_acc,"out of",test_3.num_runs,"times.")
print('\nList of accepted outputs:\n', test_3.outputs)
theta = 5/100
test_3.bound_variation_distance(theta)
print("\nWith confidence",test_3.confidence,"AP certifies that VD is upper-bounded by",test_3.bound)
###Output
Protocol run number 1 , outputs of the target rejected
Protocol run number 2 , outputs of the target rejected
Protocol run number 3 , outputs of the target rejected
Protocol run number 4 , outputs of the target rejected
Protocol run number 5 , outputs of the target rejected
Protocol run number 6 , outputs of the target rejected
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target rejected
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target rejected
Protocol run number 11 , outputs of the target rejected
Protocol run number 12 , outputs of the target rejected
Protocol run number 13 , outputs of the target accepted
Protocol run number 14 , outputs of the target rejected
Protocol run number 15 , outputs of the target rejected
Protocol run number 16 , outputs of the target rejected
Protocol run number 17 , outputs of the target rejected
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target rejected
Protocol run number 20 , outputs of the target rejected
AP accepted 4 out of 20 times.
List of accepted outputs:
['0100', '0010', '1011', '1001', '0110', '1111', '1101', '0000', '1101', '0000', '0110', '1011', '0110', '1101', '0000', '0110', '1101', '0010', '1001', '1101', '0000', '0100', '0110', '1111', '0100']
With confidence 1.0 AP certifies that VD is upper-bounded by 1
###Markdown
Changing the number of trap circuits per protocol run changes the upper-bound on the VD, but not the confidence. What number of trap circuits will ensure the minimal upper-bound for your target circuit?
###Code
min_traps = 4
max_traps = 10
for num_trap_circs in range(min_traps,max_traps):
test_4 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(num_trap_circs)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_4.single_protocol_run(result, postp_list, v_zero)
print("\nWith", num_trap_circs,
"traps, AP accepted", test_4.N_acc,
"out of", test_4.num_runs, "times.")
test_4.bound_variation_distance(theta)
print("With confidence", test_4.confidence,
"AP with", num_trap_circs,
"traps certifies that VD is upper-bounded by", test_4.bound)
###Output
With 4 traps, AP accepted 16 out of 20 times.
With confidence 1.0 AP with 4 traps certifies that VD is upper-bounded by 0.45333333333333337
With 5 traps, AP accepted 7 out of 20 times.
With confidence 1.0 AP with 5 traps certifies that VD is upper-bounded by 0.9444444444444444
With 6 traps, AP accepted 11 out of 20 times.
With confidence 1.0 AP with 6 traps certifies that VD is upper-bounded by 0.48571428571428577
With 7 traps, AP accepted 9 out of 20 times.
With confidence 1.0 AP with 7 traps certifies that VD is upper-bounded by 0.53125
With 8 traps, AP accepted 7 out of 20 times.
With confidence 1.0 AP with 8 traps certifies that VD is upper-bounded by 0.6296296296296298
With 9 traps, AP accepted 11 out of 20 times.
With confidence 1.0 AP with 9 traps certifies that VD is upper-bounded by 0.33999999999999986
###Markdown
Accreditation protocol
Accreditation Protocol (AP) is a protocol devised to characterize the reliability of noisy quantum devices. Given a noisy quantum device implementing a "target" quantum circuit, AP certifies an upper-bound on the variation distance between the probability distribution of the outputs returned by the device and the ideal probability distribution. This method is based on Ferracin et al., "Accrediting outputs of noisy intermediate-scale quantum devices", https://arxiv.org/abs/1811.09709.
This notebook gives an example of how to use the ignis.verification.accreditation module. This particular example shows how to accredit the outputs of a 4-qubit quantum circuit of depth 5. All the circuits are run using the noisy Aer simulator.
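Here "variation distance" is the usual total variation distance $VD(p,q)=\tfrac{1}{2}\sum_{x}|p(x)-q(x)|$ between the noisy output distribution $p$ and the ideal one $q$.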
###Code
#Import general libraries (needed for functions)
import numpy as np
from numpy import random
import qiskit
#Import Qiskit classes
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, execute
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error
#Import the accreditation functions.
from qiskit.ignis.verification.accreditation import AccreditationFitter,AccreditationCircuits
###Output
/opt/miniconda3/lib/python3.7/site-packages/qiskit_aqua-0.7.0-py3.7.egg/qiskit/aqua/operators/primitive_ops/pauli_op.py:25: DeprecationWarning: The module qiskit.extensions.standard is deprecated as of 0.14.0 and will be removed no earlier than 3 months after the release. You should import the standard gates from qiskit.circuit.library.standard_gates instead.
from qiskit.extensions.standard import RZGate, RYGate, RXGate, XGate, YGate, ZGate, IGate
###Markdown
Input to the protocol
AP can accredit the outputs of a __target circuit__ that
1) Takes as input $n$ qubits in the state $|{0}\rangle$
2) Ends with single-qubit measurements in the Pauli-$Z$ basis
3) Is made of $m$ "bands", each band containing a round of single-qubit gates and a round of controlled-$Z$ gates.

The accreditation is made by employing __trap circuits__, circuits that can be efficiently simulated on a classical computer and whose outputs are used to witness the correct functionality of the device.
Let's now draw a target quantum circuit! We start with a simple circuit to generate and measure 4-qubit GHZ states.
###Code
# Create a Quantum Register with n_qb qubits.
q_reg = QuantumRegister(4, 'q')
# Create a Classical Register with n_qb bits.
c_reg = ClassicalRegister(4, 's')
# Create a Quantum Circuit acting on the q register
target_circuit = QuantumCircuit(q_reg, c_reg)
target_circuit.h(0)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.cz(0,1)
target_circuit.cz(0,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.cz(0,3)
target_circuit.cz(1,2)
target_circuit.h(1)
target_circuit.h(2)
target_circuit.h(3)
target_circuit.measure(q_reg, c_reg)
target_circuit.draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Generating accreditation circuits
The function $accreditation\_circuits$ generates all the circuits required by AP, target and traps. It automatically appends random Pauli gates to the circuits (if the implementation is noisy, these random Pauli gates reduce the noise to Pauli errors!). It also returns the list $postp\_list$ of strings required to post-process the outputs, as well as the number $v\_zero$ indicating the circuit implementing the target.
This is the target circuit with randomly chosen Pauli gates:
###Code
accsys = AccreditationCircuits(target_circuit)
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
This is what a trap looks like:
###Code
circ_list[(v_zero+1)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
One can use the optional two_qubit_gate argument to use cx instead of cz gates, and can arbitrarily change the coupling map in order to compile to the desired device topology (which in this case might lead to more layers than expected).
###Code
accsys.target_circuit(target_circuit, two_qubit_gate='cx', coupling_map=[[0,1],[1,2],[2,3]] )
v = 10
circ_list, postp_list, v_zero = accsys.generate_circuits(v)
circ_list[(v_zero)%(v+1)].draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Simulate the ideal circuits
Let's implement AP. We use $accreditation\_circuits$ to generate target and trap circuits. Then, we use the function $single\_protocol\_run$ to implement all these circuits, keeping the output of the target only if all of the traps return the correct output.
###Code
simulator = qiskit.Aer.get_backend('qasm_simulator')
test_1 = AccreditationFitter()
# Create target and trap circuits with random Pauli gates
accsys = AccreditationCircuits(target_circuit)
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_1.single_protocol_run(result, postp_list, v_zero)
print("Outputs of the target: ",test_1.outputs," , AP",test_1.flag,"these outputs!")
###Output
Outputs of the target: ['0100'] , AP accepted these outputs!
###Markdown
In the absence of noise, all traps return the expected output, so we always accept the output of the target. To obtain an upper-bound on the variation distance for the outputs of the target circuit, we need to implement AP $d$ times, each time with ___v___ different trap circuits.
###Code
# Number of runs
d = 20
test_2 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
# Implement all these circuits
job = execute(circuit_list,
simulator,
shots=1)
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_2.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_2.flag)
print('\nAfter',test_2.num_runs,'runs, AP has accepted',test_2.N_acc,'outputs!')
print('\nList of accepted outputs:\n', test_2.outputs)
###Output
Protocol run number 1 , outputs of the target accepted
Protocol run number 2 , outputs of the target accepted
Protocol run number 3 , outputs of the target accepted
Protocol run number 4 , outputs of the target accepted
Protocol run number 5 , outputs of the target accepted
Protocol run number 6 , outputs of the target accepted
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target accepted
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target accepted
Protocol run number 11 , outputs of the target accepted
Protocol run number 12 , outputs of the target accepted
Protocol run number 13 , outputs of the target accepted
Protocol run number 14 , outputs of the target accepted
Protocol run number 15 , outputs of the target accepted
Protocol run number 16 , outputs of the target accepted
Protocol run number 17 , outputs of the target accepted
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target accepted
Protocol run number 20 , outputs of the target accepted
After 20 runs, AP has accepted 20 outputs!
List of accepted outputs:
['0100', '1011', '0000', '0000', '1011', '0010', '0100', '0000', '1111', '1001', '0010', '0110', '1111', '0110', '0110', '1001', '1101', '0100', '1001', '0010', '0100']
###Markdown
The function $bound\_variation\_distance$ calculates the upper-bound on the variation distance (VD) using$$VD\leq \frac{\varepsilon}{N_{\textrm{acc}}/d-\theta}\textrm{ ,}$$where $\theta\in[0,1]$ is a positive number and$$\varepsilon= \frac{1.7}{v+1}$$is the maximum probability of accepting an incorrect state for the target.The function $bound\_variation\_distance$ also calculates the confidence in the bound as $$1-2\textrm{exp}\big(-2\theta d^2\big)$$
###Code
theta = 5/100
test_2.bound_variation_distance(theta)
print("AP accepted",test_2.N_acc,"out of",test_2.num_runs,"times.")
print("With confidence",test_2.confidence,"AP certifies that VD is upper-bounded by",test_2.bound)
###Output
AP accepted 20 out of 20 times.
With confidence 1.0 AP certifies that VD is upper-bounded by 0.16267942583732053
###Markdown
Defining the noise model We define a noise model for the simulator. We add depolarizing error probabilities to the controlled-$Z$ and single-qubit gates.
###Code
noise_model = NoiseModel()
p1q = 0.003
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u1')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u3')
p2q = 0.03
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2q, 2), 'cx')
basis_gates = ['u1','u2','u3','cx']
###Output
_____no_output_____
###Markdown
We then implement noisy circuits and pass their outputs to $single\_protocol\_run$.
###Code
test_3 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(v)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_3.single_protocol_run(result, postp_list, v_zero)
print("Protocol run number",run+1,", outputs of the target",test_3.flag)
print("\nAP accepted",test_3.N_acc,"out of",test_3.num_runs,"times.")
print('\nList of accepted outputs:\n', test_3.outputs)
theta = 5/100
test_3.bound_variation_distance(theta)
print("\nWith confidence",test_3.confidence,"AP certifies that VD is upper-bounded by",test_3.bound)
###Output
Protocol run number 1 , outputs of the target accepted
Protocol run number 2 , outputs of the target accepted
Protocol run number 3 , outputs of the target rejected
Protocol run number 4 , outputs of the target rejected
Protocol run number 5 , outputs of the target rejected
Protocol run number 6 , outputs of the target rejected
Protocol run number 7 , outputs of the target accepted
Protocol run number 8 , outputs of the target rejected
Protocol run number 9 , outputs of the target accepted
Protocol run number 10 , outputs of the target rejected
Protocol run number 11 , outputs of the target accepted
Protocol run number 12 , outputs of the target rejected
Protocol run number 13 , outputs of the target rejected
Protocol run number 14 , outputs of the target rejected
Protocol run number 15 , outputs of the target rejected
Protocol run number 16 , outputs of the target rejected
Protocol run number 17 , outputs of the target rejected
Protocol run number 18 , outputs of the target accepted
Protocol run number 19 , outputs of the target accepted
Protocol run number 20 , outputs of the target rejected
AP accepted 7 out of 20 times.
List of accepted outputs:
['0100', '1011', '0000', '0000', '1011', '0010', '0100', '0000', '1111', '1001', '0010', '0110', '1111', '0110', '0110', '1001', '1101', '0100', '1001', '0010', '0100', '1001', '1011', '0010', '0000', '0100', '0100', '0110']
With confidence 1.0 AP certifies that VD is upper-bounded by 0.5151515151515151
###Markdown
Changing the number of trap circuits per protocol run changes the upper-bound on the VD, but not the confidence. What number of trap circuits will ensure the minimal upper-bound for your target circuit?
###Code
min_traps = 4
max_traps = 10
for num_trap_circs in range(min_traps,max_traps):
test_4 = AccreditationFitter()
for run in range(d):
# Create target and trap circuits with random Pauli gates
circuit_list, postp_list, v_zero = accsys.generate_circuits(num_trap_circs)
job = execute(circuit_list,
simulator,
noise_model=noise_model,
basis_gates=basis_gates,
shots=1,
backend_options={'max_parallel_experiments': 0})
result = job.result()
# Post-process the outputs and see if the protocol accepts
test_4.single_protocol_run(result, postp_list, v_zero)
print("\nWith", num_trap_circs,
"traps, AP accepted", test_4.N_acc,
"out of", test_4.num_runs, "times.")
test_4.bound_variation_distance(theta)
print("With confidence", test_4.confidence,
"AP with", num_trap_circs,
"traps certifies that VD is upper-bounded by", test_4.bound)
###Output
With 4 traps, AP accepted 15 out of 20 times.
With confidence 1.0 AP with 4 traps certifies that VD is upper-bounded by 0.48571428571428577
With 5 traps, AP accepted 11 out of 20 times.
With confidence 1.0 AP with 5 traps certifies that VD is upper-bounded by 0.5666666666666667
|
assignments/2019/assignment2/TensorFlow.ipynb | ###Markdown
What's this TensorFlow business?You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you choose to work with that notebook). What is it?TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropogation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray. Why?* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. * We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) * We want you to be exposed to the sort of deep learning code you might run into in academia or industry. How will I learn TensorFlow?TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.**NOTE: This notebook is meant to teach you the latest version of Tensorflow 2.0. Most examples on the web today are still in 1.x, so be careful not to confuse the two when looking up documentation**. Install Tensorflow 2.0Tensorflow 2.0 is still not in a fully 100% stable release, but it's still usable and more intuitive than TF 1.x. Please make sure you have it installed before moving on in this notebook! Here are some steps to get started:1. Have the latest version of Anaconda installed on your machine.2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `tf_20_env`.3. Run the command: `source activate tf_20_env`4. Then pip install TF 2.0 as described here: https://www.tensorflow.org/install/pip A guide on creating Anaconda enviornments: https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/This will give you an new enviornemnt to play in TF 2.0. Generally, if you plan to also use TensorFlow in your other projects, you might also want to keep a seperate Conda environment or virtualenv in Python 3.7 that has Tensorflow 1.9, so you can switch back and forth at will. Table of ContentsThis notebook has 5 parts. We will walk through TensorFlow at **three different levels of abstraction**, which should help you better understand it and prepare you for working on your project.1. Part I, Preparation: load the CIFAR-10 dataset.2. Part II, Barebone TensorFlow: **Abstraction Level 1**, we will work directly with low-level TensorFlow graphs. 3. Part III, Keras Model API: **Abstraction Level 2**, we will use `tf.keras.Model` to define arbitrary neural network architecture. 4. 
Part IV, Keras Sequential + Functional API: **Abstraction Level 3**, we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently, and then explore the functional libraries for building unique and uncommon models that require more flexibility.5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features. We will discuss Keras in more detail later in the notebook.Here is a table of comparison:| API | Flexibility | Convenience ||---------------|-------------|-------------|| Barebone | High | Low || `tf.keras.Model` | High | Medium || `tf.keras.Sequential` | Low | High | Part I: PreparationFirst, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets.For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
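For reference, a minimal `tf.data` version of the same minibatching might look like the sketch below; it is only an illustration (it is not used in this assignment) and it assumes the preprocessed arrays produced by `load_cifar10()` in the next cell.
###Code
# A minimal sketch of a tf.data input pipeline (not used in this assignment).
# Assumes X_train and y_train are the preprocessed NumPy arrays returned by
# load_cifar10() in the next cell, and that tensorflow has been imported as tf.
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train))
train_ds = train_ds.shuffle(buffer_size=49000).batch(64)
for x_batch, y_batch in train_ds.take(2):
    print(x_batch.shape, y_batch.shape)  # expect (64, 32, 32, 3) (64,)
###Output
_____no_output_____
###Markdown
For this assignment, though, we stick with the simple NumPy-based `Dataset` class defined in the next cell.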
###Code
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# If there are errors with SSL downloading involving self-signed certificates,
# it may be that your Python version was recently installed on the current machine.
# See: https://github.com/tensorflow/tensorflow/issues/10779
# To fix, run the command: /Applications/Python\ 3.7/Install\ Certificates.command
# ...replacing paths as necessary.
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
return iter((self.X[i:i+B], self.y[i:i+B]) for i in range(0, N, B))
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
###Output
0 (64, 32, 32, 3) (64,)
1 (64, 32, 32, 3) (64,)
2 (64, 32, 32, 3) (64,)
3 (64, 32, 32, 3) (64,)
4 (64, 32, 32, 3) (64,)
5 (64, 32, 32, 3) (64,)
6 (64, 32, 32, 3) (64,)
###Markdown
You can optionally **use a GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
###Code
# Set up some global variables
USE_GPU = False
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
###Output
Using device: /cpu:0
###Markdown
Part II: Barebones TensorFlowTensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.**"Barebones Tensorflow" is important to understanding the building blocks of TensorFlow, but much of it involves concepts from TensorFlow 1.x.** We will be working with legacy modules such as `tf.Variable`.Therefore, please read and understand the differences between legacy (1.x) TF and the new (2.0) TF. Historical background on TensorFlow 1.xTensorFlow 1.x is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.Before Tensorflow 2.0, we had to configure the graph into two phases. There are plenty of tutorials online that explain this two-step process. The process generally looks like the following for TF 1.x:1. **Build a computational graph that describes the computation that you want to perform**. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.2. **Run the computational graph many times.** Each time the graph is run (e.g. for one gradient descent step) you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph. The new paradigm in Tensorflow 2.0Now, with Tensorflow 2.0, we can simply adopt a functional form that is more Pythonic and similar in spirit to PyTorch and direct Numpy operation. Instead of the 2-step paradigm with computation graphs, making it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.The main difference between the TF 1.x and 2.0 approach is that the 2.0 approach doesn't make use of `tf.Session`, `tf.run`, `placeholder`, `feed_dict`. To get more details of what's different between the two version and how to convert between the two, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guideLater, in the rest of this notebook we'll focus on this new, simpler approach. TensorFlow warmup: Flatten FunctionWe can see this in action by defining a simple `flatten` function that will reshape image data for use in a fully-connected network.In TensorFlow, data for convolutional feature maps is typically stored in a Tensor of shape N x H x W x C where:- N is the number of datapoints (minibatch size)- H is the height of the feature map- W is the width of the feature map- C is the number of channels in the feature mapThis is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. 
So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. Notice the `tf.reshape` call has the target shape as `(N, -1)`, meaning it will reshape/keep the first dimension to be N, and then infer as necessary what the second dimension is in the output, so we can collapse the remaining dimensions from the input properly.**NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
###Code
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Compute a concrete output value.
x_flat_np = flatten(x_np)
print('x_flat_np:\n', x_flat_np, '\n')
test_flatten()
###Output
x_np:
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
x_flat_np:
tf.Tensor(
[[ 0 1 2 3 4 5 6 7 8 9 10 11]
[12 13 14 15 16 17 18 19 20 21 22 23]], shape=(2, 12), dtype=int64)
###Markdown
Barebones TensorFlow: Define a Two-Layer Network
We will now implement our first neural network with TensorFlow: a two-layer fully-connected ReLU network (a single hidden layer) with no biases, on the CIFAR-10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.
We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. After defining the network architecture in the `two_layer_fc` function, we will test the implementation by checking the shape of the output.
**It's important that you read and understand this implementation.**
###Code
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
hidden_layer_size = 42
# Scoping our TF operations under a tf.device context manager
# lets us tell TensorFlow where we want these Tensors to be
# multiplied and/or operated on, e.g. on a CPU or a GPU.
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
print(scores.shape)
two_layer_fc_test()
###Output
(64, 10)
###Markdown
Barebones TensorFlow: Three-Layer ConvNet
Here you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:
1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for `C` classes.

**HINT**: For convolutions: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/conv2d; be careful with padding!
**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
###Code
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described above.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be?
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be?
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# print('x.shape = ', x.shape)
# print('conv_w1.shape = ', conv_w1.shape)
# print('conv_b1.shape = ', conv_b1.shape)
# print('conv_w2.shape = ', conv_w2.shape)
# print('conv_b2.shape = ', conv_b2.shape)
# print('fc_w.shape = ', fc_w.shape)
# print('fc_b.shape = ', fc_b.shape)
padding1 = tf.constant([[0,0],[2,2],[2,2],[0,0]])
padding2 = tf.constant([[0,0],[1,1],[1,1],[0,0]])
x = tf.pad(x, padding1, 'CONSTANT')
l1 = tf.nn.conv2d(x, conv_w1, strides=[1,1,1,1], padding='VALID') + conv_b1
# print('l1 conv shape = ', l1.shape)
l1 = tf.nn.relu(l1)
# print('l1 relu shape = ', l1.shape)
l1 = tf.pad(l1, padding2, 'CONSTANT')
l2 = tf.nn.conv2d(l1, conv_w2, strides=[1,1,1,1], padding='VALID') + conv_b2
l2 = tf.nn.relu(l2)
l2 = flatten(l2)
scores = tf.matmul(l2, fc_w) + fc_b
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
###Output
_____no_output_____
###Markdown
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we run the graph on a batch of zeros just to make sure the function doesn't crash and produces outputs of the correct shape. When you run this function, `scores_np` should have shape `(64, 10)`.
###Code
def three_layer_convnet_test():
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
print('scores_np has shape: ', scores.shape)
three_layer_convnet_test()
###Output
scores_np has shape: (64, 10)
###Markdown
Barebones TensorFlow: Training Step
We now define the `training_step` function, which performs a single training step. This will take three basic steps:
1. Compute the loss
2. Compute the gradient of the loss with respect to all network weights
3. Make a weight update step using (stochastic) gradient descent.

We need to use a few new TensorFlow functions to do all of this:
- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits
- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/reduce_mean
- For computing gradients of the loss with respect to the weights we'll use `tf.GradientTape` (useful for Eager execution): https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape
- We'll mutate the weight values stored in a TensorFlow Tensor using `tf.assign_sub` ("sub" is for subtraction): https://www.tensorflow.org/api_docs/python/tf/assign_sub
###Code
def training_step(model_fn, x, y, params, learning_rate):
with tf.GradientTape() as tape:
scores = model_fn(x, params) # Forward pass of the model
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
total_loss = tf.reduce_mean(loss)
grad_params = tape.gradient(total_loss, params)
# Make a vanilla gradient descent step on all of the model parameters
# Manually update the weights using assign_sub()
for w, grad_w in zip(params, grad_params):
w.assign_sub(learning_rate * grad_w)
return total_loss
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
params = init_fn() # Initialize the model parameters
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data.
loss = training_step(model_fn, x_np, y_np, params, learning_rate)
# Periodically print the loss and check accuracy on the val set.
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss))
check_accuracy(val_dset, x_np, model_fn, params)
def check_accuracy(dset, x, model_fn, params):
"""
Check accuracy on a classification model, e.g. for validation.
Inputs:
- dset: A Dataset object against which to check accuracy
- x: A TensorFlow placeholder Tensor where input images should be fed
- model_fn: the Model we will be calling to make predictions on x
- params: parameters for the model_fn to work with
Returns: Nothing, but prints the accuracy of the model
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
scores_np = model_fn(x_batch, params).numpy()
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Initialization
We'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method [1] (He et al., *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852).
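Concretely, each weight entry is drawn as $W_{ij}\sim\mathcal{N}\big(0,\,2/\textrm{fan}_{\textrm{in}}\big)$, i.e. a standard normal sample scaled by $\sqrt{2/\textrm{fan}_{\textrm{in}}}$, which is exactly what the helper below does.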
###Code
def create_matrix_with_kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.keras.backend.random_normal(shape) * np.sqrt(2.0 / fan_in)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Train a Two-Layer NetworkWe are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.We just need to define a function to initialize the weights of the model, and call `train_part2`.Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.You don't need to tune any hyperparameters, but you should achieve validation accuracies above 40% after one epoch of training.
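As a quick illustration of how a `tf.Variable` can be mutated in place (a minimal sketch, not part of the assignment):
###Code
# A tiny sketch of mutating a tf.Variable in place.
v = tf.Variable(1.0)
v.assign_sub(0.3)     # v now holds 0.7; training_step above uses the same assign_sub update
print(v.numpy())
###Output
_____no_output_____
###Markdown
The next cell defines the weights of the two-layer network as `tf.Variable`s and trains it.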
###Code
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns: A list of:
- w1: TensorFlow tf.Variable giving the weights for the first layer
- w2: TensorFlow tf.Variable giving the weights for the second layer
"""
hidden_layer_size = 4000
w1 = tf.Variable(create_matrix_with_kaiming_normal((3 * 32 * 32, 4000)))
w2 = tf.Variable(create_matrix_with_kaiming_normal((4000, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
###Output
Iteration 0, loss = 3.4696
Got 148 / 1000 correct (14.80%)
Iteration 100, loss = 1.9044
Got 366 / 1000 correct (36.60%)
Iteration 200, loss = 1.4232
Got 391 / 1000 correct (39.10%)
Iteration 300, loss = 1.8703
Got 383 / 1000 correct (38.30%)
Iteration 400, loss = 1.7725
Got 424 / 1000 correct (42.40%)
Iteration 500, loss = 1.7954
Got 438 / 1000 correct (43.80%)
Iteration 600, loss = 1.7827
Got 427 / 1000 correct (42.70%)
Iteration 700, loss = 1.9754
Got 445 / 1000 correct (44.50%)
###Markdown
Barebones TensorFlow: Train a three-layer ConvNet
We will now use TensorFlow to train a three-layer ConvNet on CIFAR-10. You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:
1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes

You don't need to do any hyperparameter tuning, but you should see validation accuracies above 43% after one epoch of training.
###Code
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow tf.Variable giving weights for the first conv layer
- conv_b1: TensorFlow tf.Variable giving biases for the first conv layer
- conv_w2: TensorFlow tf.Variable giving weights for the second conv layer
- conv_b2: TensorFlow tf.Variable giving biases for the second conv layer
- fc_w: TensorFlow tf.Variable giving weights for the fully-connected layer
- fc_b: TensorFlow tf.Variable giving biases for the fully-connected layer
"""
params = None
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
conv_w1 = tf.Variable(create_matrix_with_kaiming_normal((5,5,3,32)))
conv_b1 = tf.Variable(np.zeros([32]), dtype=tf.float32 )
conv_w2 = tf.Variable(create_matrix_with_kaiming_normal((3,3, 32,16)))
conv_b2 = tf.Variable(np.zeros([16]), dtype=tf.float32 )
fc_w = tf.Variable(create_matrix_with_kaiming_normal((32*32*16,10)))
fc_b = tf.Variable(np.zeros([10]), dtype=tf.float32 )
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
###Output
_____no_output_____
###Markdown
Part III: Keras Model Subclassing APIImplementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters. This was fine for a small network, but could quickly become unweildy for a large complex model.Fortunately TensorFlow 2.0 provides higher-level APIs such as `tf.keras` which make it easy to build models out of modular, object-oriented layers. Further, TensorFlow 2.0 uses eager execution that evaluates operations immediately, without explicitly constructing any computational graphs. This makes it easy to write and debug models, and reduces the boilerplate code.In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.keras.layers` package provides many common neural-network layers, like `tf.keras.layers.Dense` for fully-connected layers and `tf.keras.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super(YourModelName, self).__init__()` as the first line in your initializer!3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II. Keras Model Subclassing API: Two-Layer NetworkHere is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.initializers.VarianceScaling` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/initializers/VarianceScalingWe construct `tf.keras.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layer can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation='relu'` to the constructor; the second layer uses softmax activation function. Finally, we use `tf.keras.layers.Flatten` to flatten the output from the previous fully-connected layer.
###Code
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super(TwoLayerFC, self).__init__()
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.fc1 = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)
self.fc2 = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
def call(self, x, training=False):
x = self.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
x = tf.zeros((64, input_size))
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_TwoLayerFC()
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Three-Layer ConvNet
Now it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:
1. Convolutional layer with 5 x 5 kernels, with zero-padding of 2
2. ReLU nonlinearity
3. Convolutional layer with 3 x 3 kernels, with zero-padding of 1
4. ReLU nonlinearity
5. Fully-connected layer to give class scores
6. Softmax nonlinearity

You should initialize the weights of your network using the same initialization method as was used in the two-layer network above.
**Hint**: Refer to the documentation for `tf.keras.layers.Conv2D` and `tf.keras.layers.Dense`:
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Conv2D
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense
###Code
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super(ThreeLayerConvNet, self).__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
initializer = tf.initializers.VarianceScaling(scale=2.0)
        # Use the same Kaiming-style initializer for the conv layers, as the instructions above require
        self.conv1 = tf.keras.layers.Conv2D(filters=channel_1, kernel_size=(5,5), activation='relu', input_shape=(32,32,3), padding="valid", kernel_initializer=initializer)
        self.conv2 = tf.keras.layers.Conv2D(filters=channel_2, kernel_size=(3,3), activation='relu', kernel_initializer=initializer)
self.fc = tf.keras.layers.Dense(num_classes, activation='softmax', kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=False):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
padding1 = tf.constant([[0,0],[2,2],[2,2],[0,0]])
padding2 = tf.constant([[0,0],[1,1],[1,1],[0,0]])
x = tf.pad(x, padding1, 'CONSTANT')
x = self.conv1(x)
x = tf.pad(x, padding2, 'CONSTANT')
x = self.conv2(x)
x = self.flatten(x)
scores = self.fc(x)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
###Output
_____no_output_____
###Markdown
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
###Code
def test_ThreeLayerConvNet():
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
x = tf.zeros((64, 3, 32, 32))
scores = model(x)
print(scores.shape)
test_ThreeLayerConvNet()
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Eager Training
While Keras models have a built-in training loop (using `model.fit`), sometimes you need more customization. Here's an example of a training loop implemented with eager execution.
In particular, notice `tf.GradientTape`. Automatic differentiation is used in the backend for implementing backpropagation in frameworks like TensorFlow. During eager execution, `tf.GradientTape` is used to trace operations for computing gradients later. A particular `tf.GradientTape` can only compute one gradient; subsequent calls to the tape will throw a runtime error.
TensorFlow 2.0 ships with easy-to-use built-in metrics under the `tf.keras.metrics` module. Each metric is an object, and we can use `update_state()` to add observations and `reset_states()` to clear all observations. We can get the current result of a metric by calling `result()` on the metric object.
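As a small, standalone sketch of the metrics API (illustrative values only):
###Code
# A minimal sketch of tf.keras.metrics usage (hypothetical values).
acc = tf.keras.metrics.SparseCategoricalAccuracy()
acc.update_state([0, 1], [[0.9, 0.1], [0.2, 0.8]])  # two observations, both predicted correctly
print(acc.result().numpy())  # expect 1.0
acc.reset_states()           # clear all observations
###Output
_____no_output_____
###Markdown
The training loop below uses the same pattern: update the metrics every iteration and reset them at the start of each epoch.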
###Code
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1, is_training=False):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
    Returns: Nothing, but prints progress during training
"""
with tf.device(device):
# Compute the loss like we did in Part II
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
model = model_init_fn()
optimizer = optimizer_init_fn()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
val_loss = tf.keras.metrics.Mean(name='val_loss')
val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='val_accuracy')
t = 0
for epoch in range(num_epochs):
# Reset the metrics - https://www.tensorflow.org/alpha/guide/migration_guide#new-style_metrics
train_loss.reset_states()
train_accuracy.reset_states()
for x_np, y_np in train_dset:
with tf.GradientTape() as tape:
# Use the model function to build the forward pass.
scores = model(x_np, training=is_training)
loss = loss_fn(y_np, scores)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Update the metrics
train_loss.update_state(loss)
train_accuracy.update_state(y_np, scores)
if t % print_every == 0:
val_loss.reset_states()
val_accuracy.reset_states()
for test_x, test_y in val_dset:
# During validation at end of epoch, training set to False
prediction = model(test_x, training=False)
t_loss = loss_fn(test_y, prediction)
val_loss.update_state(t_loss)
val_accuracy.update_state(test_y, prediction)
template = 'Iteration {}, Epoch {}, Loss: {}, Accuracy: {}, Val Loss: {}, Val Accuracy: {}'
print (template.format(t, epoch+1,
train_loss.result(),
train_accuracy.result()*100,
val_loss.result(),
val_accuracy.result()*100))
t += 1
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Train a Two-Layer NetworkWe can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.keras.optimizers.SGD` function; you can [read about it here](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD).You don't need to tune any hyperparameters here, but you should achieve validation accuracies above 40% after one epoch of training.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return TwoLayerFC(hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Train a Three-Layer ConvNetHere you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.To train the model you should use gradient descent with Nesterov momentum 0.9. **HINT**: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGDYou don't need to perform any hyperparameter tuning, but you should achieve validation accuracies above 50% after training for one epoch.
###Code
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn():
model = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9, nesterov=True)  # Nesterov momentum, as specified above
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
Part IV: Keras Sequential APIIn Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.However for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` of the first layer in your model. Keras Sequential API: Two-Layer NetworkIn this subsection, we will rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
learning_rate = 1e-2
def model_init_fn():
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(hidden_layer_size, activation='relu',
kernel_initializer=initializer),
tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
Abstracting Away the Training LoopIn the previous examples, we used a customized training loop to train models (e.g. `train_part34`). Writing your own training loop is only required if you need more flexibility and control while training your model. Alternatively, you can use built-in APIs like `tf.keras.Model.fit()` and `tf.keras.Model.evaluate()` to train and evaluate a model. Also remember to configure your model for training by calling `tf.keras.Model.compile()`.You don't need to perform any hyperparameter tuning here, but you should see validation and test accuracies above 42% after training for one epoch.
###Code
model = model_init_fn()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
###Output
_____no_output_____
###Markdown
Keras Sequential API: Three-Layer ConvNetHere you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:1. Convolutional layer with 32 5x5 kernels, using zero padding of 22. ReLU nonlinearity3. Convolutional layer with 16 3x3 kernels, using zero padding of 14. ReLU nonlinearity5. Fully-connected layer giving class scores6. Softmax nonlinearityYou should initialize the weights of the model using a `tf.initializers.VarianceScaling` as above.You should train the model using Nesterov momentum 0.9.You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
###Code
def model_init_fn():
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
initializer = tf.initializers.VarianceScaling(scale=2.0)
depth1, depth2 = 32, 16
ker_size1, ker_size2 = (5,5), (3,3)
num_classes = 10
input_shape=(32,32,3)
layers = [
tf.keras.layers.InputLayer(input_shape=input_shape),
tf.keras.layers.Conv2D(filters=depth1, kernel_size=ker_size1, padding='same', activation = 'relu',
kernel_initializer=initializer),
# tf.keras.layers.ZeroPadding2D((1,1)),
tf.keras.layers.Conv2D(filters=depth2, kernel_size=ker_size2, padding='same', activation = 'relu',
kernel_initializer=initializer),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(num_classes, activation='softmax', kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9, nesterov=True)  # Nesterov momentum 0.9, as specified above
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
We will also train this model with the built-in training loop APIs provided by TensorFlow.
###Code
model = model_init_fn()
model.compile(optimizer=optimizer_init_fn(),
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
###Output
_____no_output_____
###Markdown
Part IV: Functional API Demonstration with a Two-Layer Network In the previous section, we saw how we can use `tf.keras.Sequential` to stack layers to quickly build simple models. But this comes at the cost of losing flexibility.Often we will have to write complex models that have non-sequential data flows: a layer can have **multiple inputs and/or outputs**, such as stacking the output of 2 previous layers together to feed as input to a third! (Some examples are residual connections and dense blocks.)In such cases, we can use Keras functional API to write models with complex topologies such as: 1. Multi-input models 2. Multi-output models 3. Models with shared layers (the same layer called several times) 4. Models with non-sequential data flows (e.g. residual connections)Writing a model with Functional API requires us to create a `tf.keras.Model` instance and explicitly write input tensors and output tensors for this model.
###Code
def two_layer_fc_functional(input_shape, hidden_size, num_classes):
initializer = tf.initializers.VarianceScaling(scale=2.0)
inputs = tf.keras.Input(shape=input_shape)
flattened_inputs = tf.keras.layers.Flatten()(inputs)
fc1_output = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)(flattened_inputs)
scores = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)(fc1_output)
# Instantiate the model given inputs and outputs.
model = tf.keras.Model(inputs=inputs, outputs=scores)
return model
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
input_shape = (50,)
x = tf.zeros((64, input_size))
model = two_layer_fc_functional(input_shape, hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_two_layer_fc_functional()
###Output
_____no_output_____
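###Markdown
To illustrate the non-sequential data flows mentioned above (purely illustrative, not required for this assignment), here is a minimal sketch of a residual connection written with the functional API; the layer sizes are arbitrary choices for the example.
###Code
# Sketch: a tiny residual block, something tf.keras.Sequential cannot express.
inputs = tf.keras.Input(shape=(32, 32, 16))
x = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
x = tf.keras.layers.Conv2D(16, 3, padding='same')(x)
x = tf.keras.layers.Add()([x, inputs])           # skip (residual) connection
outputs = tf.keras.layers.Activation('relu')(x)
residual_block = tf.keras.Model(inputs=inputs, outputs=outputs)
residual_block.summary()
###Output
_____no_output_____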
###Markdown
Keras Functional API: Train a Two-Layer NetworkYou can now train this two-layer network constructed using the functional API.You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
input_shape = (32, 32, 3)
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return two_layer_fc_functional(input_shape, hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
Part V: CIFAR-10 open-ended challengeIn this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the built-in train function, the `train_part34` function from above, or implement your own training loop.Describe what you did at the end of the notebook. Some things you can try:- **Filter size**: Above we used 5x5 and 3x3; is this optimal?- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?- **Pooling**: We didn't use any pooling above. Would this improve the model?- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout? NOTE: Batch Normalization / DropoutIf you are using Batch Normalization and Dropout, remember to pass `is_training=True` if you use the `train_part34()` function. BatchNorm and Dropout layers have different behaviors at training and inference time. `training` is a specific keyword argument reserved for this purpose in any `tf.keras.Model`'s `call()` function. Read more about this here : https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/BatchNormalizationmethodshttps://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dropoutmethods Tips for trainingFor each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind: - If the parameters are working well, you should see improvement within a few hundred iterations- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyondIf you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.- Model ensembles- Data augmentation- New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together. 
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
###Code
class CustomConvNet(tf.keras.Model):
def __init__(self):
super(CustomConvNet, self).__init__()
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.conv1 = tf.keras.layers.Conv2D(filters=channel_1, kernel_size=(5,5), activation='relu', input_shape=(32,32,3), padding="valid")
self.conv2 = tf.keras.layers.Conv2D(filters=channel_2, kernel_size=(3,3), activation='relu')
self.fc = tf.keras.layers.Dense(num_classes, activation='softmax', kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=False):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
padding1 = tf.constant([[0,0],[2,2],[2,2],[0,0]])
padding2 = tf.constant([[0,0],[1,1],[1,1],[0,0]])
x = tf.pad(x, padding1, 'CONSTANT')
x = self.conv1(x)
x = tf.pad(x, padding2, 'CONSTANT')
x = self.conv2(x)
x = self.flatten(x)
x = self.fc(x)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return x
# device = '/device:GPU:0' # Change this to a CPU/GPU as you wish!
device = '/cpu:0' # Change this to a CPU/GPU as you wish!
print_every = 700
num_epochs = 10
model = CustomConvNet()
def model_init_fn():
return CustomConvNet()
def optimizer_init_fn():
learning_rate = 1e-3
return tf.keras.optimizers.Adam(learning_rate)
train_part34(model_init_fn, optimizer_init_fn, num_epochs=num_epochs, is_training=True)
###Output
Iteration 0, Epoch 1, Loss: 2.3400373458862305, Accuracy: 7.8125, Val Loss: 2.9617037773132324, Val Accuracy: 10.0
Iteration 700, Epoch 1, Loss: 1.413520336151123, Accuracy: 50.49705505371094, Val Loss: 1.1875035762786865, Val Accuracy: 58.20000076293945
Iteration 1400, Epoch 2, Loss: 1.0429129600524902, Accuracy: 63.917320251464844, Val Loss: 1.1137940883636475, Val Accuracy: 62.0
Iteration 2100, Epoch 3, Loss: 0.8839472532272339, Accuracy: 69.85116577148438, Val Loss: 1.1555709838867188, Val Accuracy: 63.0
Iteration 2800, Epoch 4, Loss: 0.7628107070922852, Accuracy: 74.03392028808594, Val Loss: 1.2096624374389648, Val Accuracy: 60.900001525878906
Iteration 3500, Epoch 5, Loss: 0.668714165687561, Accuracy: 77.58509826660156, Val Loss: 1.2593955993652344, Val Accuracy: 61.400001525878906
Iteration 4200, Epoch 6, Loss: 0.5919941067695618, Accuracy: 80.41610717773438, Val Loss: 1.4657975435256958, Val Accuracy: 57.79999923706055
Iteration 4900, Epoch 7, Loss: 0.5167434811592102, Accuracy: 82.61270141601562, Val Loss: 1.6434582471847534, Val Accuracy: 58.20000076293945
Iteration 5600, Epoch 8, Loss: 0.459024041891098, Accuracy: 84.54498291015625, Val Loss: 1.7989230155944824, Val Accuracy: 53.20000076293945
Iteration 6300, Epoch 9, Loss: 0.4310932159423828, Accuracy: 84.62789154052734, Val Loss: 2.1613223552703857, Val Accuracy: 54.5
Iteration 7000, Epoch 10, Loss: 0.3747079372406006, Accuracy: 87.16413116455078, Val Loss: 2.216815710067749, Val Accuracy: 57.599998474121094
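###Markdown
As a hedged illustration of the `training` keyword stressed in the Part V notes above (an example sketch, not a prescribed architecture), the model below forwards the flag to BatchNorm and Dropout layers so they behave differently at training and inference time; every layer size here is an arbitrary choice.
###Code
# Sketch: forwarding the `training` flag so BatchNorm/Dropout switch modes correctly.
class TinyBNDropoutNet(tf.keras.Model):
    def __init__(self, num_classes=10):
        super(TinyBNDropoutNet, self).__init__()
        self.conv = tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu')
        self.bn = tf.keras.layers.BatchNormalization()
        self.drop = tf.keras.layers.Dropout(0.25)
        self.flatten = tf.keras.layers.Flatten()
        self.fc = tf.keras.layers.Dense(num_classes, activation='softmax')
    def call(self, x, training=False):
        x = self.conv(x)
        x = self.bn(x, training=training)    # batch statistics only while training
        x = self.drop(x, training=training)  # dropout disabled at inference
        x = self.flatten(x)
        return self.fc(x)
# This is why is_training=True matters when such a model is passed to train_part34:
# train_part34(lambda: TinyBNDropoutNet(), lambda: tf.keras.optimizers.Adam(1e-3),
#              num_epochs=1, is_training=True)
print(TinyBNDropoutNet()(tf.zeros((2, 32, 32, 3))).shape)
###Output
_____no_output_____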
###Markdown
What's this TensorFlow business?You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you choose to work with that notebook). What is it?TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray. Why?* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. * We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) * We want you to be exposed to the sort of deep learning code you might run into in academia or industry. How will I learn TensorFlow?TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.**NOTE: This notebook is meant to teach you the latest version of Tensorflow 2.0. Most examples on the web today are still in 1.x, so be careful not to confuse the two when looking up documentation**. Install Tensorflow 2.0Tensorflow 2.0 is still not in a fully 100% stable release, but it's usable and more intuitive than TF 1.x. Please make sure you have it installed before moving on in this notebook! Here are some steps to get started:1. Have the latest version of Anaconda installed on your machine.2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `tf_20_env`.3. Run the command: `source activate tf_20_env`4. Then pip install TF 2.0 as described here: https://www.tensorflow.org/install/pip A guide on creating Anaconda environments: https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/This will give you a new environment to play with TF 2.0. Generally, if you plan to also use TensorFlow in your other projects, you might also want to keep a separate Conda environment or virtualenv in Python 3.7 that has Tensorflow 1.9, so you can switch back and forth at will. Table of ContentsThis notebook has 5 parts. We will walk through TensorFlow at **three different levels of abstraction**, which should help you better understand it and prepare you for working on your project.1. Part I, Preparation: load the CIFAR-10 dataset.2. Part II, Barebones TensorFlow: **Abstraction Level 1**, we will work directly with low-level TensorFlow graphs. 3. Part III, Keras Model API: **Abstraction Level 2**, we will use `tf.keras.Model` to define arbitrary neural network architecture. 4. 
Part IV, Keras Sequential + Functional API: **Abstraction Level 3**, we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently, and then explore the functional libraries for building unique and uncommon models that require more flexibility.5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features. We will discuss Keras in more detail later in the notebook.Here is a table of comparison:| API | Flexibility | Convenience ||---------------|-------------|-------------|| Barebone | High | Low || `tf.keras.Model` | High | Medium || `tf.keras.Sequential` | Low | High | Part I: PreparationFirst, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets.For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
###Code
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# If there are errors with SSL downloading involving self-signed certificates,
# it may be that your Python version was recently installed on the current machine.
# See: https://github.com/tensorflow/tensorflow/issues/10779
# To fix, run the command: /Applications/Python\ 3.7/Install\ Certificates.command
# ...replacing paths as necessary.
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
return iter((self.X[i:i+B], self.y[i:i+B]) for i in range(0, N, B))
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
###Output
_____no_output_____
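###Markdown
As a hedged aside (not required for the assignment), the hand-rolled `Dataset` class above could be replaced by the `tf.data` pipeline mentioned in the Part I notes; a minimal sketch, assuming the same `X_train`/`y_train` arrays, is below.
###Code
# Sketch: a tf.data pipeline roughly equivalent to the Dataset class above.
train_ds = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
            .shuffle(buffer_size=10000)  # reshuffled each epoch by default
            .batch(64))
for t, (x, y) in enumerate(train_ds):
    print(t, x.shape, y.shape)
    if t > 5: break
###Output
_____no_output_____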
###Markdown
You can optionally **use a GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
###Code
# Set up some global variables
USE_GPU = True
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
###Output
_____no_output_____
###Markdown
Part II: Barebones TensorFlowTensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.**"Barebones Tensorflow" is important for understanding the building blocks of TensorFlow, but much of it involves concepts from TensorFlow 1.x.** We will be working with legacy modules such as `tf.Variable`.Therefore, please read and understand the differences between legacy (1.x) TF and the new (2.0) TF. Historical background on TensorFlow 1.xTensorFlow 1.x is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.Before Tensorflow 2.0, we had to work with the graph in two phases. There are plenty of tutorials online that explain this two-step process. The process generally looks like the following for TF 1.x:1. **Build a computational graph that describes the computation that you want to perform**. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.2. **Run the computational graph many times.** Each time the graph is run (e.g. for one gradient descent step) you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph. The new paradigm in Tensorflow 2.0Now, with Tensorflow 2.0, we can simply adopt a functional form that is more Pythonic and similar in spirit to PyTorch and direct Numpy operation. This replaces the 2-step paradigm with computation graphs, making it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.The main difference between the TF 1.x and 2.0 approach is that the 2.0 approach doesn't make use of `tf.Session`, `tf.run`, `placeholder`, or `feed_dict`. To get more details of what's different between the two versions and how to convert between the two, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guideLater, in the rest of this notebook we'll focus on this new, simpler approach.
So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. Notice the `tf.reshape` call has the target shape as `(N, -1)`, meaning it will reshape/keep the first dimension to be N, and then infer as necessary what the second dimension is in the output, so we can collapse the remaining dimensions from the input properly.**NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
###Code
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Compute a concrete output value.
x_flat_np = flatten(x_np)
print('x_flat_np:\n', x_flat_np, '\n')
test_flatten()
###Output
_____no_output_____
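###Markdown
Before moving on, here is a small, hedged illustration of the eager paradigm described above (values are arbitrary): operations are evaluated immediately and return concrete values, with no `tf.Session`, `placeholder`, or `feed_dict` involved.
###Code
# Eager execution: expressions are computed as they are written.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.ones((2, 2))
c = tf.matmul(a, b)   # computed right away, no graph or session needed
print(c)              # a concrete tf.Tensor with values
print(c.numpy())      # interoperates directly with NumPy
###Output
_____no_output_____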
###Markdown
Barebones TensorFlow: Define a Two-Layer NetworkWe will now implement our first neural network with TensorFlow: a fully-connected ReLU network with two layers (one hidden layer) and no biases, for the CIFAR-10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. After defining the network architecture in the `two_layer_fc` function, we will test the implementation by checking the shape of the output.**It's important that you read and understand this implementation.**
###Code
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
hidden_layer_size = 42
# Scoping our TF operations under a tf.device context manager
# lets us tell TensorFlow where we want these Tensors to be
# multiplied and/or operated on, e.g. on a CPU or a GPU.
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
print(scores.shape)
two_layer_fc_test()
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Three-Layer ConvNetHere you will complete the implementation of the function `three_layer_convnet` which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two2. ReLU nonlinearity3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one4. ReLU nonlinearity5. Fully-connected layer with bias, producing scores for `C` classes.**HINT**: For convolutions: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/conv2d; be careful with padding!**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
###Code
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described above.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be?
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be?
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
###Output
_____no_output_____
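###Markdown
Since the hint above warns about padding, here is a minimal, hedged sketch (not the assignment solution) of how `tf.nn.conv2d` accepts an explicit padding list in NHWC order, and how a per-channel bias broadcasts over the result; the shapes are arbitrary for the example.
###Code
# Sketch: explicit zero-padding and bias broadcasting with tf.nn.conv2d.
x = tf.zeros((2, 8, 8, 3))                    # NHWC minibatch
w = tf.zeros((5, 5, 3, 6))                    # KH x KW x C_in x C_out filters
b = tf.zeros((6,))                            # one bias per output channel
pad2 = [[0, 0], [2, 2], [2, 2], [0, 0]]       # pad height and width by 2, not N or C
out = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding=pad2) + b  # bias broadcasts
print(out.shape)                              # (2, 8, 8, 6): spatial size preserved
###Output
_____no_output_____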
###Markdown
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we run the model on a batch of zeros just to make sure the function doesn't crash and produces outputs of the correct shape.When you run this function, `scores_np` should have shape `(64, 10)`.
###Code
def three_layer_convnet_test():
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
print('scores_np has shape: ', scores.shape)
three_layer_convnet_test()
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Training StepWe now define the `training_step` function, which performs a single training step. This will take three basic steps:1. Compute the loss2. Compute the gradient of the loss with respect to all network weights3. Make a weight update step using (stochastic) gradient descent.We need to use a few new TensorFlow functions to do all of this:- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`:https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/reduce_mean- For computing gradients of the loss with respect to the weights we'll use `tf.GradientTape` (useful for Eager execution): https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape- We'll mutate the weight values stored in a TensorFlow Variable using its `assign_sub()` method ("sub" is for subtraction): https://www.tensorflow.org/api_docs/python/tf/assign_sub
###Code
def training_step(model_fn, x, y, params, learning_rate):
with tf.GradientTape() as tape:
scores = model_fn(x, params) # Forward pass of the model
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
total_loss = tf.reduce_mean(loss)
grad_params = tape.gradient(total_loss, params)
# Make a vanilla gradient descent step on all of the model parameters
# Manually update the weights using assign_sub()
for w, grad_w in zip(params, grad_params):
w.assign_sub(learning_rate * grad_w)
return total_loss
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
params = init_fn() # Initialize the model parameters
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data.
loss = training_step(model_fn, x_np, y_np, params, learning_rate)
# Periodically print the loss and check accuracy on the val set.
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss))
check_accuracy(val_dset, x_np, model_fn, params)
def check_accuracy(dset, x, model_fn, params):
"""
Check accuracy on a classification model, e.g. for validation.
Inputs:
- dset: A Dataset object against which to check accuracy
    - x: unused in eager mode (kept for signature compatibility); input batches come from dset
- model_fn: the Model we will be calling to make predictions on x
- params: parameters for the model_fn to work with
Returns: Nothing, but prints the accuracy of the model
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
scores_np = model_fn(x_batch, params).numpy()
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: InitializationWe'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.[1] He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
###Code
def create_matrix_with_kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.keras.backend.random_normal(shape) * np.sqrt(2.0 / fan_in)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Train a Two-Layer NetworkWe are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.We just need to define a function to initialize the weights of the model, and call `train_part2`.Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.You don't need to tune any hyperparameters, but you should achieve validation accuracies above 40% after one epoch of training.
###Code
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns: A list of:
- w1: TensorFlow tf.Variable giving the weights for the first layer
- w2: TensorFlow tf.Variable giving the weights for the second layer
"""
    hidden_layer_size = 4000
    w1 = tf.Variable(create_matrix_with_kaiming_normal((3 * 32 * 32, hidden_layer_size)))
    w2 = tf.Variable(create_matrix_with_kaiming_normal((hidden_layer_size, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
###Output
_____no_output_____
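###Markdown
As a small, hedged illustration of the `tf.Variable` behavior described above (values are arbitrary): a Variable keeps its state between operations and can be updated in place.
###Code
# A Variable persists and can be mutated in place, unlike a constant Tensor.
v = tf.Variable(tf.ones((2, 2)))
v.assign_sub(0.1 * tf.ones((2, 2)))  # in-place update, like one SGD step
print(v.numpy())                     # every entry is now 0.9
###Output
_____no_output_____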
###Markdown
Barebones TensorFlow: Train a three-layer ConvNetWe will now use TensorFlow to train a three-layer ConvNet on CIFAR-10.You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 22. ReLU3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 14. ReLU5. Fully-connected layer (with bias) to compute scores for 10 classesYou don't need to do any hyperparameter tuning, but you should see validation accuracies above 43% after one epoch of training.
###Code
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow tf.Variable giving weights for the first conv layer
- conv_b1: TensorFlow tf.Variable giving biases for the first conv layer
- conv_w2: TensorFlow tf.Variable giving weights for the second conv layer
- conv_b2: TensorFlow tf.Variable giving biases for the second conv layer
- fc_w: TensorFlow tf.Variable giving weights for the fully-connected layer
- fc_b: TensorFlow tf.Variable giving biases for the fully-connected layer
"""
params = None
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
###Output
_____no_output_____
###Markdown
Part III: Keras Model Subclassing APIImplementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters. This was fine for a small network, but could quickly become unwieldy for a large complex model.Fortunately TensorFlow 2.0 provides higher-level APIs such as `tf.keras` which make it easy to build models out of modular, object-oriented layers. Further, TensorFlow 2.0 uses eager execution that evaluates operations immediately, without explicitly constructing any computational graphs. This makes it easy to write and debug models, and reduces the boilerplate code.In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.keras.layers` package provides many common neural-network layers, like `tf.keras.layers.Dense` for fully-connected layers and `tf.keras.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super(YourModelName, self).__init__()` as the first line in your initializer!3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II. Keras Model Subclassing API: Two-Layer NetworkHere is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.initializers.VarianceScaling` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/initializers/VarianceScalingWe construct `tf.keras.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation='relu'` to the constructor; the second layer uses a softmax activation function. Finally, we use `tf.keras.layers.Flatten` to flatten the image input into a vector before it is passed to the first fully-connected layer.
###Code
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super(TwoLayerFC, self).__init__()
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.fc1 = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)
self.fc2 = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
def call(self, x, training=False):
x = self.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
x = tf.zeros((64, input_size))
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_TwoLayerFC()
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Three-Layer ConvNetNow it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:1. Convolutional layer with 5 x 5 kernels, with zero-padding of 22. ReLU nonlinearity3. Convolutional layer with 3 x 3 kernels, with zero-padding of 14. ReLU nonlinearity5. Fully-connected layer to give class scores6. Softmax nonlinearityYou should initialize the weights of your network using the same initialization method as was used in the two-layer network above.**Hint**: Refer to the documentation for `tf.keras.layers.Conv2D` and `tf.keras.layers.Dense`:https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Conv2Dhttps://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense
###Code
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super(ThreeLayerConvNet, self).__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=False):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
###Output
_____no_output_____
###Markdown
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
###Code
def test_ThreeLayerConvNet():
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
        x = tf.zeros((64, 32, 32, 3))  # NHWC: a batch of 32x32 RGB images
scores = model(x)
print(scores.shape)
test_ThreeLayerConvNet()
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Eager TrainingWhile Keras models have a built-in training loop (via `model.fit`), sometimes you need more customization. Here's an example of a training loop implemented with eager execution.In particular, notice `tf.GradientTape`. Automatic differentiation is used in the backend for implementing backpropagation in frameworks like TensorFlow. During eager execution, `tf.GradientTape` is used to trace operations for computing gradients later. A particular `tf.GradientTape` can only compute one gradient (unless it is created with `persistent=True`); subsequent calls to `tape.gradient()` will throw a runtime error. TensorFlow 2.0 ships with easy-to-use built-in metrics under the `tf.keras.metrics` module. Each metric is an object, and we can use `update_state()` to add observations and `reset_states()` to clear all observations. We can get the current result of a metric by calling `result()` on the metric object.
###Code
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1, is_training=False):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
    Returns: Nothing, but prints progress during training
"""
with tf.device(device):
# Compute the loss like we did in Part II
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
model = model_init_fn()
optimizer = optimizer_init_fn()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
val_loss = tf.keras.metrics.Mean(name='val_loss')
val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='val_accuracy')
t = 0
for epoch in range(num_epochs):
# Reset the metrics - https://www.tensorflow.org/alpha/guide/migration_guide#new-style_metrics
train_loss.reset_states()
train_accuracy.reset_states()
for x_np, y_np in train_dset:
with tf.GradientTape() as tape:
# Use the model function to build the forward pass.
scores = model(x_np, training=is_training)
loss = loss_fn(y_np, scores)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Update the metrics
train_loss.update_state(loss)
train_accuracy.update_state(y_np, scores)
if t % print_every == 0:
val_loss.reset_states()
val_accuracy.reset_states()
for test_x, test_y in val_dset:
# During validation at end of epoch, training set to False
prediction = model(test_x, training=False)
t_loss = loss_fn(test_y, prediction)
val_loss.update_state(t_loss)
val_accuracy.update_state(test_y, prediction)
template = 'Iteration {}, Epoch {}, Loss: {}, Accuracy: {}, Val Loss: {}, Val Accuracy: {}'
print (template.format(t, epoch+1,
train_loss.result(),
train_accuracy.result()*100,
val_loss.result(),
val_accuracy.result()*100))
t += 1
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Train a Two-Layer NetworkWe can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.keras.optimizers.SGD` function; you can [read about it here](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD).You don't need to tune any hyperparameters here, but you should achieve validation accuracies above 40% after one epoch of training.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return TwoLayerFC(hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Train a Three-Layer ConvNetHere you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.To train the model you should use gradient descent with Nesterov momentum 0.9. **HINT**: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGDYou don't need to perform any hyperparameter tuning, but you should achieve validation accuracies above 50% after training for one epoch.
###Code
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn():
model = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
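###Markdown
For reference, one possible way to complete the two TODOs above is sketched next. This is only a sketch, not the official solution: it assumes the `ThreeLayerConvNet` class defined in Part III and uses `tf.keras.optimizers.SGD` with Nesterov momentum 0.9, as suggested by the instructions.
###Code
# Sketch only: one possible completion of the TODOs above.
def model_init_fn():
    # 32 filters in the first conv layer, 16 in the second, 10 output classes.
    return ThreeLayerConvNet(channel_1, channel_2, num_classes)

def optimizer_init_fn():
    # Plain SGD with Nesterov momentum 0.9.
    return tf.keras.optimizers.SGD(learning_rate=learning_rate,
                                   momentum=0.9, nesterov=True)
###Output
_____no_output_____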
###Markdown
Part IV: Keras Sequential APIIn Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.However for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` of the first layer in your model. Keras Sequential API: Two-Layer NetworkIn this subsection, we will rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
learning_rate = 1e-2
def model_init_fn():
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(hidden_layer_size, activation='relu',
kernel_initializer=initializer),
tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
Abstracting Away the Training LoopIn the previous examples, we used a custom training loop to train models (e.g. `train_part34`). Writing your own training loop is only required if you need more flexibility and control while training your model. Alternatively, you can also use built-in APIs like `tf.keras.Model.fit()` and `tf.keras.Model.evaluate()` to train and evaluate a model. Also remember to configure your model for training by calling `tf.keras.Model.compile()`.You don't need to perform any hyperparameter tuning here, but you should see validation and test accuracies above 42% after training for one epoch.
###Code
model = model_init_fn()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
###Output
_____no_output_____
###Markdown
Keras Sequential API: Three-Layer ConvNetHere you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:1. Convolutional layer with 32 5x5 kernels, using zero padding of 22. ReLU nonlinearity3. Convolutional layer with 16 3x3 kernels, using zero padding of 14. ReLU nonlinearity5. Fully-connected layer giving class scores6. Softmax nonlinearityYou should initialize the weights of the model using a `tf.initializers.VarianceScaling` as above.You should train the model using Nesterov momentum 0.9.You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
###Code
def model_init_fn():
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
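###Markdown
For reference, a possible `tf.keras.Sequential` version of this ConvNet is sketched below. It is not the only correct answer; the `_example` suffixes are just to avoid shadowing the functions you define in the cell above.
###Code
# Sketch only: a possible Sequential three-layer ConvNet (32 5x5 filters,
# then 16 3x3 filters, then a softmax classifier).
def model_init_fn_example():
    initializer = tf.initializers.VarianceScaling(scale=2.0)
    layers = [
        tf.keras.layers.Conv2D(32, (5, 5), padding='same', activation='relu',
                               kernel_initializer=initializer,
                               input_shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu',
                               kernel_initializer=initializer),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax',
                              kernel_initializer=initializer),
    ]
    return tf.keras.Sequential(layers)

def optimizer_init_fn_example():
    # SGD with Nesterov momentum 0.9, as the instructions above suggest.
    return tf.keras.optimizers.SGD(learning_rate=learning_rate,
                                   momentum=0.9, nesterov=True)
###Output
_____no_output_____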
###Markdown
We will also train this model with the built-in training loop APIs provided by TensorFlow.
###Code
model = model_init_fn()
model.compile(optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
###Output
_____no_output_____
###Markdown
Part IV: Functional API Demonstration with a Two-Layer Network In the previous section, we saw how we can use `tf.keras.Sequential` to stack layers to quickly build simple models. But this comes at the cost of losing flexibility.Often we will have to write complex models that have non-sequential data flows: a layer can have **multiple inputs and/or outputs**, such as stacking the output of 2 previous layers together to feed as input to a third! (Some examples are residual connections and dense blocks.)In such cases, we can use Keras functional API to write models with complex topologies such as: 1. Multi-input models 2. Multi-output models 3. Models with shared layers (the same layer called several times) 4. Models with non-sequential data flows (e.g. residual connections)Writing a model with Functional API requires us to create a `tf.keras.Model` instance and explicitly write input tensors and output tensors for this model.
###Code
def two_layer_fc_functional(input_shape, hidden_size, num_classes):
initializer = tf.initializers.VarianceScaling(scale=2.0)
inputs = tf.keras.Input(shape=input_shape)
flattened_inputs = tf.keras.layers.Flatten()(inputs)
fc1_output = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)(flattened_inputs)
scores = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)(fc1_output)
# Instantiate the model given inputs and outputs.
model = tf.keras.Model(inputs=inputs, outputs=scores)
return model
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
input_shape = (50,)
x = tf.zeros((64, input_size))
model = two_layer_fc_functional(input_shape, hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_two_layer_fc_functional()
###Output
_____no_output_____
###Markdown
Keras Functional API: Train a Two-Layer NetworkYou can now train this two-layer network constructed using the functional API.You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
input_shape = (32, 32, 3)
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return two_layer_fc_functional(input_shape, hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
_____no_output_____
###Markdown
Part V: CIFAR-10 open-ended challengeIn this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the built-in train function, the `train_part34` function from above, or implement your own training loop.Describe what you did at the end of the notebook. Some things you can try:- **Filter size**: Above we used 5x5 and 3x3; is this optimal?- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?- **Pooling**: We didn't use any pooling above. Would this improve the model?- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout? NOTE: Batch Normalization / DropoutIf you are using Batch Normalization and Dropout, remember to pass `is_training=True` if you use the `train_part34()` function. BatchNorm and Dropout layers have different behaviors at training and inference time. `training` is a specific keyword argument reserved for this purpose in any `tf.keras.Model`'s `call()` function. Read more about this here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/BatchNormalization#methods https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dropout#methods Tips for trainingFor each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple of important things to keep in mind: - If the parameters are working well, you should see improvement within a few hundred iterations- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyondIf you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.- Model ensembles- Data augmentation- New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together. 
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
###Code
class CustomConvNet(tf.keras.Model):
def __init__(self):
super(CustomConvNet, self).__init__()
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
def call(self, input_tensor, training=False):
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return x
device = '/device:GPU:0' # Change this to a CPU/GPU as you wish!
# device = '/cpu:0' # Change this to a CPU/GPU as you wish!
print_every = 700
num_epochs = 10
model = CustomConvNet()
def model_init_fn():
return CustomConvNet()
def optimizer_init_fn():
learning_rate = 1e-3
return tf.keras.optimizers.Adam(learning_rate)
train_part34(model_init_fn, optimizer_init_fn, num_epochs=num_epochs, is_training=True)
###Output
_____no_output_____
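###Markdown
For reference, a rough starting point for the open-ended challenge is sketched below. It is only one of many reasonable designs (conv/batch-norm/ReLU blocks with max pooling, global average pooling, and dropout); the class name, layer sizes, and dropout rate are arbitrary choices for illustration, not values required by the assignment, and the model is not tuned.
###Code
# Sketch only: an untuned example architecture, not the required solution.
class ExampleConvNet(tf.keras.Model):
    def __init__(self, num_classes=10):
        super(ExampleConvNet, self).__init__()
        initializer = tf.initializers.VarianceScaling(scale=2.0)
        self.conv1 = tf.keras.layers.Conv2D(32, (3, 3), padding='same',
                                            kernel_initializer=initializer)
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.conv2 = tf.keras.layers.Conv2D(64, (3, 3), padding='same',
                                            kernel_initializer=initializer)
        self.bn2 = tf.keras.layers.BatchNormalization()
        self.pool = tf.keras.layers.MaxPool2D((2, 2))
        self.gap = tf.keras.layers.GlobalAveragePooling2D()
        self.dropout = tf.keras.layers.Dropout(0.25)
        self.fc = tf.keras.layers.Dense(num_classes, activation='softmax',
                                        kernel_initializer=initializer)

    def call(self, x, training=False):
        # Pass `training` through so BatchNorm and Dropout switch behavior
        # between training and inference, as discussed above.
        x = tf.nn.relu(self.bn1(self.conv1(x), training=training))
        x = self.pool(x)
        x = tf.nn.relu(self.bn2(self.conv2(x), training=training))
        x = self.pool(x)
        x = self.gap(x)
        x = self.dropout(x, training=training)
        return self.fc(x)
###Output
_____no_output_____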
###Markdown
What's this TensorFlow business?You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you choose to work with that notebook). What is it?TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray. Why?* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. * We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) * We want you to be exposed to the sort of deep learning code you might run into in academia or industry. How will I learn TensorFlow?TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.**NOTE: This notebook is meant to teach you the latest version of Tensorflow 2.0. Most examples on the web today are still in 1.x, so be careful not to confuse the two when looking up documentation**. Install Tensorflow 2.0Tensorflow 2.0 is still not in a fully 100% stable release, but it's still usable and more intuitive than TF 1.x. Please make sure you have it installed before moving on in this notebook! Here are some steps to get started:1. Have the latest version of Anaconda installed on your machine.2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `tf_20_env`.3. Run the command: `source activate tf_20_env`4. Then pip install TF 2.0 as described here: https://www.tensorflow.org/install/pip A guide on creating Anaconda environments: https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/This will give you a new environment to play with TF 2.0. Generally, if you plan to also use TensorFlow in your other projects, you might also want to keep a separate Conda environment or virtualenv in Python 3.7 that has Tensorflow 1.9, so you can switch back and forth at will. Table of ContentsThis notebook has 5 parts. We will walk through TensorFlow at **three different levels of abstraction**, which should help you better understand it and prepare you for working on your project.1. Part I, Preparation: load the CIFAR-10 dataset.2. Part II, Barebone TensorFlow: **Abstraction Level 1**, we will work directly with low-level TensorFlow graphs. 3. Part III, Keras Model API: **Abstraction Level 2**, we will use `tf.keras.Model` to define arbitrary neural network architecture. 4. 
Part IV, Keras Sequential + Functional API: **Abstraction Level 3**, we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently, and then explore the functional libraries for building unique and uncommon models that require more flexibility.5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features. We will discuss Keras in more detail later in the notebook.Here is a table of comparison:| API | Flexibility | Convenience ||---------------|-------------|-------------|| Barebone | High | Low || `tf.keras.Model` | High | Medium || `tf.keras.Sequential` | Low | High | Part I: PreparationFirst, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets.For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
###Code
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
print(tf.__version__) # need tf 2.0
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# If there are errors with SSL downloading involving self-signed certificates,
# it may be that your Python version was recently installed on the current machine.
# See: https://github.com/tensorflow/tensorflow/issues/10779
# To fix, run the command: /Applications/Python\ 3.7/Install\ Certificates.command
# ...replacing paths as necessary.
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
# Index through idxs so that shuffle=True actually permutes the data order
return iter((self.X[idxs[i:i+B]], self.y[idxs[i:i+B]]) for i in range(0, N, B))
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
###Output
0 (64, 32, 32, 3) (64,)
1 (64, 32, 32, 3) (64,)
2 (64, 32, 32, 3) (64,)
3 (64, 32, 32, 3) (64,)
4 (64, 32, 32, 3) (64,)
5 (64, 32, 32, 3) (64,)
6 (64, 32, 32, 3) (64,)
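###Markdown
As noted in the Part I introduction, the `tf.data` package can build a more efficient input pipeline than the simple `Dataset` class above. Purely for reference, here is a minimal sketch of an equivalent pipeline; it is not used elsewhere in this notebook.
###Code
# Sketch only: an input pipeline equivalent to train_dset, built with tf.data.
train_dset_tf = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
                 .shuffle(buffer_size=10000)
                 .batch(64))
for t, (x, y) in enumerate(train_dset_tf):
    print(t, x.shape, y.shape)
    if t > 2: break
###Output
_____no_output_____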
###Markdown
You can optionally **use GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
###Code
# Set up some global variables
USE_GPU = True
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
###Output
Using device: /device:GPU:0
###Markdown
Part II: Barebones TensorFlowTensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.**"Barebones Tensorflow" is important to understanding the building blocks of TensorFlow, but much of it involves concepts from TensorFlow 1.x.** We will be working with legacy modules such as `tf.Variable`.Therefore, please read and understand the differences between legacy (1.x) TF and the new (2.0) TF. Historical background on TensorFlow 1.xTensorFlow 1.x is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.Before Tensorflow 2.0, we had to work with the graph in two phases. There are plenty of tutorials online that explain this two-step process. The process generally looks like the following for TF 1.x:1. **Build a computational graph that describes the computation that you want to perform**. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.2. **Run the computational graph many times.** Each time the graph is run (e.g. for one gradient descent step) you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph. The new paradigm in Tensorflow 2.0Now, with Tensorflow 2.0, we can simply adopt a functional form that is more Pythonic and similar in spirit to PyTorch and direct Numpy operation. This replaces the 2-step paradigm with computation graphs, making it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.The main difference between the TF 1.x and 2.0 approach is that the 2.0 approach doesn't make use of `tf.Session`, `tf.run`, `placeholder`, `feed_dict`. To get more details of what's different between the two versions and how to convert between the two, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guideLater, in the rest of this notebook we'll focus on this new, simpler approach. TensorFlow warmup: Flatten FunctionWe can see this in action by defining a simple `flatten` function that will reshape image data for use in a fully-connected network.In TensorFlow, data for convolutional feature maps is typically stored in a Tensor of shape N x H x W x C where:- N is the number of datapoints (minibatch size)- H is the height of the feature map- W is the width of the feature map- C is the number of channels in the feature mapThis is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.
So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. Notice the `tf.reshape` call has the target shape as `(N, -1)`, meaning it will reshape/keep the first dimension to be N, and then infer as necessary what the second dimension is in the output, so we can collapse the remaining dimensions from the input properly.**NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
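As a quick aside (not part of the original assignment), the next cell illustrates the eager style described above: operations run immediately and return concrete values, with no `tf.Session` or `feed_dict`.
###Code
# Eager execution demo: operations are evaluated as soon as they are called.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = a + 1.0        # computed immediately; no graph construction or session
print(b.numpy())   # concrete NumPy values are available right away
###Output
_____no_output_____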
###Code
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Compute a concrete output value.
x_flat_np = flatten(x_np)
print('x_flat_np:\n', x_flat_np, '\n')
test_flatten()
###Output
x_np:
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
x_flat_np:
tf.Tensor(
[[ 0 1 2 3 4 5 6 7 8 9 10 11]
[12 13 14 15 16 17 18 19 20 21 22 23]], shape=(2, 12), dtype=int32)
###Markdown
Barebones TensorFlow: Define a Two-Layer NetworkWe will now implement our first neural network with TensorFlow: a fully-connected ReLU network with two hidden layers and no biases on the CIFAR10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. After defining the network architecture in the `two_layer_fc` function, we will test the implementation by checking the shape of the output.**It's important that you read and understand this implementation.**
###Code
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
hidden_layer_size = 42
# Scoping our TF operations under a tf.device context manager
# lets us tell TensorFlow where we want these Tensors to be
# multiplied and/or operated on, e.g. on a CPU or a GPU.
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
print(scores.shape)
two_layer_fc_test()
###Output
(64, 10)
###Markdown
Barebones TensorFlow: Three-Layer ConvNetHere you will complete the implementation of the function `three_layer_convnet` which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two2. ReLU nonlinearity3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one4. ReLU nonlinearity5. Fully-connected layer with bias, producing scores for `C` classes.**HINT**: For convolutions: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/conv2d; be careful with padding!**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
###Code
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described above.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be?
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be?
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
conv1 = tf.nn.conv2d(x, conv_w1, 1, [[0, 0], [2, 2], [2, 2], [0, 0]]) + conv_b1
relu1 = tf.nn.relu(conv1)
conv2 = tf.nn.conv2d(relu1, conv_w2, 1, [[0, 0], [1, 1], [1, 1], [0, 0]]) + conv_b2
relu2 = tf.nn.relu(conv2)
relu2_flat = flatten(relu2)
scores = tf.matmul(relu2_flat, fc_w) + fc_b
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
###Output
_____no_output_____
###Markdown
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we run the graph on a batch of zeros just to make sure the function doesn't crash and produces outputs of the correct shape.When you run this function, `scores_np` should have shape `(64, 10)`.
###Code
def three_layer_convnet_test():
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
print('scores_np has shape: ', scores.shape)
three_layer_convnet_test()
###Output
scores_np has shape: (64, 10)
###Markdown
Barebones TensorFlow: Training StepWe now define the `training_step` function, which performs a single training step. This will take three basic steps:1. Compute the loss2. Compute the gradient of the loss with respect to all network weights3. Make a weight update step using (stochastic) gradient descent.We need to use a few new TensorFlow functions to do all of this:- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`:https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/reduce_mean- For computing gradients of the loss with respect to the weights we'll use `tf.GradientTape` (useful for Eager execution): https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape- We'll mutate the weight values stored in a TensorFlow Tensor using `tf.assign_sub` ("sub" is for subtraction): https://www.tensorflow.org/api_docs/python/tf/assign_sub
###Code
def training_step(model_fn, x, y, params, learning_rate):
with tf.GradientTape() as tape:
scores = model_fn(x, params) # Forward pass of the model
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
total_loss = tf.reduce_mean(loss)
grad_params = tape.gradient(total_loss, params)
# Make a vanilla gradient descent step on all of the model parameters
# Manually update the weights using assign_sub()
for w, grad_w in zip(params, grad_params):
w.assign_sub(learning_rate * grad_w)
return total_loss
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
params = init_fn() # Initialize the model parameters
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data.
loss = training_step(model_fn, x_np, y_np, params, learning_rate)
# Periodically print the loss and check accuracy on the val set.
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss))
check_accuracy(val_dset, x_np, model_fn, params)
def check_accuracy(dset, x, model_fn, params):
"""
Check accuracy on a classification model, e.g. for validation.
Inputs:
- dset: A Dataset object against which to check accuracy
- x: A minibatch of input images (not used by this implementation)
- model_fn: the Model we will be calling to make predictions on x
- params: parameters for the model_fn to work with
Returns: Nothing, but prints the accuracy of the model
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
scores_np = model_fn(x_batch, params).numpy()
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: InitializationWe'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.[1] He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
###Code
def create_matrix_with_kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.keras.backend.random_normal(shape) * np.sqrt(2.0 / fan_in)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Train a Two-Layer NetworkWe are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.We just need to define a function to initialize the weights of the model, and call `train_part2`.Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.You don't need to tune any hyperparameters, but you should achieve validation accuracies above 40% after one epoch of training.
###Code
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns: A list of:
- w1: TensorFlow tf.Variable giving the weights for the first layer
- w2: TensorFlow tf.Variable giving the weights for the second layer
"""
hidden_layer_size = 4000
w1 = tf.Variable(create_matrix_with_kaiming_normal((3 * 32 * 32, 4000)))
w2 = tf.Variable(create_matrix_with_kaiming_normal((4000, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
###Output
Iteration 0, loss = 3.1796
Got 138 / 1000 correct (13.80%)
Iteration 100, loss = 1.8606
Got 371 / 1000 correct (37.10%)
Iteration 200, loss = 1.4913
Got 389 / 1000 correct (38.90%)
Iteration 300, loss = 1.8782
Got 363 / 1000 correct (36.30%)
Iteration 400, loss = 1.8509
Got 426 / 1000 correct (42.60%)
Iteration 500, loss = 1.8515
Got 419 / 1000 correct (41.90%)
Iteration 600, loss = 1.9118
Got 428 / 1000 correct (42.80%)
Iteration 700, loss = 1.8966
Got 450 / 1000 correct (45.00%)
###Markdown
Barebones TensorFlow: Train a three-layer ConvNetWe will now use TensorFlow to train a three-layer ConvNet on CIFAR-10.You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 22. ReLU3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 14. ReLU5. Fully-connected layer (with bias) to compute scores for 10 classesYou don't need to do any hyperparameter tuning, but you should see validation accuracies above 43% after one epoch of training.
###Code
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow tf.Variable giving weights for the first conv layer
- conv_b1: TensorFlow tf.Variable giving biases for the first conv layer
- conv_w2: TensorFlow tf.Variable giving weights for the second conv layer
- conv_b2: TensorFlow tf.Variable giving biases for the second conv layer
- fc_w: TensorFlow tf.Variable giving weights for the fully-connected layer
- fc_b: TensorFlow tf.Variable giving biases for the fully-connected layer
"""
params = None
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# a sample input is 32 x 32 x 3
conv_w1 = tf.Variable(create_matrix_with_kaiming_normal((5, 5, 3, 32)))
conv_b1 = tf.Variable(create_matrix_with_kaiming_normal((1, 32)))
conv_w2 = tf.Variable(create_matrix_with_kaiming_normal((3, 3, 32, 16)))
conv_b2 = tf.Variable(create_matrix_with_kaiming_normal((1, 16)))
fc_w = tf.Variable(create_matrix_with_kaiming_normal((32 * 32 * 16, 10))) # the input size after two convs is 32 x 32 x 16.
fc_b = tf.Variable(create_matrix_with_kaiming_normal((1, 10)))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
###Output
Iteration 0, loss = 4.8878
Got 79 / 1000 correct (7.90%)
Iteration 100, loss = 1.9799
Got 348 / 1000 correct (34.80%)
Iteration 200, loss = 1.7410
Got 391 / 1000 correct (39.10%)
Iteration 300, loss = 1.7193
Got 390 / 1000 correct (39.00%)
Iteration 400, loss = 1.7154
Got 427 / 1000 correct (42.70%)
Iteration 500, loss = 1.6703
Got 438 / 1000 correct (43.80%)
Iteration 600, loss = 1.6941
Got 446 / 1000 correct (44.60%)
Iteration 700, loss = 1.6794
Got 460 / 1000 correct (46.00%)
###Markdown
Part III: Keras Model Subclassing APIImplementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters. This was fine for a small network, but could quickly become unwieldy for a large complex model.Fortunately TensorFlow 2.0 provides higher-level APIs such as `tf.keras` which make it easy to build models out of modular, object-oriented layers. Further, TensorFlow 2.0 uses eager execution that evaluates operations immediately, without explicitly constructing any computational graphs. This makes it easy to write and debug models, and reduces the boilerplate code.In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.keras.layers` package provides many common neural-network layers, like `tf.keras.layers.Dense` for fully-connected layers and `tf.keras.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super(YourModelName, self).__init__()` as the first line in your initializer!3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II. Keras Model Subclassing API: Two-Layer NetworkHere is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.initializers.VarianceScaling` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/initializers/VarianceScalingWe construct `tf.keras.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation='relu'` to the constructor; the second layer uses a softmax activation function. Finally, we use `tf.keras.layers.Flatten` to flatten the image input into a vector before it is passed to the first fully-connected layer.
###Code
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super().__init__() #super(TwoLayerFC, self).__init__()
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.flatten = tf.keras.layers.Flatten()
self.fc1 = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)
self.fc2 = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)
def call(self, x, training=False):
x = self.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
x = tf.zeros((64, input_size))
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_TwoLayerFC()
###Output
(64, 10)
###Markdown
Keras Model Subclassing API: Three-Layer ConvNetNow it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:1. Convolutional layer with 5 x 5 kernels, with zero-padding of 22. ReLU nonlinearity3. Convolutional layer with 3 x 3 kernels, with zero-padding of 14. ReLU nonlinearity5. Fully-connected layer to give class scores6. Softmax nonlinearityYou should initialize the weights of your network using the same initialization method as was used in the two-layer network above.**Hint**: Refer to the documentation for `tf.keras.layers.Conv2D` and `tf.keras.layers.Dense`:https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Conv2Dhttps://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense
###Code
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.conv1 = tf.keras.layers.Conv2D(channel_1, (5, 5), padding='same', activation='relu', kernel_initializer=initializer)
self.conv2 = tf.keras.layers.Conv2D(channel_2, (3, 3), padding='same', activation='relu', kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
self.fc = tf.keras.layers.Dense(num_classes, activation='softmax')
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=False):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
x = self.conv1(x)
x = self.conv2(x)
x = self.flatten(x)
scores = self.fc(x)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
###Output
_____no_output_____
###Markdown
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
###Code
def test_ThreeLayerConvNet():
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))  # NHWC ordering: batch, height, width, channels
scores = model(x)
print(scores.shape)
test_ThreeLayerConvNet()
###Output
(64, 10)
###Markdown
Keras Model Subclassing API: Eager TrainingWhile keras models have a built-in training loop (using `model.fit`), sometimes you need more customization. Here's an example of a training loop implemented with eager execution.In particular, notice `tf.GradientTape`. Automatic differentiation is used in the backend for implementing backpropagation in frameworks like TensorFlow. During eager execution, `tf.GradientTape` is used to trace operations for computing gradients later. A particular `tf.GradientTape` can only compute one gradient; subsequent calls to tape will throw a runtime error. TensorFlow 2.0 ships with easy-to-use built-in metrics under the `tf.keras.metrics` module. Each metric is an object, and we can use `update_state()` to add observations and `reset_states()` to clear all observations. We can get the current result of a metric by calling `result()` on the metric object.
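As a small standalone illustration (not part of the assignment) of the two ingredients described above, the next cell computes a single gradient with `tf.GradientTape` and accumulates a value in a `tf.keras.metrics.Mean` object.
###Code
# Minimal GradientTape + metrics demo (illustration only).
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = w * w                    # a toy "loss": w^2
grad = tape.gradient(loss, w)       # d(w^2)/dw = 2w -> 6.0
mean_loss = tf.keras.metrics.Mean()
mean_loss.update_state(loss)        # add one observation
print(grad.numpy(), mean_loss.result().numpy())
mean_loss.reset_states()            # clear all observations
###Output
_____no_output_____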
###Code
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1, is_training=False):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
Returns: Nothing, but prints progress during training
"""
with tf.device(device):
# Compute the loss like we did in Part II
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
model = model_init_fn()
optimizer = optimizer_init_fn()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
val_loss = tf.keras.metrics.Mean(name='val_loss')
val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='val_accuracy')
t = 0
for epoch in range(num_epochs):
# Reset the metrics - https://www.tensorflow.org/alpha/guide/migration_guide#new-style_metrics
train_loss.reset_states()
train_accuracy.reset_states()
for x_np, y_np in train_dset:
with tf.GradientTape() as tape:
# Use the model function to build the forward pass.
scores = model(x_np, training=is_training)
loss = loss_fn(y_np, scores)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Update the metrics
train_loss.update_state(loss)
train_accuracy.update_state(y_np, scores)
if t % print_every == 0:
val_loss.reset_states()
val_accuracy.reset_states()
for test_x, test_y in val_dset:
# During validation at end of epoch, training set to False
prediction = model(test_x, training=False)
t_loss = loss_fn(test_y, prediction)
val_loss.update_state(t_loss)
val_accuracy.update_state(test_y, prediction)
template = 'Iteration {}, Epoch {}, Loss: {}, Accuracy: {}, Val Loss: {}, Val Accuracy: {}'
print (template.format(t, epoch+1,
train_loss.result(),
train_accuracy.result()*100,
val_loss.result(),
val_accuracy.result()*100))
t += 1
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Train a Two-Layer NetworkWe can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.keras.optimizers.SGD` function; you can [read about it here](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD).You don't need to tune any hyperparameters here, but you should achieve validation accuracies above 40% after one epoch of training.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return TwoLayerFC(hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn, num_epochs=1)
###Output
Iteration 0, Epoch 1, Loss: 2.8405842781066895, Accuracy: 10.9375, Val Loss: 2.9787204265594482, Val Accuracy: 11.800000190734863
Iteration 100, Epoch 1, Loss: 2.2377357482910156, Accuracy: 27.691831588745117, Val Loss: 1.9073878526687622, Val Accuracy: 38.29999923706055
Iteration 200, Epoch 1, Loss: 2.0827560424804688, Accuracy: 31.778606414794922, Val Loss: 1.834181785583496, Val Accuracy: 39.89999771118164
Iteration 300, Epoch 1, Loss: 2.000749349594116, Accuracy: 33.80917739868164, Val Loss: 1.8754873275756836, Val Accuracy: 36.099998474121094
Iteration 400, Epoch 1, Loss: 1.9329524040222168, Accuracy: 35.582916259765625, Val Loss: 1.7133322954177856, Val Accuracy: 41.20000076293945
Iteration 500, Epoch 1, Loss: 1.8873575925827026, Accuracy: 36.76708984375, Val Loss: 1.6488999128341675, Val Accuracy: 41.60000228881836
Iteration 600, Epoch 1, Loss: 1.8568066358566284, Accuracy: 37.72878646850586, Val Loss: 1.6973183155059814, Val Accuracy: 41.60000228881836
Iteration 700, Epoch 1, Loss: 1.830464243888855, Accuracy: 38.46290969848633, Val Loss: 1.6309181451797485, Val Accuracy: 42.5
###Markdown
Keras Model Subclassing API: Train a Three-Layer ConvNetHere you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.To train the model you should use gradient descent with Nesterov momentum 0.9. **HINT**: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGDYou don't need to perform any hyperparameter tuning, but you should achieve validation accuracies above 50% after training for one epoch.
###Code
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn():
model = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, nesterov=True, momentum=0.9)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn, num_epochs=1)
###Output
Iteration 0, Epoch 1, Loss: 3.154758930206299, Accuracy: 10.9375, Val Loss: 5.975311279296875, Val Accuracy: 8.399999618530273
Iteration 100, Epoch 1, Loss: 2.041959047317505, Accuracy: 31.07982635498047, Val Loss: 1.68193519115448, Val Accuracy: 40.70000076293945
Iteration 200, Epoch 1, Loss: 1.8226760625839233, Accuracy: 37.31343078613281, Val Loss: 1.5124176740646362, Val Accuracy: 47.400001525878906
Iteration 300, Epoch 1, Loss: 1.714808464050293, Accuracy: 40.785919189453125, Val Loss: 1.4400432109832764, Val Accuracy: 49.0
Iteration 400, Epoch 1, Loss: 1.6360254287719727, Accuracy: 42.94731903076172, Val Loss: 1.3696250915527344, Val Accuracy: 52.29999923706055
Iteration 500, Epoch 1, Loss: 1.5823934078216553, Accuracy: 44.61701583862305, Val Loss: 1.3425835371017456, Val Accuracy: 51.5
Iteration 600, Epoch 1, Loss: 1.5470515489578247, Accuracy: 45.705074310302734, Val Loss: 1.3165476322174072, Val Accuracy: 52.79999923706055
Iteration 700, Epoch 1, Loss: 1.5153937339782715, Accuracy: 46.83710479736328, Val Loss: 1.3428152799606323, Val Accuracy: 52.79999923706055
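###Markdown
As an optional check (a minimal sketch, not part of the original cells), the optimizer returned by `optimizer_init_fn` can be inspected with `get_config()` to confirm the learning rate, momentum, and Nesterov settings before training.
###Code
# Minimal sketch: inspect the SGD optimizer configuration used above.
optimizer = optimizer_init_fn()
print(optimizer.get_config())  # should include learning_rate=3e-3, momentum=0.9, nesterov=True
###Output
_____no_output_____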
###Markdown
Part IV: Keras Sequential APIIn Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers. However, for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects. One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` argument of the first layer in your model. Keras Sequential API: Two-Layer NetworkIn this subsection, we will rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above. You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
learning_rate = 1e-2
def model_init_fn():
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(hidden_layer_size, activation='relu',
kernel_initializer=initializer),
tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 3.049715995788574, Accuracy: 9.375, Val Loss: 2.892936944961548, Val Accuracy: 11.699999809265137
Iteration 100, Epoch 1, Loss: 2.2131028175354004, Accuracy: 29.068687438964844, Val Loss: 1.8789817094802856, Val Accuracy: 39.89999771118164
Iteration 200, Epoch 1, Loss: 2.063352584838867, Accuracy: 32.75808334350586, Val Loss: 1.8269221782684326, Val Accuracy: 39.79999923706055
Iteration 300, Epoch 1, Loss: 1.9906622171401978, Accuracy: 34.55668640136719, Val Loss: 1.851961374282837, Val Accuracy: 39.20000076293945
Iteration 400, Epoch 1, Loss: 1.9242477416992188, Accuracy: 36.264808654785156, Val Loss: 1.7083684206008911, Val Accuracy: 43.70000076293945
Iteration 500, Epoch 1, Loss: 1.8799132108688354, Accuracy: 37.34406280517578, Val Loss: 1.6594358682632446, Val Accuracy: 44.29999923706055
Iteration 600, Epoch 1, Loss: 1.848968744277954, Accuracy: 38.2149543762207, Val Loss: 1.6731466054916382, Val Accuracy: 43.29999923706055
Iteration 700, Epoch 1, Loss: 1.8228310346603394, Accuracy: 38.906471252441406, Val Loss: 1.6165885925292969, Val Accuracy: 45.69999694824219
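###Markdown
Because the first layer of the `tf.keras.Sequential` model above receives `input_shape`, the model is built as soon as it is constructed, so `summary()` can report each layer's output shape and parameter count. The cell below is a minimal sketch of that check, reusing the `model_init_fn` defined just above.
###Code
# Minimal sketch: inspect the Sequential model's layers and parameter counts.
model = model_init_fn()
model.summary()
###Output
_____no_output_____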
###Markdown
Abstracting Away the Training LoopIn the previous examples, we used a custom training loop to train models (e.g. `train_part34`). Writing your own training loop is only required if you need more flexibility and control while training your model. Alternatively, you can use built-in APIs like `tf.keras.Model.fit()` and `tf.keras.Model.evaluate()` to train and evaluate a model. Also remember to configure your model for training by calling `tf.keras.Model.compile()`. You don't need to perform any hyperparameter tuning here, but you should see validation and test accuracies above 42% after training for one epoch.
###Code
model = model_init_fn()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
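# Note: fit() returns a History object whose .history dict records per-epoch
# loss and metrics, and evaluate() returns the loss followed by each metric
# passed to compile(), e.g. test_loss, test_acc = model.evaluate(X_test, y_test)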
###Output
Train on 49000 samples, validate on 1000 samples
49000/49000 [==============================] - 3s 57us/sample - loss: 1.8204 - sparse_categorical_accuracy: 0.3874 - val_loss: 1.6748 - val_sparse_categorical_accuracy: 0.4180
10000/1 [==============================]
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================] - 1s 77us/sample - loss: 1.6879 - sparse_categorical_accuracy: 0.4223
###Markdown
Keras Sequential API: Three-Layer ConvNet

Here you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:

1. Convolutional layer with 32 5x5 kernels, using zero padding of 2
2. ReLU nonlinearity
3. Convolutional layer with 16 3x3 kernels, using zero padding of 1
4. ReLU nonlinearity
5. Fully-connected layer giving class scores
6. Softmax nonlinearity

You should initialize the weights of the model using `tf.initializers.VarianceScaling` as above.

You should train the model using Nesterov momentum 0.9.

You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
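As a quick sanity check on the shapes (assuming the 32x32x3 inputs used here): both zero-padded, stride-1 convolutions preserve the 32x32 spatial size, so the flatten step feeds 32*32*16 = 16384 features into the fully-connected layer.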
###Code
def model_init_fn():
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
input_shape = (32, 32, 3)
num_classes = 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Conv2D(32, (5, 5), padding='same', activation='relu', kernel_initializer=initializer,
input_shape=input_shape),
tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu', kernel_initializer=initializer),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(num_classes, activation='softmax', kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Construct an optimizer that uses Nesterov momentum, as above.     #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, nesterov=True, momentum=0.9)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 2.9712705612182617, Accuracy: 9.375, Val Loss: 3.2196428775787354, Val Accuracy: 12.700000762939453
Iteration 100, Epoch 1, Loss: 2.0532257556915283, Accuracy: 29.981433868408203, Val Loss: 1.8071397542953491, Val Accuracy: 36.29999923706055
Iteration 200, Epoch 1, Loss: 1.899694800376892, Accuracy: 34.16511154174805, Val Loss: 1.6743762493133545, Val Accuracy: 41.0
Iteration 300, Epoch 1, Loss: 1.8233054876327515, Accuracy: 36.59675979614258, Val Loss: 1.6346663236618042, Val Accuracy: 45.0
Iteration 400, Epoch 1, Loss: 1.7578901052474976, Accuracy: 38.88325881958008, Val Loss: 1.5955396890640259, Val Accuracy: 45.20000076293945
Iteration 500, Epoch 1, Loss: 1.7127082347869873, Accuracy: 40.272579193115234, Val Loss: 1.5494569540023804, Val Accuracy: 46.29999923706055
Iteration 600, Epoch 1, Loss: 1.6833456754684448, Accuracy: 41.28015899658203, Val Loss: 1.5043326616287231, Val Accuracy: 48.79999923706055
Iteration 700, Epoch 1, Loss: 1.6563769578933716, Accuracy: 42.189727783203125, Val Loss: 1.480652928352356, Val Accuracy: 48.0
###Markdown
We will also train this model with the built-in training loop APIs provided by TensorFlow.
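Note that `optimizer='sgd'` uses plain SGD with Keras defaults (no momentum); if you wanted this built-in loop to match the Nesterov-momentum setup above, one option would be to pass the optimizer object directly, e.g. `model.compile(optimizer=optimizer_init_fn(), ...)`.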
###Code
model = model_init_fn()
# Compile with plain SGD, sparse categorical cross-entropy, and accuracy as the metric.
model.compile(optimizer='sgd',
              loss='sparse_categorical_crossentropy',
              metrics=[tf.keras.metrics.sparse_categorical_accuracy])
# Train for one epoch on the training set, validating on the held-out split.
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
# Report loss and accuracy on the test set.
model.evaluate(X_test, y_test)
###Output
Train on 49000 samples, validate on 1000 samples
49000/49000 [==============================] - 4s 89us/sample - loss: 1.5548 - sparse_categorical_accuracy: 0.4531 - val_loss: 1.4325 - val_sparse_categorical_accuracy: 0.4920
10000/1 [==========
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================] - 1s 93us/sample - loss: 1.3228 - sparse_categorical_accuracy: 0.4854
###Markdown
Part IV: Functional API Demonstration with a Two-Layer Network In the previous section, we saw how we can use `tf.keras.Sequential` to stack layers to quickly build simple models. But this comes at the cost of losing flexibility. Often we will have to write complex models that have non-sequential data flows: a layer can have **multiple inputs and/or outputs**, such as stacking the output of 2 previous layers together to feed as input to a third! (Some examples are residual connections and dense blocks.) In such cases, we can use the Keras functional API to write models with complex topologies such as: 1. Multi-input models 2. Multi-output models 3. Models with shared layers (the same layer called several times) 4. Models with non-sequential data flows (e.g. residual connections). Writing a model with the functional API requires us to create a `tf.keras.Model` instance and explicitly write input tensors and output tensors for this model.
###Code
def two_layer_fc_functional(input_shape, hidden_size, num_classes):
initializer = tf.initializers.VarianceScaling(scale=2.0)
inputs = tf.keras.Input(shape=input_shape)
flattened_inputs = tf.keras.layers.Flatten()(inputs)
fc1_output = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)(flattened_inputs)
scores = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)(fc1_output)
# Instantiate the model given inputs and outputs.
model = tf.keras.Model(inputs=inputs, outputs=scores)
return model
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
input_shape = (50,)
x = tf.zeros((64, input_size))
model = two_layer_fc_functional(input_shape, hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_two_layer_fc_functional()
###Output
(64, 10)
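###Markdown
 The same functional style extends to the non-sequential topologies listed above. As a minimal sketch (not part of the original assignment; the function name and layer sizes are arbitrary assumptions), the model below adds a residual-style skip connection by summing two hidden activations before the classifier, something `tf.keras.Sequential` cannot express.
###Code
def residual_fc_functional(input_shape, hidden_size, num_classes):
    initializer = tf.initializers.VarianceScaling(scale=2.0)
    inputs = tf.keras.Input(shape=input_shape)
    flattened = tf.keras.layers.Flatten()(inputs)
    # Two hidden layers of the same width so the skip connection shapes match.
    h1 = tf.keras.layers.Dense(hidden_size, activation='relu',
                               kernel_initializer=initializer)(flattened)
    h2 = tf.keras.layers.Dense(hidden_size, activation='relu',
                               kernel_initializer=initializer)(h1)
    # Non-sequential data flow: merge an earlier activation with a later one.
    merged = tf.keras.layers.Add()([h1, h2])
    scores = tf.keras.layers.Dense(num_classes, activation='softmax',
                                   kernel_initializer=initializer)(merged)
    return tf.keras.Model(inputs=inputs, outputs=scores)
###Output
_____no_output_____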
###Markdown
Keras Functional API: Train a Two-Layer Network You can now train this two-layer network constructed using the functional API. You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
input_shape = (32, 32, 3)
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return two_layer_fc_functional(input_shape, hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 2.8402795791625977, Accuracy: 17.1875, Val Loss: 2.810950517654419, Val Accuracy: 15.09999942779541
Iteration 100, Epoch 1, Loss: 2.226226568222046, Accuracy: 28.496286392211914, Val Loss: 1.9141902923583984, Val Accuracy: 38.60000228881836
Iteration 200, Epoch 1, Loss: 2.080077886581421, Accuracy: 32.06623077392578, Val Loss: 1.8725242614746094, Val Accuracy: 38.29999923706055
Iteration 300, Epoch 1, Loss: 2.0010735988616943, Accuracy: 34.29194259643555, Val Loss: 1.8864822387695312, Val Accuracy: 37.400001525878906
Iteration 400, Epoch 1, Loss: 1.9321510791778564, Accuracy: 36.06218719482422, Val Loss: 1.7192223072052002, Val Accuracy: 41.70000076293945
Iteration 500, Epoch 1, Loss: 1.886867642402649, Accuracy: 37.12263107299805, Val Loss: 1.6607885360717773, Val Accuracy: 44.0
Iteration 600, Epoch 1, Loss: 1.8582278490066528, Accuracy: 37.86917495727539, Val Loss: 1.698154330253601, Val Accuracy: 42.39999771118164
Iteration 700, Epoch 1, Loss: 1.8322076797485352, Accuracy: 38.547611236572266, Val Loss: 1.6500277519226074, Val Accuracy: 42.39999771118164
###Markdown
Part V: CIFAR-10 open-ended challenge In this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10. You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the built-in train function, the `train_part34` function from above, or implement your own training loop. Describe what you did at the end of the notebook. Some things you can try: - **Filter size**: Above we used 5x5 and 3x3; is this optimal? - **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better? - **Pooling**: We didn't use any pooling above. Would this improve the model? - **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy? - **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better? - **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks. - **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout? NOTE: Batch Normalization / Dropout If you are using Batch Normalization and Dropout, remember to pass `is_training=True` if you use the `train_part34()` function. BatchNorm and Dropout layers have different behaviors at training and inference time. `training` is a specific keyword argument reserved for this purpose in any `tf.keras.Model`'s `call()` function. Read more about this here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/BatchNormalization#methods and https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dropout#methods Tips for training For each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind: - If the parameters are working well, you should see improvement within a few hundred iterations - Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all. - Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs. - You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyond If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time! - Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc. - Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut. - Model ensembles - Data augmentation - New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together. 
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
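As a concrete starting point for the ResNet idea above, here is a minimal residual block written with the subclassing API. This is only an illustrative sketch, not a reference implementation: the class name, filter count, and layer choices are assumptions, and it requires the input to already have `filters` channels. A block like this could replace one of the plain conv/bn stages in the model you write below.
###Code
class ResidualBlock(tf.keras.Model):
    """Minimal residual block: out = relu(shortcut + BN(Conv(relu(BN(Conv(x)))))). Illustrative sketch only."""
    def __init__(self, filters):
        super().__init__()
        initializer = tf.initializers.VarianceScaling(scale=2.0)
        self.conv1 = tf.keras.layers.Conv2D(filters, (3, 3), padding='same',
                                            kernel_initializer=initializer)
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.conv2 = tf.keras.layers.Conv2D(filters, (3, 3), padding='same',
                                            kernel_initializer=initializer)
        self.bn2 = tf.keras.layers.BatchNormalization()

    def call(self, x, training=False):
        shortcut = x  # identity skip; assumes x already has `filters` channels
        out = tf.nn.relu(self.bn1(self.conv1(x), training=training))
        out = self.bn2(self.conv2(out), training=training)
        return tf.nn.relu(out + shortcut)
###Output
_____no_output_____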
###Code
class CustomConvNet(tf.keras.Model):
def __init__(self):
super().__init__()
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# input is [N, 32, 32, 3]
initializer = tf.initializers.VarianceScaling(scale=2.0)
        self.conv11 = tf.keras.layers.Conv2D(512, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 512
self.prelu11 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn11 = tf.keras.layers.BatchNormalization()
        self.conv12 = tf.keras.layers.Conv2D(256, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 256
self.prelu12 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn12 = tf.keras.layers.BatchNormalization()
self.conv13 = tf.keras.layers.Conv2D(128, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 128
self.prelu13 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn13 = tf.keras.layers.BatchNormalization()
self.conv2 = tf.keras.layers.Conv2D(64, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 64
self.prelu2 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn2 = tf.keras.layers.BatchNormalization()
self.maxpool2 = tf.keras.layers.MaxPool2D((2, 2), padding='same') # 16, 16, 64
self.conv3 = tf.keras.layers.Conv2D(32, (3, 3), padding='same', kernel_initializer=initializer) # 16, 16, 32
self.prelu3 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn3 = tf.keras.layers.BatchNormalization()
self.maxpool3 = tf.keras.layers.MaxPool2D((2, 2), padding='same') # 8, 8, 32
self.flatten = tf.keras.layers.Flatten()
self.fc = tf.keras.layers.Dense(10, activation='softmax', kernel_initializer=initializer)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
def call(self, input_tensor, training=False):
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
x = input_tensor
x = self.conv11(x)
x = self.prelu11(x)
x = self.bn11(x, training)
x = self.conv12(x)
x = self.prelu12(x)
x = self.bn12(x, training)
x = self.conv13(x)
x = self.prelu13(x)
x = self.bn13(x, training)
x = self.conv2(x)
x = self.prelu2(x)
x = self.bn2(x, training)
x = self.maxpool2(x)
x = self.conv3(x)
x = self.prelu3(x)
x = self.bn3(x, training)
x = self.maxpool3(x)
x = self.flatten(x)
x = self.fc(x)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return x
device = '/device:GPU:0' # Change this to a CPU/GPU as you wish!
# device = '/cpu:0' # Change this to a CPU/GPU as you wish!
print_every = 300
num_epochs = 10
# model = CustomConvNet()
# model = CustomResNet()
def model_init_fn():
return CustomConvNet()
def optimizer_init_fn():
learning_rate = 1e-3
return tf.keras.optimizers.Adam(learning_rate)
train_part34(model_init_fn, optimizer_init_fn, num_epochs=num_epochs, is_training=True)
###Output
Iteration 0, Epoch 1, Loss: 4.086295127868652, Accuracy: 12.5, Val Loss: 4.507420539855957, Val Accuracy: 13.199999809265137
Iteration 300, Epoch 1, Loss: 1.6533818244934082, Accuracy: 43.55793380737305, Val Loss: 1.471709132194519, Val Accuracy: 50.19999694824219
Iteration 600, Epoch 1, Loss: 1.4560245275497437, Accuracy: 49.78681564331055, Val Loss: 1.2510604858398438, Val Accuracy: 57.20000457763672
Iteration 900, Epoch 2, Loss: 1.007521152496338, Accuracy: 64.97685241699219, Val Loss: 1.0304434299468994, Val Accuracy: 63.900001525878906
Iteration 1200, Epoch 2, Loss: 0.9641342163085938, Accuracy: 66.3936767578125, Val Loss: 0.9729200601577759, Val Accuracy: 65.80000305175781
Iteration 1500, Epoch 2, Loss: 0.926266610622406, Accuracy: 67.551025390625, Val Loss: 0.9301822781562805, Val Accuracy: 67.19999694824219
Iteration 1800, Epoch 3, Loss: 0.7934252619743347, Accuracy: 72.49070739746094, Val Loss: 0.8698236346244812, Val Accuracy: 70.30000305175781
Iteration 2100, Epoch 3, Loss: 0.763357937335968, Accuracy: 73.67091369628906, Val Loss: 0.83391273021698, Val Accuracy: 71.5
Iteration 2400, Epoch 4, Loss: 0.6582642793655396, Accuracy: 77.12378692626953, Val Loss: 0.7901626825332642, Val Accuracy: 73.69999694824219
Iteration 2700, Epoch 4, Loss: 0.6490508913993835, Accuracy: 77.60545349121094, Val Loss: 0.8368052840232849, Val Accuracy: 71.80000305175781
Iteration 3000, Epoch 4, Loss: 0.6276915669441223, Accuracy: 78.28724670410156, Val Loss: 0.8429279923439026, Val Accuracy: 72.79999542236328
Iteration 3300, Epoch 5, Loss: 0.5404477119445801, Accuracy: 81.48734283447266, Val Loss: 0.8269108533859253, Val Accuracy: 74.5
Iteration 3600, Epoch 5, Loss: 0.5194686055183411, Accuracy: 82.16072845458984, Val Loss: 0.9136738181114197, Val Accuracy: 71.0
Iteration 3900, Epoch 6, Loss: 0.42960742115974426, Accuracy: 86.35562896728516, Val Loss: 0.8334789276123047, Val Accuracy: 73.69999694824219
Iteration 4200, Epoch 6, Loss: 0.409514844417572, Accuracy: 86.14386749267578, Val Loss: 0.859889030456543, Val Accuracy: 72.0
Iteration 4500, Epoch 6, Loss: 0.3866899609565735, Accuracy: 86.96441650390625, Val Loss: 0.9733158349990845, Val Accuracy: 71.80000305175781
Iteration 4800, Epoch 7, Loss: 0.31196486949920654, Accuracy: 89.12347412109375, Val Loss: 0.9835796356201172, Val Accuracy: 71.9000015258789
Iteration 5100, Epoch 7, Loss: 0.29267334938049316, Accuracy: 90.00928497314453, Val Loss: 1.1022939682006836, Val Accuracy: 72.79999542236328
Iteration 5400, Epoch 8, Loss: 0.24182185530662537, Accuracy: 91.34615325927734, Val Loss: 1.1322561502456665, Val Accuracy: 71.69999694824219
###Markdown
What's this TensorFlow business? You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized. For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you choose to work with that notebook). What is it? TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray. Why? * Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately. * We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. * We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) * We want you to be exposed to the sort of deep learning code you might run into in academia or industry. How will I learn TensorFlow? TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started). Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here. **NOTE: This notebook is meant to teach you the latest version of Tensorflow 2.0. Most examples on the web today are still in 1.x, so be careful not to confuse the two when looking up documentation**. Install Tensorflow 2.0 Tensorflow 2.0 is still not in a fully 100% stable release, but it's still usable and more intuitive than TF 1.x. Please make sure you have it installed before moving on in this notebook! Here are some steps to get started: 1. Have the latest version of Anaconda installed on your machine. 2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `tf_20_env`. 3. Run the command: `source activate tf_20_env` 4. Then pip install TF 2.0 as described here: https://www.tensorflow.org/install/pip A guide on creating Anaconda environments: https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/ This will give you a new environment to play in TF 2.0. Generally, if you plan to also use TensorFlow in your other projects, you might also want to keep a separate Conda environment or virtualenv in Python 3.7 that has Tensorflow 1.9, so you can switch back and forth at will. Table of Contents This notebook has 5 parts. We will walk through TensorFlow at **three different levels of abstraction**, which should help you better understand it and prepare you for working on your project. 1. Part I, Preparation: load the CIFAR-10 dataset. 2. Part II, Barebone TensorFlow: **Abstraction Level 1**, we will work directly with low-level TensorFlow graphs. 3. Part III, Keras Model API: **Abstraction Level 2**, we will use `tf.keras.Model` to define arbitrary neural network architecture. 4. 
Part IV, Keras Sequential + Functional API: **Abstraction Level 3**, we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently, and then explore the functional libraries for building unique and uncommon models that require more flexibility. 5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features. We will discuss Keras in more detail later in the notebook. Here is a table of comparison:
| API | Flexibility | Convenience |
|---------------|-------------|-------------|
| Barebone | High | Low |
| `tf.keras.Model` | High | Medium |
| `tf.keras.Sequential` | Low | High |
Part I: Preparation First, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster. In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets. For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
###Code
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# If there are errors with SSL downloading involving self-signed certificates,
# it may be that your Python version was recently installed on the current machine.
# See: https://github.com/tensorflow/tensorflow/issues/10779
# To fix, run the command: /Applications/Python\ 3.7/Install\ Certificates.command
# ...replacing paths as necessary.
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
        # Use the (possibly shuffled) index order so that shuffle=True actually shuffles
        return iter((self.X[idxs[i:i+B]], self.y[idxs[i:i+B]]) for i in range(0, N, B))
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
###Output
0 (64, 32, 32, 3) (64,)
1 (64, 32, 32, 3) (64,)
2 (64, 32, 32, 3) (64,)
3 (64, 32, 32, 3) (64,)
4 (64, 32, 32, 3) (64,)
5 (64, 32, 32, 3) (64,)
6 (64, 32, 32, 3) (64,)
###Markdown
You can optionally **use a GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
###Code
# Set up some global variables
USE_GPU = True
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
###Output
Using device: /device:GPU:0
###Markdown
Part II: Barebones TensorFlowTensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.**"Barebones Tensorflow" is important for understanding the building blocks of TensorFlow, but much of it involves concepts from TensorFlow 1.x.** We will be working with legacy modules such as `tf.Variable`.Therefore, please read and understand the differences between legacy (1.x) TF and the new (2.0) TF. Historical background on TensorFlow 1.xTensorFlow 1.x is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.Before Tensorflow 2.0, we had to configure the graph in two phases. There are plenty of tutorials online that explain this two-step process. The process generally looks like the following for TF 1.x:1. **Build a computational graph that describes the computation that you want to perform**. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.2. **Run the computational graph many times.** Each time the graph is run (e.g. for one gradient descent step) you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph. The new paradigm in Tensorflow 2.0Now, with Tensorflow 2.0, we can simply adopt a functional form that is more Pythonic and similar in spirit to PyTorch and direct Numpy operation. This replaces the 2-step paradigm with computation graphs, making it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.The main difference between the TF 1.x and 2.0 approach is that the 2.0 approach doesn't make use of `tf.Session`, `tf.run`, `placeholder`, `feed_dict`. To get more details of what's different between the two versions and how to convert between the two, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guideLater, in the rest of this notebook we'll focus on this new, simpler approach. TensorFlow warmup: Flatten FunctionWe can see this in action by defining a simple `flatten` function that will reshape image data for use in a fully-connected network.In TensorFlow, data for convolutional feature maps is typically stored in a Tensor of shape N x H x W x C where:- N is the number of datapoints (minibatch size)- H is the height of the feature map- W is the width of the feature map- C is the number of channels in the feature mapThis is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.
So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. Notice the `tf.reshape` call has the target shape as `(N, -1)`, meaning it will reshape/keep the first dimension to be N, and then infer as necessary what the second dimension is in the output, so we can collapse the remaining dimensions from the input properly.**NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
###Code
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Compute a concrete output value.
x_flat_np = flatten(x_np)
print('x_flat_np:\n', x_flat_np, '\n')
test_flatten()
###Output
x_np:
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
x_flat_np:
tf.Tensor(
[[ 0 1 2 3 4 5 6 7 8 9 10 11]
[12 13 14 15 16 17 18 19 20 21 22 23]], shape=(2, 12), dtype=int64)
###Markdown
Barebones TensorFlow: Define a Two-Layer NetworkWe will now implement our first neural network with TensorFlow: a two-layer fully-connected ReLU network (a single hidden layer) with no biases, on the CIFAR-10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. After defining the network architecture in the `two_layer_fc` function, we will test the implementation by checking the shape of the output.**It's important that you read and understand this implementation.**
###Code
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
hidden_layer_size = 42
# Scoping our TF operations under a tf.device context manager
# lets us tell TensorFlow where we want these Tensors to be
# multiplied and/or operated on, e.g. on a CPU or a GPU.
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
print(scores.shape)
two_layer_fc_test()
###Output
(64, 10)
###Markdown
Barebones TensorFlow: Three-Layer ConvNetHere you will complete the implementation of the function `three_layer_convnet` which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two2. ReLU nonlinearity3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one4. ReLU nonlinearity5. Fully-connected layer with bias, producing scores for `C` classes.**HINT**: For convolutions: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/conv2d; be careful with padding!**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
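As a reference point for the padding hint: zero-padding of two around a 5x5 kernel at stride 1 is exactly what `padding='SAME'` produces, and `tf.nn.conv2d` also accepts an explicit padding list. A small hedged sketch (illustrative only; `x_demo` and `w_demo` are made-up tensors, not part of the assignment):
###Code
# Hedged sketch: two equivalent ways to get zero-padding of 2 for a 5x5 kernel at stride 1.
x_demo = tf.zeros((1, 32, 32, 3))
w_demo = tf.zeros((5, 5, 3, 8))
same_pad = tf.nn.conv2d(x_demo, w_demo, strides=1, padding='SAME')
explicit_pad = tf.nn.conv2d(x_demo, w_demo, strides=1,
                            padding=[[0, 0], [2, 2], [2, 2], [0, 0]])  # NHWC padding
print(same_pad.shape, explicit_pad.shape)  # both (1, 32, 32, 8)
###Output
_____no_output_____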
###Code
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described above.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be?
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be?
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
with tf.device(device):
        # First conv layer: stride 1, 'SAME' padding (zero-padding of 2 for the 5x5 kernel)
        c1 = tf.nn.conv2d(x, conv_w1, [1], "SAME")
        c1b = tf.add(c1, conv_b1)
c1r = tf.nn.relu(c1b)
c2 = tf.nn.conv2d(c1r, conv_w2, [1], "SAME")
c2b = tf.add(c2, conv_b2)
c2r = tf.nn.relu(c2b)
c2f = flatten(c2r)
f = tf.matmul(c2f, fc_w)
scores = tf.add(f, fc_b)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
###Output
_____no_output_____
###Markdown
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we run the function on a batch of zeros just to make sure it doesn't crash and produces outputs of the correct shape.When you run this function, `scores_np` should have shape `(64, 10)`.
###Code
def three_layer_convnet_test():
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
print('scores_np has shape: ', scores.shape)
three_layer_convnet_test()
###Output
scores_np has shape: (64, 10)
###Markdown
Barebones TensorFlow: Training StepWe now define the `training_step` function, which performs a single training step. This will take three basic steps:1. Compute the loss2. Compute the gradient of the loss with respect to all network weights3. Make a weight update step using (stochastic) gradient descent.We need to use a few new TensorFlow functions to do all of this:- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`:https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/reduce_mean- For computing gradients of the loss with respect to the weights we'll use `tf.GradientTape` (useful for Eager execution): https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape- We'll mutate the weight values stored in a TensorFlow `Variable` using its `assign_sub` method ("sub" is for subtraction): https://www.tensorflow.org/api_docs/python/tf/assign_sub
###Code
def training_step(model_fn, x, y, params, learning_rate):
with tf.GradientTape() as tape:
scores = model_fn(x, params) # Forward pass of the model
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
total_loss = tf.reduce_mean(loss)
grad_params = tape.gradient(total_loss, params)
# Make a vanilla gradient descent step on all of the model parameters
# Manually update the weights using assign_sub()
for w, grad_w in zip(params, grad_params):
w.assign_sub(learning_rate * grad_w)
return total_loss
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
params = init_fn() # Initialize the model parameters
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data.
loss = training_step(model_fn, x_np, y_np, params, learning_rate)
# Periodically print the loss and check accuracy on the val set.
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss))
check_accuracy(val_dset, x_np, model_fn, params)
def check_accuracy(dset, x, model_fn, params):
"""
Check accuracy on a classification model, e.g. for validation.
Inputs:
- dset: A Dataset object against which to check accuracy
- x: A TensorFlow placeholder Tensor where input images should be fed
- model_fn: the Model we will be calling to make predictions on x
- params: parameters for the model_fn to work with
Returns: Nothing, but prints the accuracy of the model
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
scores_np = model_fn(x_batch, params).numpy()
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: InitializationWe'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.[1] He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
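Concretely, for a weight tensor with fan-in $n_{in}$, this method draws entries from $\mathcal{N}(0,\ 2/n_{in})$; the helper below implements this by scaling standard-normal samples by $\sqrt{2/n_{in}}$.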
###Code
def create_matrix_with_kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.keras.backend.random_normal(shape) * np.sqrt(2.0 / fan_in)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Train a Two-Layer NetworkWe are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.We just need to define a function to initialize the weights of the model, and call `train_part2`.Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.You don't need to tune any hyperparameters, but you should achieve validation accuracies above 40% after one epoch of training.
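As a tiny, hedged illustration (separate from the assignment code), a `tf.Variable` can be mutated in place, unlike an ordinary constant Tensor:
###Code
# Hedged sketch: in-place mutation of a tf.Variable (illustrative only).
v = tf.Variable(tf.zeros((2,)))
v.assign_sub(tf.constant([0.1, 0.2]))   # v now holds [-0.1, -0.2]
print(v.numpy())
###Output
_____no_output_____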
###Code
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns: A list of:
- w1: TensorFlow tf.Variable giving the weights for the first layer
- w2: TensorFlow tf.Variable giving the weights for the second layer
"""
hidden_layer_size = 4000
    w1 = tf.Variable(create_matrix_with_kaiming_normal((3 * 32 * 32, hidden_layer_size)))
    w2 = tf.Variable(create_matrix_with_kaiming_normal((hidden_layer_size, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
###Output
Iteration 0, loss = 2.7718
Got 118 / 1000 correct (11.80%)
Iteration 100, loss = 1.8125
Got 372 / 1000 correct (37.20%)
Iteration 200, loss = 1.4477
Got 396 / 1000 correct (39.60%)
Iteration 300, loss = 1.7683
Got 366 / 1000 correct (36.60%)
Iteration 400, loss = 1.7126
Got 426 / 1000 correct (42.60%)
Iteration 500, loss = 1.8523
Got 445 / 1000 correct (44.50%)
Iteration 600, loss = 1.8816
Got 434 / 1000 correct (43.40%)
Iteration 700, loss = 1.9445
Got 442 / 1000 correct (44.20%)
###Markdown
Barebones TensorFlow: Train a three-layer ConvNetWe will now use TensorFlow to train a three-layer ConvNet on CIFAR-10.You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 22. ReLU3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 14. ReLU5. Fully-connected layer (with bias) to compute scores for 10 classesYou don't need to do any hyperparameter tuning, but you should see validation accuracies above 43% after one epoch of training.
###Code
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow tf.Variable giving weights for the first conv layer
- conv_b1: TensorFlow tf.Variable giving biases for the first conv layer
- conv_w2: TensorFlow tf.Variable giving weights for the second conv layer
- conv_b2: TensorFlow tf.Variable giving biases for the second conv layer
- fc_w: TensorFlow tf.Variable giving weights for the fully-connected layer
- fc_b: TensorFlow tf.Variable giving biases for the fully-connected layer
"""
params = None
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
conv_w1 = tf.Variable(create_matrix_with_kaiming_normal((5, 5, 3, 32)))
conv_b1 = tf.Variable(tf.zeros((32)))
conv_w2 = tf.Variable(create_matrix_with_kaiming_normal((3, 3, 32, 16)))
conv_b2 = tf.Variable(tf.zeros((16)))
fc_w = tf.Variable(create_matrix_with_kaiming_normal((32 * 32 * 16, 10)))
fc_b = tf.Variable(tf.zeros((10)))
params = (conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
###Output
Iteration 0, loss = 2.7355
Got 107 / 1000 correct (10.70%)
Iteration 100, loss = 1.8522
Got 361 / 1000 correct (36.10%)
Iteration 200, loss = 1.4544
Got 407 / 1000 correct (40.70%)
Iteration 300, loss = 1.7427
Got 416 / 1000 correct (41.60%)
Iteration 400, loss = 1.6566
Got 450 / 1000 correct (45.00%)
Iteration 500, loss = 1.5731
Got 458 / 1000 correct (45.80%)
Iteration 600, loss = 1.5616
Got 467 / 1000 correct (46.70%)
Iteration 700, loss = 1.6404
Got 503 / 1000 correct (50.30%)
###Markdown
Part III: Keras Model Subclassing APIImplementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters. This was fine for a small network, but could quickly become unwieldy for a large complex model.Fortunately TensorFlow 2.0 provides higher-level APIs such as `tf.keras` which make it easy to build models out of modular, object-oriented layers. Further, TensorFlow 2.0 uses eager execution, which evaluates operations immediately, without explicitly constructing any computational graphs. This makes it easy to write and debug models, and reduces the boilerplate code.In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.keras.layers` package provides many common neural-network layers, like `tf.keras.layers.Dense` for fully-connected layers and `tf.keras.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super(YourModelName, self).__init__()` as the first line in your initializer!3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II. Keras Model Subclassing API: Two-Layer NetworkHere is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.initializers.VarianceScaling` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/initializers/VarianceScalingWe construct `tf.keras.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation='relu'` to the constructor; the second layer uses a softmax activation function. Finally, we use `tf.keras.layers.Flatten` to flatten the image input into a vector before it is passed to the first fully-connected layer.
###Code
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super(TwoLayerFC, self).__init__()
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.fc1 = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)
self.fc2 = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
def call(self, x, training=False):
x = self.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
x = tf.zeros((64, input_size))
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_TwoLayerFC()
###Output
(64, 10)
###Markdown
Keras Model Subclassing API: Three-Layer ConvNetNow it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:1. Convolutional layer with 5 x 5 kernels, with zero-padding of 22. ReLU nonlinearity3. Convolutional layer with 3 x 3 kernels, with zero-padding of 14. ReLU nonlinearity5. Fully-connected layer to give class scores6. Softmax nonlinearityYou should initialize the weights of your network using the same initialization method as was used in the two-layer network above.**Hint**: Refer to the documentation for `tf.keras.layers.Conv2D` and `tf.keras.layers.Dense`:https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Conv2Dhttps://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense
###Code
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super(ThreeLayerConvNet, self).__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.conv1 = tf.keras.layers.Conv2D(channel_1, (5, 5), (1,1),
padding = 'same',
activation='relu',
kernel_initializer=initializer)
self.conv2 = tf.keras.layers.Conv2D(channel_2, (3, 3), (1,1),
padding = 'same',
activation='relu',
kernel_initializer=initializer)
self.fc1 = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=False):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
x = self.conv1(x)
x = self.conv2(x)
x = self.flatten(x)
        scores = self.fc1(x)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
###Output
_____no_output_____
###Markdown
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
###Code
def test_ThreeLayerConvNet():
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
x = tf.zeros((64, 3, 32, 32))
scores = model(x)
print(scores.shape)
test_ThreeLayerConvNet()
###Output
(64, 10)
###Markdown
Keras Model Subclassing API: Eager TrainingWhile Keras models have a built-in training loop (via `model.fit`), sometimes you need more customization. Here's an example of a training loop implemented with eager execution.In particular, notice `tf.GradientTape`. Automatic differentiation is used in the backend for implementing backpropagation in frameworks like TensorFlow. During eager execution, `tf.GradientTape` is used to trace operations for computing gradients later. A particular `tf.GradientTape` can only compute one gradient; subsequent calls to tape will throw a runtime error. TensorFlow 2.0 ships with easy-to-use built-in metrics under the `tf.keras.metrics` module. Each metric is an object, and we can use `update_state()` to add observations and `reset_states()` to clear all observations. We can get the current result of a metric by calling `result()` on the metric object.
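As a small, hedged illustration (separate from the training loop below), a metric object accumulates observations until it is reset:
###Code
# Hedged sketch: how a tf.keras.metrics object accumulates observations (illustrative only).
acc = tf.keras.metrics.SparseCategoricalAccuracy()
acc.update_state([1, 2], [[0.1, 0.9, 0.0], [0.0, 0.2, 0.8]])
print(acc.result().numpy())   # 1.0 -- both argmax predictions match the labels
acc.reset_states()            # clear all accumulated observations
###Output
_____no_output_____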
###Code
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1, is_training=False):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
    Returns: Nothing, but prints progress during training
"""
with tf.device(device):
# Compute the loss like we did in Part II
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
model = model_init_fn()
optimizer = optimizer_init_fn()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
val_loss = tf.keras.metrics.Mean(name='val_loss')
val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='val_accuracy')
t = 0
for epoch in range(num_epochs):
# Reset the metrics - https://www.tensorflow.org/alpha/guide/migration_guide#new-style_metrics
train_loss.reset_states()
train_accuracy.reset_states()
for x_np, y_np in train_dset:
with tf.GradientTape() as tape:
# Use the model function to build the forward pass.
scores = model(x_np, training=is_training)
loss = loss_fn(y_np, scores)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Update the metrics
train_loss.update_state(loss)
train_accuracy.update_state(y_np, scores)
if t % print_every == 0:
val_loss.reset_states()
val_accuracy.reset_states()
for test_x, test_y in val_dset:
# During validation at end of epoch, training set to False
prediction = model(test_x, training=False)
t_loss = loss_fn(test_y, prediction)
val_loss.update_state(t_loss)
val_accuracy.update_state(test_y, prediction)
template = 'Iteration {}, Epoch {}, Loss: {}, Accuracy: {}, Val Loss: {}, Val Accuracy: {}'
print (template.format(t, epoch+1,
train_loss.result(),
train_accuracy.result()*100,
val_loss.result(),
val_accuracy.result()*100))
t += 1
return model
###Output
_____no_output_____
###Markdown
Keras Model Subclassing API: Train a Two-Layer NetworkWe can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.keras.optimizers.SGD` function; you can [read about it here](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD).You don't need to tune any hyperparameters here, but you should achieve validation accuracies above 40% after one epoch of training.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return TwoLayerFC(hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 3.2483625411987305, Accuracy: 7.8125, Val Loss: 3.1048901081085205, Val Accuracy: 9.800000190734863
Iteration 100, Epoch 1, Loss: 2.2455251216888428, Accuracy: 29.161510467529297, Val Loss: 1.883112907409668, Val Accuracy: 37.900001525878906
Iteration 200, Epoch 1, Loss: 2.0907742977142334, Accuracy: 32.22947692871094, Val Loss: 1.8729472160339355, Val Accuracy: 38.80000305175781
Iteration 300, Epoch 1, Loss: 2.0080342292785645, Accuracy: 34.05315399169922, Val Loss: 1.9259060621261597, Val Accuracy: 37.20000076293945
Iteration 400, Epoch 1, Loss: 1.9369709491729736, Accuracy: 35.94529342651367, Val Loss: 1.7247862815856934, Val Accuracy: 42.099998474121094
Iteration 500, Epoch 1, Loss: 1.8901138305664062, Accuracy: 37.019710540771484, Val Loss: 1.6591670513153076, Val Accuracy: 42.20000076293945
Iteration 600, Epoch 1, Loss: 1.858939528465271, Accuracy: 37.879573822021484, Val Loss: 1.6988977193832397, Val Accuracy: 42.0
Iteration 700, Epoch 1, Loss: 1.8310751914978027, Accuracy: 38.61670684814453, Val Loss: 1.6381797790527344, Val Accuracy: 46.20000076293945
###Markdown
Keras Model Subclassing API: Train a Three-Layer ConvNetHere you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.To train the model you should use gradient descent with Nesterov momentum 0.9. **HINT**: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGDYou don't need to perform any hyperparameter tuning, but you should achieve validation accuracies above 50% after training for one epoch.
###Code
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn():
model = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum = .9, nesterov = True)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
model34 = train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 2.8665101528167725, Accuracy: 7.8125, Val Loss: 3.5604350566864014, Val Accuracy: 9.300000190734863
Iteration 100, Epoch 1, Loss: 1.880402684211731, Accuracy: 34.26670837402344, Val Loss: 1.6414457559585571, Val Accuracy: 44.70000076293945
Iteration 200, Epoch 1, Loss: 1.7286632061004639, Accuracy: 39.42008590698242, Val Loss: 1.4922603368759155, Val Accuracy: 47.60000228881836
Iteration 300, Epoch 1, Loss: 1.6467211246490479, Accuracy: 42.08367919921875, Val Loss: 1.4390604496002197, Val Accuracy: 49.099998474121094
Iteration 400, Epoch 1, Loss: 1.579654574394226, Accuracy: 44.45137023925781, Val Loss: 1.3616528511047363, Val Accuracy: 52.20000076293945
Iteration 500, Epoch 1, Loss: 1.5306543111801147, Accuracy: 46.154563903808594, Val Loss: 1.3258088827133179, Val Accuracy: 54.79999923706055
Iteration 600, Epoch 1, Loss: 1.5000053644180298, Accuracy: 47.18177795410156, Val Loss: 1.281928539276123, Val Accuracy: 55.19999694824219
Iteration 700, Epoch 1, Loss: 1.472947597503662, Accuracy: 48.165565490722656, Val Loss: 1.2728898525238037, Val Accuracy: 57.099998474121094
###Markdown
Part IV: Keras Sequential APIIn Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.However for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` of the first layer in your model. Keras Sequential API: Two-Layer NetworkIn this subsection, we will rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
learning_rate = 1e-2
def model_init_fn():
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(hidden_layer_size, activation='relu',
kernel_initializer=initializer),
tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 3.1283693313598633, Accuracy: 12.5, Val Loss: 2.907970428466797, Val Accuracy: 13.40000057220459
Iteration 100, Epoch 1, Loss: 2.2596824169158936, Accuracy: 27.61448097229004, Val Loss: 1.8809293508529663, Val Accuracy: 39.0
Iteration 200, Epoch 1, Loss: 2.090951681137085, Accuracy: 31.895212173461914, Val Loss: 1.8405581712722778, Val Accuracy: 38.80000305175781
Iteration 300, Epoch 1, Loss: 2.0104644298553467, Accuracy: 33.85589599609375, Val Loss: 1.901405930519104, Val Accuracy: 36.599998474121094
Iteration 400, Epoch 1, Loss: 1.9390699863433838, Accuracy: 35.83618927001953, Val Loss: 1.7299988269805908, Val Accuracy: 41.900001525878906
Iteration 500, Epoch 1, Loss: 1.8952276706695557, Accuracy: 36.860652923583984, Val Loss: 1.6673927307128906, Val Accuracy: 42.39999771118164
Iteration 600, Epoch 1, Loss: 1.8639533519744873, Accuracy: 37.689788818359375, Val Loss: 1.682585597038269, Val Accuracy: 42.79999923706055
Iteration 700, Epoch 1, Loss: 1.837199330329895, Accuracy: 38.35145950317383, Val Loss: 1.6344739198684692, Val Accuracy: 42.599998474121094
###Markdown
Abstracting Away the Training LoopIn the previous examples, we used a customised training loop to train models (e.g. `train_part34`). Writing your own training loop is only required if you need more flexibility and control while training your model. Alternatively, you can use built-in APIs like `tf.keras.Model.fit()` and `tf.keras.Model.evaluate()` to train and evaluate a model. Also remember to configure your model for training by calling `tf.keras.Model.compile()`.You don't need to perform any hyperparameter tuning here, but you should see validation and test accuracies above 42% after training for one epoch.
###Code
model = model_init_fn()
#model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum = .9, nesterov = True),
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=2, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
###Output
Train on 49000 samples, validate on 1000 samples
Epoch 1/2
49000/49000 [==============================] - 7s 149us/sample - loss: 2.2677 - sparse_categorical_accuracy: 0.3791 - val_loss: 2.3114 - val_sparse_categorical_accuracy: 0.4140
Epoch 2/2
49000/49000 [==============================] - 7s 139us/sample - loss: 1.8612 - sparse_categorical_accuracy: 0.4630 - val_loss: 2.1996 - val_sparse_categorical_accuracy: 0.4240
10000/10000 [==============================] - 1s 118us/sample - loss: 2.1615 - sparse_categorical_accuracy: 0.4194
###Markdown
Keras Sequential API: Three-Layer ConvNetHere you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:1. Convolutional layer with 32 5x5 kernels, using zero padding of 22. ReLU nonlinearity3. Convolutional layer with 16 3x3 kernels, using zero padding of 14. ReLU nonlinearity5. Fully-connected layer giving class scores6. Softmax nonlinearityYou should initialize the weights of the model using a `tf.initializers.VarianceScaling` as above.You should train the model using Nesterov momentum 0.9.You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
###Code
def model_init_fn():
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
input_shape = (32, 32, 3)
num_classes = 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Conv2D(channel_1, (5, 5), (1,1),
input_shape=input_shape,
padding = 'same',
activation='relu',
kernel_initializer=initializer),
tf.keras.layers.Conv2D(channel_2, (3, 3), (1,1),
padding = 'same',
activation='relu',
kernel_initializer=initializer),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)
]
model = tf.keras.Sequential(layers)
return model
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum = .9, nesterov = True)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
model34 = train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 3.1202919483184814, Accuracy: 10.9375, Val Loss: 2.727996349334717, Val Accuracy: 10.300000190734863
Iteration 100, Epoch 1, Loss: 2.000793933868408, Accuracy: 30.136138916015625, Val Loss: 1.7997872829437256, Val Accuracy: 37.70000076293945
Iteration 200, Epoch 1, Loss: 1.8710850477218628, Accuracy: 34.95802307128906, Val Loss: 1.6573798656463623, Val Accuracy: 41.900001525878906
Iteration 300, Epoch 1, Loss: 1.7957477569580078, Accuracy: 37.2923583984375, Val Loss: 1.6201965808868408, Val Accuracy: 46.39999771118164
Iteration 400, Epoch 1, Loss: 1.7333133220672607, Accuracy: 39.4092903137207, Val Loss: 1.5384198427200317, Val Accuracy: 47.5
Iteration 500, Epoch 1, Loss: 1.6867010593414307, Accuracy: 40.90256881713867, Val Loss: 1.4892431497573853, Val Accuracy: 49.20000076293945
Iteration 600, Epoch 1, Loss: 1.6554102897644043, Accuracy: 41.997711181640625, Val Loss: 1.4652884006500244, Val Accuracy: 50.599998474121094
Iteration 700, Epoch 1, Loss: 1.6274172067642212, Accuracy: 43.056793212890625, Val Loss: 1.4231661558151245, Val Accuracy: 51.79999923706055
###Markdown
We will also train this model with the built-in training loop APIs provided by TensorFlow.
###Code
model = model_init_fn()
#model.compile(optimizer='sgd',
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
###Output
Train on 49000 samples, validate on 1000 samples
49000/49000 [==============================] - 7s 148us/sample - loss: 1.9793 - sparse_categorical_accuracy: 0.3033 - val_loss: 1.7637 - val_sparse_categorical_accuracy: 0.3940
10000/10000 [==============================] - 1s 108us/sample - loss: 1.7820 - sparse_categorical_accuracy: 0.3748
###Markdown
Part IV: Functional API Demonstration with a Two-Layer Network In the previous section, we saw how we can use `tf.keras.Sequential` to stack layers to quickly build simple models. But this comes at the cost of losing flexibility.Often we will have to write complex models that have non-sequential data flows: a layer can have **multiple inputs and/or outputs**, such as stacking the output of 2 previous layers together to feed as input to a third! (Some examples are residual connections and dense blocks.)In such cases, we can use Keras functional API to write models with complex topologies such as: 1. Multi-input models 2. Multi-output models 3. Models with shared layers (the same layer called several times) 4. Models with non-sequential data flows (e.g. residual connections)Writing a model with Functional API requires us to create a `tf.keras.Model` instance and explicitly write input tensors and output tensors for this model.
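As a small, hedged sketch (not needed for this part), here is the kind of non-sequential data flow -- a residual connection -- that the functional API can express but `tf.keras.Sequential` cannot; the shapes and layer widths below are arbitrary.
###Code
# Hedged sketch: a residual (skip) connection written with the Keras functional API.
# Shapes and layer widths are illustrative only.
res_inputs = tf.keras.Input(shape=(32, 32, 16))
h = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(res_inputs)
h = tf.keras.layers.Conv2D(16, 3, padding='same')(h)
res_out = tf.keras.layers.Add()([res_inputs, h])       # output = input + F(input)
res_out = tf.keras.layers.Activation('relu')(res_out)
res_block = tf.keras.Model(inputs=res_inputs, outputs=res_out)
print(res_block(tf.zeros((1, 32, 32, 16))).shape)      # (1, 32, 32, 16)
###Output
_____no_output_____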
###Code
def two_layer_fc_functional(input_shape, hidden_size, num_classes):
initializer = tf.initializers.VarianceScaling(scale=2.0)
inputs = tf.keras.Input(shape=input_shape)
flattened_inputs = tf.keras.layers.Flatten()(inputs)
fc1_output = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)(flattened_inputs)
scores = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)(fc1_output)
# Instantiate the model given inputs and outputs.
model = tf.keras.Model(inputs=inputs, outputs=scores)
return model
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
input_shape = (50,)
x = tf.zeros((64, input_size))
model = two_layer_fc_functional(input_shape, hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_two_layer_fc_functional()
###Output
(64, 10)
###Markdown
Keras Functional API: Train a Two-Layer NetworkYou can now train this two-layer network constructed using the functional API.You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
###Code
input_shape = (32, 32, 3)
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return two_layer_fc_functional(input_shape, hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Iteration 0, Epoch 1, Loss: 2.975888967514038, Accuracy: 9.375, Val Loss: 3.124917507171631, Val Accuracy: 11.0
Iteration 100, Epoch 1, Loss: 2.26572322845459, Accuracy: 28.078588485717773, Val Loss: 1.9556703567504883, Val Accuracy: 36.5
Iteration 200, Epoch 1, Loss: 2.0940442085266113, Accuracy: 32.03513717651367, Val Loss: 1.818089246749878, Val Accuracy: 39.70000076293945
Iteration 300, Epoch 1, Loss: 2.0151140689849854, Accuracy: 33.99086380004883, Val Loss: 1.9358595609664917, Val Accuracy: 36.70000076293945
Iteration 400, Epoch 1, Loss: 1.9442771673202515, Accuracy: 35.81281280517578, Val Loss: 1.7535817623138428, Val Accuracy: 41.10000228881836
Iteration 500, Epoch 1, Loss: 1.8974992036819458, Accuracy: 36.86376953125, Val Loss: 1.7121632099151611, Val Accuracy: 41.400001525878906
Iteration 600, Epoch 1, Loss: 1.8658170700073242, Accuracy: 37.80418014526367, Val Loss: 1.7267868518829346, Val Accuracy: 40.900001525878906
Iteration 700, Epoch 1, Loss: 1.8383498191833496, Accuracy: 38.61001968383789, Val Loss: 1.6557785272598267, Val Accuracy: 42.0
###Markdown
Part V: CIFAR-10 open-ended challengeIn this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the built-in train function, the `train_part34` function from above, or implement your own training loop.Describe what you did at the end of the notebook. Some things you can try:- **Filter size**: Above we used 5x5 and 3x3; is this optimal?- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?- **Pooling**: We didn't use any pooling above. Would this improve the model?- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout? NOTE: Batch Normalization / DropoutIf you are using Batch Normalization and Dropout, remember to pass `is_training=True` if you use the `train_part34()` function. BatchNorm and Dropout layers have different behaviors at training and inference time. `training` is a specific keyword argument reserved for this purpose in any `tf.keras.Model`'s `call()` function. Read more about this here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/BatchNormalization#methods and https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dropout#methods Tips for trainingFor each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind: - If the parameters are working well, you should see improvement within a few hundred iterations- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyondIf you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.- Model ensembles- Data augmentation- New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
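As one concrete illustration of the suggestions above, global average pooling can replace the final Flatten; this is a hedged sketch only (arbitrary layer widths, not the model trained below).
###Code
# Hedged sketch: global average pooling as an alternative to Flatten before the classifier.
# Layer widths are arbitrary; this model is not trained here.
gap_demo = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),   # (N, H, W, C) -> (N, C)
    tf.keras.layers.Dense(10, activation='softmax'),
])
print(gap_demo(tf.zeros((1, 32, 32, 3))).shape)   # (1, 10)
###Output
_____no_output_____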
###Code
class CustomConvNet(tf.keras.Model):
def __init__(self):
super(CustomConvNet, self).__init__()
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
reg_strength = 0.01
# Use scaling of 2 because of RELU
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.dropout0 = tf.keras.layers.Dropout(.5)
self.conv1 = tf.keras.layers.Conv2D(channel_1, (5, 5), (1,1),
padding = 'same',
#activation='none',
kernel_initializer=initializer,
kernel_regularizer= tf.keras.regularizers.l2(reg_strength),
bias_regularizer=tf.keras.regularizers.l2(reg_strength))
self.norm1 = tf.keras.layers.BatchNormalization(axis=-1)
self.active1 = tf.keras.layers.Activation("relu")
self.pool1 = tf.keras.layers.MaxPool2D()
self.conv2 = tf.keras.layers.Conv2D(channel_2, (3, 3), (1,1),
padding = 'same',
#activation='relu',
kernel_initializer=initializer,
kernel_regularizer= tf.keras.regularizers.l2(reg_strength),
bias_regularizer=tf.keras.regularizers.l2(reg_strength))
self.norm2 = tf.keras.layers.BatchNormalization(axis=-1)
self.active2 = tf.keras.layers.Activation("relu")
self.pool2 = tf.keras.layers.MaxPool2D()
self.conv3 = tf.keras.layers.Conv2D(channel_2, (3, 3), (1,1),
padding = 'same',
#activation='relu',
kernel_initializer=initializer,
kernel_regularizer= tf.keras.regularizers.l2(reg_strength),
bias_regularizer=tf.keras.regularizers.l2(reg_strength))
self.norm3 = tf.keras.layers.BatchNormalization(axis=-1)
self.active3 = tf.keras.layers.Activation("relu")
self.pool3 = tf.keras.layers.MaxPool2D()
self.flatten0 = tf.keras.layers.Flatten()
self.fc0 = tf.keras.layers.Dense(100, activation='relu',
kernel_initializer=initializer,
kernel_regularizer= tf.keras.regularizers.l2(reg_strength),
bias_regularizer=tf.keras.regularizers.l2(reg_strength))
self.fc1 = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer,
kernel_regularizer= tf.keras.regularizers.l2(reg_strength),
bias_regularizer=tf.keras.regularizers.l2(reg_strength))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
def call(self, input_tensor, training=False):
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
x = input_tensor
x = self.conv1(x)
x = self.norm1(x, training = training)
x = self.active1(x)
x = self.pool1(x)
x = self.conv2(x)
x = self.norm2(x, training = training)
x = self.active2(x)
x = self.pool2(x)
x = self.conv3(x)
x = self.norm3(x, training = training)
x = self.active3(x)
x = self.pool3(x)
x = self.flatten0(x)
x = self.fc0(x)
x = self.fc1(x)
return x
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
device = '/device:GPU:0' # Change this to a CPU/GPU as you wish!
# device = '/cpu:0' # Change this to a CPU/GPU as you wish!
print_every = 700
num_epochs = 10
model = CustomConvNet()
def model_init_fn():
return CustomConvNet()
def optimizer_init_fn():
learning_rate = 1e-3
#return tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum = .9, nesterov = True)
return tf.keras.optimizers.Adam(learning_rate)
train_part34(model_init_fn, optimizer_init_fn, num_epochs=num_epochs, is_training=True)
###Output
Iteration 0, Epoch 1, Loss: 3.0353410243988037, Accuracy: 14.0625, Val Loss: 4.090989112854004, Val Accuracy: 12.800000190734863
Iteration 700, Epoch 1, Loss: 1.503588318824768, Accuracy: 45.62678527832031, Val Loss: 1.31296706199646, Val Accuracy: 52.499996185302734
Iteration 1400, Epoch 2, Loss: 1.1449296474456787, Accuracy: 59.17322540283203, Val Loss: 1.1885653734207153, Val Accuracy: 58.499996185302734
Iteration 2100, Epoch 3, Loss: 1.012212872505188, Accuracy: 64.03229522705078, Val Loss: 1.0230681896209717, Val Accuracy: 63.900001525878906
Iteration 2800, Epoch 4, Loss: 0.9305073022842407, Accuracy: 67.0290756225586, Val Loss: 0.9436802268028259, Val Accuracy: 66.5
Iteration 3500, Epoch 5, Loss: 0.8777076005935669, Accuracy: 68.9716796875, Val Loss: 0.9642516374588013, Val Accuracy: 66.19999694824219
|
1A.ipynb | ###Markdown
Face RecognitionIn this assignment, you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf). Face recognition problems commonly fall into two categories: - **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. - **Face Recognition** - "who is this person?". For example, the video lecture showed a [face recognition video](https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person. **In this assignment, you will:**- Implement the triplet loss function- Use a pretrained model to map face images into 128-dimensional encodings- Use these encodings to perform face verification and face recognition Channels-first notation* In this exercise, we will be using a pre-trained model which represents ConvNet activations using a **"channels first"** convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. * In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. * Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. Updates If you were working on the notebook before this update...* The current notebook is version "3a".* You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* `triplet_loss`: Additional Hints added.* `verify`: Hints added.* `who_is_it`: corrected hints given in the comments.* Spelling and formatting updates for easier reading. Load packagesLet's load the required packages.
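Since the pre-trained model expects channels-first inputs, a batch stored in the usual channels-last layout can be rearranged with a transpose. A minimal numpy sketch (the batch here is a made-up placeholder):

```python
import numpy as np

# Hypothetical batch of 4 RGB face images in channels-last layout (m, n_H, n_W, n_C)
batch_last = np.zeros((4, 96, 96, 3))

# Move the channel axis to position 1 to get channels-first (m, n_C, n_H, n_W)
batch_first = np.transpose(batch_last, (0, 3, 1, 2))
print(batch_first.shape)  # (4, 3, 96, 96)
```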
###Code
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
###Output
Using TensorFlow backend.
###Markdown
0 - Naive Face VerificationIn Face Verification, you're given two images and you have to determine if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person! **Figure 1** * Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. * You'll see that rather than using the raw image, you can learn an encoding, $f(img)$. * By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person. 1 - Encoding face images into a 128-dimensional vector 1.1 - Using a ConvNet to compute encodingsThe FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning, let's load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks_v2.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook. This opens the file directory that contains the '.py' file). The key things you need to know are:- This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ - It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vectorRun the cell below to create the model for face images.
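(As an aside before building the model: here is a toy illustration of why the naive pixel-by-pixel comparison described above is fragile. The images and threshold are made up; the rest of the notebook uses the learned encoding instead.)

```python
import numpy as np

def naive_verify(img1, img2, threshold=5.0):
    # Compare raw pixels with an L2 distance; small lighting or pose changes
    # move every pixel, so this distance is not a reliable identity signal.
    dist = np.linalg.norm(img1.astype(np.float64) - img2.astype(np.float64))
    return dist, dist < threshold

# Two hypothetical 96x96 RGB images: the second is a slightly brightened copy
img_a = np.random.rand(96, 96, 3)
img_b = np.clip(img_a + 0.05, 0.0, 1.0)
print(naive_verify(img_a, img_b))  # large distance, so the "same face" is rejected
```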
###Code
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
###Output
Total Params: 3743280
###Markdown
** Expected Output **Total Params: 3743280 By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows: **Figure 2**: By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same personSo, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other. - The encodings of two images of different persons are very different.The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. **Figure 3**: In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) 1.2 - The Triplet LossFor an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.<!--We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).!-->Training will use triplets of images $(A, P, N)$: - A is an "Anchor" image--a picture of a person. - P is a "Positive" image--a picture of the same person as the Anchor image.- N is a "Negative" image--a picture of a different person than the Anchor image.These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\alpha$:$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$You would thus like to minimize the following "triplet cost":$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$Here, we are using the notation "$[z]_+$" to denote $max(z,0)$. Notes:- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.- $\alpha$ is called the margin. It is a hyperparameter that you pick manually. We will use $\alpha = 0.2$. Most implementations also rescale the encoding vectors to haven L2 norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that in this assignment.**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$3. 
Compute the full formula by taking the max with zero and summing over the training examples:$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$ Hints* Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.* For steps 1 and 2, you will sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$. * For step 4 you will sum over the training examples. Additional Hints* Recall that the square of the L2 norm is the sum of the squared differences: $||x - y||_{2}^{2} = \sum_{i=1}^{N}(x_{i} - y_{i})^{2}$* Note that the `anchor`, `positive` and `negative` encodings are of shape `(m,128)`, where m is the number of training examples and 128 is the number of elements used to encode a single example.* For steps 1 and 2, you will maintain the number of `m` training examples and sum along the 128 values of each encoding. [tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) has an `axis` parameter. This chooses along which axis the sums are applied. * Note that one way to choose the last axis in a tensor is to use negative indexing (`axis=-1`).* In step 4, when summing over training examples, the result will be a single scalar value.* For `tf.reduce_sum` to sum across all axes, keep the default value `axis=None`.
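As a quick numerical sanity check of formula (3), here is the same computation in plain numpy on made-up 3-dimensional "encodings" (the graded function below does this with TensorFlow ops):

```python
import numpy as np

alpha = 0.2
# Made-up encodings for 2 training examples; 3 dimensions instead of 128 for readability
A = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])  # anchors
P = np.array([[0.1, 0.0, 0.0], [1.0, 1.2, 1.0]])  # positives (close to anchors)
N = np.array([[1.0, 1.0, 0.0], [1.0, 1.1, 1.0]])  # negatives

pos_dist = np.sum((A - P) ** 2, axis=-1)  # ||f(A)-f(P)||^2 per example: [0.01, 0.04]
neg_dist = np.sum((A - N) ** 2, axis=-1)  # ||f(A)-f(N)||^2 per example: [2.0, 0.01]
loss = np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))
print(loss)  # ~0.23: only the second triplet violates the margin and contributes
```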
###Code
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive
pos_dist = tf.reduce_sum((anchor-positive)**2,axis=-1)
# Step 2: Compute the (encoding) distance between the anchor and the negative
neg_dist = tf.reduce_sum((anchor-negative)**2,axis=-1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = pos_dist+alpha-neg_dist
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss,0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
###Output
loss = 528.143
###Markdown
**Expected Output**: **loss** 528.143 2 - Loading the pre-trained modelFaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
###Code
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
###Output
_____no_output_____
###Markdown
Here are some examples of distances between the encodings between three individuals: **Figure 4**: Example of distance outputs between three individuals' encodingsLet's now use this model to perform face verification and face recognition! 3 - Applying the model You are building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.You'd like to build a **Face verification** system that gives access to the list of people who live or work there. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the entrance. The face recognition system then checks that they are who they claim to be. 3.1 - Face VerificationLet's build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding we use `img_to_encoding(image_path, model)`, which runs the forward propagation of the model on the specified image. Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
###Code
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
###Output
_____no_output_____
###Markdown
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:1. Compute the encoding of the image from `image_path`.2. Compute the distance between this encoding and the encoding of the identity image stored in the database.3. Open the door if the distance is less than 0.7, else do not open it.* As presented above, you should use the L2 distance [np.linalg.norm](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html). * (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.) Hints* `identity` is a string that is also a key in the `database` dictionary.* `img_to_encoding` has two parameters: the `image_path` and `model`.
###Code
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be an employee who works in the office.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path,model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding-database[identity])
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist<0.7:
print("It's " + str(identity) + ", welcome in!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
###Output
_____no_output_____
###Markdown
Younes is trying to enter the office and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
###Code
verify("images/camera_0.jpg", "younes", database, FRmodel)
###Output
It's younes, welcome in!
###Markdown
**Expected Output**: **It's younes, welcome in!** (0.65939283, True) Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. The camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
###Code
verify("images/camera_2.jpg", "kian", database, FRmodel)
###Output
It's not kian, please go away
###Markdown
**Expected Output**: **It's not kian, please go away** (0.86224014, False) 3.2 - Face RecognitionYour face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the office the next day and couldn't get in! To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them! You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as one of the inputs. **Exercise**: Implement `who_is_it()`. You will have to go through the following steps:1. Compute the target encoding of the image from image_path2. Find the encoding from the database that has smallest distance with the target encoding. - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding. - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`. - Compute the L2 distance between the target "encoding" and the current "encoding" from the database. - If this distance is less than the min_dist, then set `min_dist` to `dist`, and `identity` to `name`.
###Code
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the office by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path,model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current db_enc from the database. (≈ 1 line)
dist = np.linalg.norm(encoding-db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist<min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
###Output
_____no_output_____
###Markdown
Younes is at the front door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
###Code
who_is_it("images/camera_0.jpg", database, FRmodel)
###Output
it's younes, the distance is 0.659393
###Markdown
Autonomous driving - Car detectionWelcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242). **You will learn to**:- Use object detection on a car detection dataset- Deal with bounding boxes Updates If you were working on the notebook before this update...* The current notebook is version "3a".* You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* Clarified "YOLO" instructions preceding the code. * Added details about anchor boxes.* Added explanation of how score is calculated.* `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max.* `iou`: clarify instructions for finding the intersection.* `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity.* `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges.* `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather.* "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`.* `predict`: hint on calling sess.run.* Spelling, grammar, wording and formatting updates to improve clarity. Import librariesRun the following cell to load the packages and dependencies that you will find useful as you build the object detector!
###Code
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`. 1 - Problem StatementYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. Pictures taken from a car-mounted camera while driving around Silicon Valley. We thank [drive.ai](htps://www.drive.ai/) for providing this dataset.You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like. **Figure 1** : **Definition of a box** If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. 2 - YOLO "You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model details Inputs and outputs- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. Anchor Boxes* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85). EncodingLet's look in greater detail at what this encoding represents. **Figure 2** : **Encoding architecture for YOLO** If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.For simplicity, we will flatten the last two last dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425). **Figure 3** : **Flattening the last two last dimensions** Class scoreNow, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class. 
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$. **Figure 4** : **Find the class detected by each box** Example of figure 4* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1). * The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$. * The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$. * Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1". Visualizing classesHere's one way to visualize what YOLO is predicting on an image:- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).- Color that grid cell according to what object that grid cell considers the most likely.Doing this results in this picture: **Figure 5** : Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell. Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Visualizing bounding boxesAnother way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: **Figure 6** : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. Non-Max suppressionIn the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects. To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class).- Select only one box when several boxes overlap with each other and detect the same object. 2.2 - Filtering with a threshold on class scoresYou are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell. **Exercise**: Implement `yolo_filter_boxes()`.1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$). 
The following code may help you choose the right operator: 
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b  # shape of c will be (19*19, 5, 80)
```
This is an example of **broadcasting** (multiplying vectors of different sizes).
2. For each box, find: 
 - the index of the class with the maximum box score 
 - the corresponding box score 

**Useful references** 
* [Keras argmax](https://keras.io/backend/argmax) 
* [Keras max](https://keras.io/backend/max) 

**Additional Hints** 
* For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`. 
* Applying `max` normally collapses the axis for which the maximum is applied. `keepdims=False` is the default option, and allows that dimension to be removed. We don't need to keep the last dimension after applying the maximum here. 
* Even though the documentation shows `keras.backend.argmax`, use `keras.argmax`. Similarly, use `keras.max`.
3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep. 
4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. 

**Useful reference**: 
* [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask) 

**Additional Hints**: 
* For the `tf.boolean_mask`, we can keep the default `axis=None`.

**Reminder**: to call a Keras function, you should use `K.function(...)`.
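As a quick shape check for steps 1 and 2, here is a plain-numpy analogue (the graded function below uses the Keras backend ops named in the hints):

```python
import numpy as np

# Toy tensors with random values, flattened to 19*19 = 361 cells
box_confidence = np.random.rand(19 * 19, 5, 1)
box_class_probs = np.random.rand(19 * 19, 5, 80)

box_scores = box_confidence * box_class_probs    # broadcasting -> (361, 5, 80)
box_classes = np.argmax(box_scores, axis=-1)     # best class index per box -> (361, 5)
box_class_scores = np.max(box_scores, axis=-1)   # that class's score per box -> (361, 5)
mask = box_class_scores >= 0.6                   # boolean mask of boxes to keep
print(box_scores.shape, box_classes.shape, box_class_scores.shape, int(mask.sum()))
```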
###Code
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_class_probs*box_confidence
### END CODE HERE ###
# Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores,axis=-1)
box_class_scores = K.max(box_scores,axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores>=threshold
### END CODE HERE ###
# Step 4: Apply the mask to box_class_scores, boxes and box_classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores,filtering_mask)
boxes = tf.boolean_mask(boxes,filtering_mask)
classes = tf.boolean_mask(box_classes,filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
###Output
scores[2] = 10.7506
boxes[2] = [ 8.42653275 3.27136683 -0.5313437 -4.94137383]
classes[2] = 7
scores.shape = (?,)
boxes.shape = (?, 4)
classes.shape = (?,)
###Markdown
**Expected Output**: **scores[2]** 10.7506 **boxes[2]** [ 8.42653275 3.27136683 -0.5313437 -4.94137383] **classes[2]** 7 **scores.shape** (?,) **boxes.shape** (?, 4) **classes.shape** (?,) **Note** In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative. 2.3 - Non-max suppression Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). **Figure 7** : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. **Figure 8** : Definition of "Intersection over Union". **Exercise**: Implement iou(). Some hints:- In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, we move to the right. As y increases, we move down.- For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection).- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. (Since $(x_1,y_1)$ is the top left and $x_2,y_2$ are the bottom right, these differences should be non-negative.- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$: - Feel free to draw some examples on paper to clarify this conceptually. - The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom. - The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top. - The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero). - The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.**Additional Hints**- `xi1` = **max**imum of the x1 coordinates of the two boxes- `yi1` = **max**imum of the y1 coordinates of the two boxes- `xi2` = **min**imum of the x2 coordinates of the two boxes- `yi2` = **min**imum of the y2 coordinates of the two boxes- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
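For a concrete sanity check against the first test case below: with box1 = (2, 1, 4, 3) and box2 = (1, 2, 3, 4), the intersection corners are (xi1, yi1) = (max(2,1), max(1,2)) = (2, 2) and (xi2, yi2) = (min(4,3), min(3,4)) = (3, 3), so the intersection area is 1 * 1 = 1. Each box has area 2 * 2 = 4, the union area is 4 + 4 - 1 = 7, and the IoU is 1/7, roughly 0.1429.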
###Code
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box_1_y2)
box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
"""
# Assign variable names to coordinates for clarity
(box1_x1, box1_y1, box1_x2, box1_y2) = box1
(box2_x1, box2_y1, box2_x2, box2_y2) = box2
# Calculate the (yi1, xi1, yi2, xi2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 7 lines)
xi1 = max(box1[0],box2[0])
yi1 = max(box1[1],box2[1])
xi2 = min(box1[2],box2[2])
yi2 = min(box1[3],box2[3])
inter_width = max(xi2-xi1,0)
inter_height = max(yi2-yi1,0)
inter_area = inter_width*inter_height
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[2]-box1[0])*(box1[3]-box1[1])
box2_area = (box2[2]-box2[0])*(box2[3]-box2[1])
union_area = box1_area+box2_area-inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area/union_area
### END CODE HERE ###
return iou
## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))
## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))
## Test case 3: boxes intersect at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))
## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
###Output
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
###Markdown
**Expected Output**:
```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```
YOLO non-max suppression
You are now ready to implement non-max suppression. The key steps are: 
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.

This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.

**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):

**Reference documentation** 
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
    boxes,
    scores,
    max_output_size,
    iou_threshold=0.5,
    name=None
)
```
Note that in the version of tensorflow used here, there is no parameter `score_threshold` (it's shown in the documentation for the latest version) so trying to set this value will result in an error message: *got an unexpected keyword argument 'score_threshold'.*
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather) Even though the documentation shows `tf.keras.backend.gather()`, you can use `keras.gather()`. 
```
keras.gather(
    reference,
    indices
)
```
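To make the three steps above concrete, here is a tiny pure-Python sketch of greedy NMS that reuses the `iou()` function implemented earlier (the boxes and scores are made up); the graded function below uses TensorFlow's built-in op instead:

```python
import numpy as np

def greedy_nms(boxes, scores, iou_threshold=0.5, max_boxes=10):
    # Repeatedly keep the highest-scoring remaining box, then drop every other
    # remaining box whose IoU with it is >= iou_threshold.
    order = list(np.argsort(scores)[::-1])  # indices sorted by descending score
    keep = []
    while order and len(keep) < max_boxes:
        best = order.pop(0)
        keep.append(int(best))
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Three made-up (x1, y1, x2, y2) boxes: the first two overlap heavily
toy_boxes = [(1, 1, 3, 3), (1.1, 1.1, 3.1, 3.1), (5, 5, 7, 7)]
toy_scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms(toy_boxes, toy_scores))  # [0, 2]: box 1 is suppressed by box 0
```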
###Code
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes,scores,max_boxes,iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores,nms_indices)
boxes = K.gather(boxes,nms_indices)
classes = K.gather(classes,nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 6.9384
boxes[2] = [-5.299932 3.13798141 4.45036697 0.95942086]
classes[2] = -2.24527
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 6.9384 **boxes[2]** [-5.299932 3.13798141 4.45036697 0.95942086] **classes[2]** -2.24527 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) 2.4 Wrapping up the filteringIt's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. **Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided): ```pythonboxes = yolo_boxes_to_corners(box_xy, box_wh) ```which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes````pythonboxes = scale_boxes(boxes, image_shape)```YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. Don't worry about these two functions; we'll show you where they need to be called.
###Code
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates)
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence,boxes,box_class_probs,score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with
# maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores,boxes,classes,max_boxes,iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 138.791
boxes[2] = [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
classes[2] = 54
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 138.791 **boxes[2]** [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141] **classes[2]** 54 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) Summary for YOLO:- Input image (608, 608, 3)- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output. - After flattening the last two dimensions, the output is a volume of shape (19, 19, 425): - Each cell in a 19x19 grid over the input image gives 425 numbers. - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture. - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect- You then select only few boxes based on: - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes- This gives you YOLO's final output. 3 - Test YOLO pre-trained model on images In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.
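(As a quick aside before running the model: it can help to make the 425 = 5 x 85 bookkeeping from the summary above concrete. The sketch below uses a random array and the (p_c, b_x, b_y, b_h, b_w, classes) layout described earlier; the real layout is handled by the provided `yolo_head` helper.)

```python
import numpy as np

# A fake CNN output volume: 19x19 grid cells, 425 numbers per cell
cnn_out = np.random.randn(19, 19, 425)

# 425 = 5 anchor boxes x 85 numbers, and 85 = 5 (p_c, b_x, b_y, b_h, b_w) + 80 class probs
per_box = cnn_out.reshape(19, 19, 5, 85)
p_c = per_box[..., 0:1]         # objectness, shape (19, 19, 5, 1)
box_coords = per_box[..., 1:5]  # (b_x, b_y, b_h, b_w), shape (19, 19, 5, 4)
class_probs = per_box[..., 5:]  # 80 class probabilities, shape (19, 19, 5, 80)
print(p_c.shape, box_coords.shape, class_probs.shape)
```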
###Code
sess = K.get_session()
###Output
_____no_output_____
###Markdown
3.1 - Defining classes, anchors and image shape.* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. * We have gathered the information on the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". * We'll read class names and anchors from text files.* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
###Code
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
###Output
_____no_output_____
###Markdown
3.2 - Loading a pre-trained model* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. * You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5". * These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.Run the cell below to load the model from this file.
###Code
yolo_model = load_model("model_data/yolo.h5")
###Output
/opt/conda/lib/python3.6/site-packages/keras/models.py:251: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
###Markdown
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
###Code
yolo_model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 608, 608, 3) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 608, 608, 32) 864 input_1[0][0]
____________________________________________________________________________________________________
batch_normalization_1 (BatchNorm (None, 608, 608, 32) 128 conv2d_1[0][0]
____________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 608, 608, 32) 0 batch_normalization_1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 304, 304, 32) 0 leaky_re_lu_1[0][0]
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 304, 304, 64) 18432 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
batch_normalization_2 (BatchNorm (None, 304, 304, 64) 256 conv2d_2[0][0]
____________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 304, 304, 64) 0 batch_normalization_2[0][0]
____________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 152, 152, 64) 0 leaky_re_lu_2[0][0]
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 152, 152, 128) 73728 max_pooling2d_2[0][0]
____________________________________________________________________________________________________
batch_normalization_3 (BatchNorm (None, 152, 152, 128) 512 conv2d_3[0][0]
____________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_3[0][0]
____________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 152, 152, 64) 8192 leaky_re_lu_3[0][0]
____________________________________________________________________________________________________
batch_normalization_4 (BatchNorm (None, 152, 152, 64) 256 conv2d_4[0][0]
____________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 152, 152, 64) 0 batch_normalization_4[0][0]
____________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 152, 152, 128) 73728 leaky_re_lu_4[0][0]
____________________________________________________________________________________________________
batch_normalization_5 (BatchNorm (None, 152, 152, 128) 512 conv2d_5[0][0]
____________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_5[0][0]
____________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 76, 76, 128) 0 leaky_re_lu_5[0][0]
____________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 76, 76, 256) 294912 max_pooling2d_3[0][0]
____________________________________________________________________________________________________
batch_normalization_6 (BatchNorm (None, 76, 76, 256) 1024 conv2d_6[0][0]
____________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_6[0][0]
____________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 76, 76, 128) 32768 leaky_re_lu_6[0][0]
____________________________________________________________________________________________________
batch_normalization_7 (BatchNorm (None, 76, 76, 128) 512 conv2d_7[0][0]
____________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 76, 76, 128) 0 batch_normalization_7[0][0]
____________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 76, 76, 256) 294912 leaky_re_lu_7[0][0]
____________________________________________________________________________________________________
batch_normalization_8 (BatchNorm (None, 76, 76, 256) 1024 conv2d_8[0][0]
____________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_8[0][0]
____________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 38, 38, 256) 0 leaky_re_lu_8[0][0]
____________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 38, 38, 512) 1179648 max_pooling2d_4[0][0]
____________________________________________________________________________________________________
batch_normalization_9 (BatchNorm (None, 38, 38, 512) 2048 conv2d_9[0][0]
____________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_9[0][0]
____________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_9[0][0]
____________________________________________________________________________________________________
batch_normalization_10 (BatchNor (None, 38, 38, 256) 1024 conv2d_10[0][0]
____________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_10[0][0]
____________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_10[0][0]
____________________________________________________________________________________________________
batch_normalization_11 (BatchNor (None, 38, 38, 512) 2048 conv2d_11[0][0]
____________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_11[0][0]
____________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_11[0][0]
____________________________________________________________________________________________________
batch_normalization_12 (BatchNor (None, 38, 38, 256) 1024 conv2d_12[0][0]
____________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_12[0][0]
____________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_12[0][0]
____________________________________________________________________________________________________
batch_normalization_13 (BatchNor (None, 38, 38, 512) 2048 conv2d_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_13[0][0]
____________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 19, 19, 512) 0 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 19, 19, 1024) 4718592 max_pooling2d_5[0][0]
____________________________________________________________________________________________________
batch_normalization_14 (BatchNor (None, 19, 19, 1024) 4096 conv2d_14[0][0]
____________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_14[0][0]
____________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_14[0][0]
____________________________________________________________________________________________________
batch_normalization_15 (BatchNor (None, 19, 19, 512) 2048 conv2d_15[0][0]
____________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_15[0][0]
____________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_15[0][0]
____________________________________________________________________________________________________
batch_normalization_16 (BatchNor (None, 19, 19, 1024) 4096 conv2d_16[0][0]
____________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_16[0][0]
____________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_16[0][0]
____________________________________________________________________________________________________
batch_normalization_17 (BatchNor (None, 19, 19, 512) 2048 conv2d_17[0][0]
____________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_17[0][0]
____________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_17[0][0]
____________________________________________________________________________________________________
batch_normalization_18 (BatchNor (None, 19, 19, 1024) 4096 conv2d_18[0][0]
____________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_18[0][0]
____________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_18[0][0]
____________________________________________________________________________________________________
batch_normalization_19 (BatchNor (None, 19, 19, 1024) 4096 conv2d_19[0][0]
____________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 38, 38, 64) 32768 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_19[0][0]
____________________________________________________________________________________________________
batch_normalization_21 (BatchNor (None, 38, 38, 64) 256 conv2d_21[0][0]
____________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_19[0][0]
____________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 38, 38, 64) 0 batch_normalization_21[0][0]
____________________________________________________________________________________________________
batch_normalization_20 (BatchNor (None, 19, 19, 1024) 4096 conv2d_20[0][0]
____________________________________________________________________________________________________
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0]
____________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0]
leaky_re_lu_20[0][0]
____________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0]
____________________________________________________________________________________________________
batch_normalization_22 (BatchNor (None, 19, 19, 1024) 4096 conv2d_22[0][0]
____________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0]
____________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0]
====================================================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672
____________________________________________________________________________________________________
###Markdown
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2). 3.3 - Convert output of the model to usable bounding box tensorsThe output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace in this path 'yad2k/models/keras_yolo.py'.
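For intuition, here is a minimal sketch of the reshape at the heart of that conversion; this is an illustrative assumption, not the yad2k implementation. The last conv layer above emits 425 channels per grid cell, which is consistent with 5 anchor boxes × 85 values each (4 box coordinates, 1 objectness score, and what is presumably 80 class scores). `yolo_head` additionally applies the activations and anchor/grid scaling needed to turn these raw values into usable box tensors.

```python
import numpy as np

# Hypothetical raw output of conv2d_23 for a single image: (19, 19, 425)
raw = np.random.randn(19, 19, 425).astype(np.float32)

# Split the 425 channels into 5 anchors x 85 values per grid cell
per_anchor = raw.reshape(19, 19, 5, 85)

box_offsets  = per_anchor[..., 0:4]   # raw (x, y, w, h) values, shape (19, 19, 5, 4)
objectness   = per_anchor[..., 4:5]   # confidence score,        shape (19, 19, 5, 1)
class_scores = per_anchor[..., 5:]    # class scores,            shape (19, 19, 5, 80)

print(box_offsets.shape, objectness.shape, class_scores.shape)
```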
###Code
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
###Output
_____no_output_____
###Markdown
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function. 3.4 - Filtering boxes`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
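Its implementation is not reproduced in this notebook, but the core idea is a per-box score threshold followed by non-max suppression. The sketch below illustrates that idea only; the function name, thresholds, and shapes are assumptions for illustration, not the `yolo_eval` you implemented earlier.

```python
import tensorflow as tf

def filter_boxes_sketch(boxes, box_scores, score_threshold=0.6,
                        iou_threshold=0.5, max_boxes=10):
    """Illustrative only: threshold per-class scores, then run non-max suppression.

    boxes      -- tensor of shape (num_boxes, 4)
    box_scores -- tensor of shape (num_boxes, num_classes)
    """
    box_classes = tf.argmax(box_scores, axis=-1)             # best class per box
    box_class_scores = tf.reduce_max(box_scores, axis=-1)    # score of that class

    mask = box_class_scores >= score_threshold               # keep only confident boxes
    scores  = tf.boolean_mask(box_class_scores, mask)
    boxes   = tf.boolean_mask(boxes, mask)
    classes = tf.boolean_mask(box_classes, mask)

    keep = tf.image.non_max_suppression(boxes, scores, max_boxes,
                                        iou_threshold=iou_threshold)
    return tf.gather(scores, keep), tf.gather(boxes, keep), tf.gather(classes, keep)
```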
###Code
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
###Output
_____no_output_____
###Markdown
 3.5 - Run the graph on an imageLet the fun begin. You have created a graph that can be summarized as follows:1. yolo_model.input is given to `yolo_model`. The model is used to compute the output yolo_model.output 2. yolo_model.output is processed by `yolo_head`. It gives you yolo_outputs 3. yolo_outputs goes through a filtering function, `yolo_eval`. It outputs your predictions: scores, boxes, classes **Exercise**: Implement predict() which runs the graph to test YOLO on an image.You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.The code below also uses the following function:```pythonimage, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))```which outputs:- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.- image_data: a numpy-array representing the image. This will be the input to the CNN.**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}. Hint: Using the TensorFlow Session object* Recall that above, we called `K.get_session()` and saved the Session object in `sess`.* To evaluate a list of tensors, we call `sess.run()` like this:```sess.run(fetches=[tensor1,tensor2,tensor3], feed_dict={yolo_model.input: the_input_variable, K.learning_phase():0 })```* Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
###Code
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
    out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes],
                                                  feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
###Output
_____no_output_____
###Markdown
Run the following cell on the "test.jpg" image to verify that your function is correct.
###Code
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
###Output
_____no_output_____ |
Python Absolute Beginner/Module_1_2.3_Absolute_Beginner.ipynb | ###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
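Before working through the cells below, here is one extra illustration of how several of these tests combine on a single value; the variable `user_code` and its contents are made up for this sketch.

```python
user_code = "B2B"                  # made-up example value

print(user_code.isalpha())         # False - it contains a digit
print(user_code.isalnum())         # True  - letters and digits only
print(user_code.isupper())         # True  - every letter is uppercase
print(user_code.startswith("B"))   # True
```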
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"are spaces and puncuation alphabetical?".isalpha() #false becasue of the spaces
# [ ] initailize variable alpha_test with input
alpha_test = input("What is your name?: ")
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
What is your name?: cam
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stormy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test=input("Enter a what you wish to test for alphabetical: ")
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
Enter a what you wish to test for alphabetical: Blah
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
person_name = "Albert Einstein"
print('"Education is what remains after one has forgotten what one has learned in school" - ' + person_name)
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha ()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Aplhabetical?".isalpha()
# [ ] initailize variable alpha_test with input
alpha_test = 'Iamhungry'
# [ ] use .isalpha() on string variable alpha_test
print(alpha_test, 'is a', alpha_test.isalpha(), 'statement')
###Output
Iamhungry is a True statement
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print ("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print ('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphaberical?".isalpha()
# [ ] initailize variable alpha_test with input
fir_name = input("enter first name: ")
# [ ] use .isalpha() on string variable alpha_test
fir_name.isalpha()
###Output
enter first name: Tim
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Eirnstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Eirnstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input()
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
3
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has
#forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test="alphabet test"
# [ ] use .isalpha() on string variable alpha_test
print("alphabet test:",alpha_test, "is all:" ,alpha_test.isalpha())
###Output
alphabet test: alphabet test is all: False
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input("Enter name: ")
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
Enter name: Jacob
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
_____no_output_____
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Output
_____no_output_____
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
# [ ] initialize variable alpha_test with input
# [ ] use .isalpha() on string variable alpha_test
###Output
_____no_output_____
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("'It's time to save your code'")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
'It's time to save your code'
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("'wheres the hw'")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"education is what remain after one has forgotten what one has learned"')
###Output
"education is what remain after one has forgotten what one has learned"
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alpha".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation alpha?".isalpha
# [ ] initailize variable alpha_test with input
alpha_test=input()
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
Aaron
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input("Enter an int, float, or string: ")
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
Enter an int, float, or string: Yes
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('Albert states, "Education is what remains after one has forgotten what one has learned in school"')
###Output
Albert states, "Education is what remains after one has forgotten what one has learned in school"
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".isupper())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation alphabetical".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input()
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
_____no_output_____
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what ones has learned in school" - Albert Einstien')
###Output
"Education is what remains after one has forgotten what ones has learned in school" - Albert Einstien
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input("Enter a word or phrase: ")
# [ ] use .isalpha() on string variable alpha_test
print("All alphabetical =", alpha_test.isalpha())
###Output
Enter a word or phrase: hello world
All alphabetical = False
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" -Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" -Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input()
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
alpha
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input()
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
test
###Markdown
1-2.3 Intro Python Strings: input, testing, formatting- input() - gathering user input - print() formatting - **Quotes inside strings** - **Boolean string tests methods** - String formatting methods- Formatting string input()- Boolean `in` keyword -----> Student will be able to- gather, store and use string `input()` - format `print()` output - **test string characteristics** - format string output- search for a string in a string Concepts quotes inside strings []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}]) single quotes in double quotesto display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`** double quotes in single quotesto display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`** Examples
###Code
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
###Output
It's time to save your code
I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"
###Markdown
Task 1 - **[ ] `print()`** strings that display double and single quotation marks
###Code
# [ ] using a print statement, display the text: Where's the homework?
print("Where's the homework?")
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
print('"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein')
###Output
"Education is what remains after one has forgotten what one has learned in school" - Albert Einstein
###Markdown
>**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.* Concepts Boolean string tests[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])methods- .isalpha()- .isalnum()- .istitle()- .isdigit()- .islower()- .isupper()- .startswith()type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.>```python"Hello".isapha()```out:[ ] `True` `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False Examples Boolean String Tests- **[ ] review and run code in each cell**
###Code
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
###Output
_____no_output_____
###Markdown
Task 2: multi-part test strings with **`.isalpha()`**
###Code
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphabetical?".isalpha()
# [ ] initialize variable alpha_test with input
alpha_test = input()
# [ ] use .isalpha() on string variable alpha_test
alpha_test.isalpha()
###Output
123
|
Iterables_and_iterators.ipynb | ###Markdown
IterableIn Python, an Iterable is any object that implements the **Iterable protocol**. The requirement to comply with this protocol is to implement the `__iter__()` method and return an **Iterator**. IteratorAll **Iterators** must implement the **Iterable protocol** in addition to implementing the `__next__()` method to retrieve elements from the **Iterator**. When there are no more elements available, `next()` will raise the `StopIteration` exception. As an alternative, the **Iterator protocol** can be implemented with only the `__getitem__()` method, which receives an index as a parameter. It must return values for consecutive integers, starting from zero, as indexes. When the index is out of range of the data, it will raise the `IndexError` exception.
###Code
class ExampleIterator:
def __init__(self, data):
self._index = 0
self._data = data
def __iter__(self):
return self
def __next__(self):
if self._index >= len(self._data):
raise StopIteration()
result = self._data[self._index]
self._index += 1
return result
class ExampleIterable:
def __init__(self, data):
self._data = data
def __iter__(self):
return ExampleIterator(self._data)
sequence = ExampleIterable([1, 2, 3, 4, 5])
for i in sequence:
print(i)
[i * 2 for i in sequence]
class AlternateIterable:
def __init__(self, data):
self._data = data
def __getitem__(self, index):
return self._data[index]
sequence = AlternateIterable([1, 2, 3, 4, 5])
for i in sequence:
print(i)
[i * 3 for i in AlternateIterable([1, 2, 3, 4, 5])]
###Output
_____no_output_____
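###Markdown
A quick check of the protocol by hand (a sketch that reuses the classes defined above): calling `next()` on an exhausted iterator raises `StopIteration`.
###Code
# sketch: drive ExampleIterable manually instead of with a for loop
it = iter(ExampleIterable([1, 2]))
print(next(it))  # 1
print(next(it))  # 2
try:
    next(it)  # no elements left
except StopIteration:
    print('iterator exhausted')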
###Markdown
`iter()` functionThis function is used to implement the **Iterator protocol** for the **callable** that is passed as a parameter.`iter(callable, sentinel)`* callable: an object that can be called with zero arguments* sentinel: the value used to stop the iteration. This is often used for creating **infinite sequences** from existing functions
###Code
from datetime import datetime as dt
it = iter(dt.now, None)
for i in range(10):
print(next(it))
###Output
2019-11-29 08:34:49.384085
2019-11-29 08:34:49.384311
2019-11-29 08:34:49.384351
2019-11-29 08:34:49.384387
2019-11-29 08:34:49.384422
2019-11-29 08:34:49.384456
2019-11-29 08:34:49.384490
2019-11-29 08:34:49.384525
2019-11-29 08:34:49.384559
2019-11-29 08:34:49.384594
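###Markdown
The sentinel form stops as soon as the callable returns the sentinel value. A minimal sketch (the `counter` callable below is illustrative, not part of the lesson):
###Code
# sketch: iteration ends when counter() returns the sentinel value 5
import itertools
counter = itertools.count().__next__  # a zero-argument callable yielding 0, 1, 2, ...
for value in iter(counter, 5):
    print(value)  # prints 0 through 4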
###Markdown
Building-block functionsThe idea behind these functions was developed in the **functional programming** paradigm. All these functions implement the **Iterator protocol**. MapApply a function to every element in a sequence. It returns a new sequence with the result. In Python 3 `Map` has a **lazy** implementation, but in Python 2 it has an **eager** implementation. It can accept **any number** of input sequences. The number of input sequences **must match** the number of function arguments
###Code
def combine(size, colour, animal):
return '{}, {}, {}'.format(size, colour, animal)
sizes = ['small', 'medium', 'large']
colours = ['red','yellow','blue']
animals = ['dog','cat','duck']
list(map(combine, sizes, colours, animals))
import itertools
def combine2(quantity, size, colour, animal):
return '{}, {}, {}, {}'.format(quantity, size, colour, animal)
list(map(combine2, itertools.count(), sizes, colours, animals))
###Output
_____no_output_____
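###Markdown
Because `map` is lazy in Python 3, nothing is computed until the result is consumed. A small sketch reusing the lists defined above:
###Code
# sketch: map returns an iterator; elements are produced on demand
lazy = map(combine, sizes, colours, animals)  # no work done yet
print(next(lazy))  # first combination, computed on demand
print(list(lazy))  # the remaining combinations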
###Markdown
FilterApply a function to each element in a sequence. It returns a new sequence with the elements for which the function returns `True`. In Python 3 `Filter` has a **lazy** implementation, but in Python 2 it has an **eager** implementation. It can only accept a **single** input sequence, and the function has to receive a single parameter. Passing `None` as the first parameter to `Filter` will return a new sequence without the elements that evaluate to `False`
###Code
list(filter(lambda x: x > 0, [1, 4, 7, -6, 0, 2, -7, 10, -55]))
list(filter(None, [1, 4, 7, -6, 0, 2, -7, 10, -55]))
list(filter(None, [0, 1, False, True, [], [1,2,3], '', 'hello']))
###Output
_____no_output_____
###Markdown
ReduceThe `Reduce` function is part of the `functools` module. It repeatedly applies a function to the elements of a sequence, reducing them to a single value. The function provided to `Reduce` receives two parameters and must return another value, which will be the first parameter in the following call to the function. If you pass a sequence with **only one element** to the `Reduce` function, the function provided **will never be called** and it will return the only element in the sequence as a result. The initial value of the accumulator can be passed as a third parameter to the `Reduce` function. Conceptually it is just added at the beginning of the sequence.
###Code
from functools import reduce
import operator
reduce(operator.add, [1, 2, 3, 4, 5])
def mul(x, y):
print('mul {} * {}'.format(x, y))
return x * y
reduce(mul, [1, 2, 3, 4, 5])
reduce(mul, [])
reduce(mul, [1])
reduce(mul, [1, 2, 3], 0)
###Output
mul 0 * 1
mul 0 * 2
mul 0 * 3
|
notebook/MAP-to-NWB.ipynb | ###Markdown
Session and Subject
###Code
session = (experiment.Session & session_key).fetch1()
dj.ERD(lab.Subject) - 1 + 1
subj = (lab.Subject * lab.CompleteGenotype & session_key).fetch1()
session
subj
# -- NWB file - a NWB2.0 file for each session
nwbfile = NWBFile(
session_description='',
identifier='_'.join(
[str(session['subject_id']), str(session['session']),
session['session_date'].strftime('%Y-%m-%d_%H-%M-%S')]),
session_start_time=datetime.combine(session['session_date'], datetime.min.time()),
file_create_date=datetime.now(tzlocal()),
experimenter=session['username'],
institution='Janelia Research Campus')
# -- subject
nwbfile.subject = pynwb.file.Subject(
subject_id=str(subj['subject_id']),
description=f'animal_source: {subj["animal_source"]}',
genotype=subj['complete_genotype'],
sex=subj['sex'])
###Output
_____no_output_____
###Markdown
Units Electrode Group
###Code
dj.ERD(ephys.ElectrodeGroup) - 1 + 1
probe = (ephys.ElectrodeGroup & session_key).fetch1()
probe
device = nwbfile.create_device(name = probe['probe_part_no'])
electrode_group = nwbfile.create_electrode_group(
name='',
description = 'N/A',
device = device,
location = '')
for chn in (ephys.ElectrodeGroup.Electrode & probe).fetch(as_dict=True):
nwbfile.add_electrode(id=chn['electrode'],
group=electrode_group,
filtering='Bandpass filtered 300-6K Hz',
imp=-1.,
location=electrode_group.location,
x=np.nan,
y=np.nan,
z=np.nan)
###Output
_____no_output_____
###Markdown
Units
###Code
dj.ERD(ephys.Unit) + 1 - 1
units_view = ((ephys.Unit & session_key).aggr(
ephys.UnitComment, *ephys.Unit.heading.names, comments='GROUP_CONCAT(unit_comment, "; ")', keep_all_rows=True).aggr(
ephys.UnitCellType, "comments", *ephys.Unit.heading.names, cell_type='GROUP_CONCAT(cell_type, "; ")', keep_all_rows=True))
units_view
additional_unit_columns = [{'name': tag,
'description': re.sub('\s+:|\s+', ' ', re.search(
f'(?<={tag})(.*)', str(units_view.heading)).group())}
for tag in units_view.heading.names
if tag not in units_view.proj().heading.names + ['spike_times', 'waveform', 'electrode_group', 'electrode']]
additional_unit_columns
units_view.heading
def select(d, *keys):
return dict((k, v) for k, v in d.items() if k in keys)
for unit in units_view.fetch(as_dict=True):
# make an electrode table region (which electrode(s) is this unit coming from)
nwbfile.add_unit(id=unit['unit'],
electrode_group=electrode_group,
**unit,
waveform_mean=unit['waveform'])
unit = units_view.fetch(as_dict=True)[0]
unit['waveform'].shape
###Output
_____no_output_____ |
code/chap21.ipynb | ###Markdown
Modeling and Simulation in PythonChapter 21Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
With air resistance Next, we'll add air resistance using the [drag equation](https://en.wikipedia.org/wiki/Drag_equation). I'll start by getting the units we'll need from Pint.
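The drag equation referenced above gives the magnitude of the drag force as $$F_d = \frac{1}{2}\,\rho\, v^2\, C_d\, A$$ where $\rho$ is the air density, $v$ the speed, $C_d$ the drag coefficient, and $A$ the cross-sectional area; this is the form the slope function below implements.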
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
###Output
_____no_output_____
###Markdown
Now I'll create a `Params` object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
###Code
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 2.5e-3 * kg,
diameter = 19e-3 * m,
rho = 1.2 * kg/m**3,
v_term = 18 * m / s)
###Output
_____no_output_____
###Markdown
Now we can pass the `Params` object to `make_system`, which computes some additional parameters and defines `init`. `make_system` uses the given diameter to compute `area` and the given `v_term` to compute the drag coefficient `C_d`.
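Where does that formula come from? At terminal velocity the drag force balances the weight, $\frac{1}{2}\,\rho\, A\, C_d\, v_{term}^2 = m g$, so solving for the drag coefficient gives $$C_d = \frac{2 m g}{\rho\, A\, v_{term}^2}$$ which is exactly the expression `make_system` uses.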
###Code
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
unpack(params)
area = np.pi * (diameter/2)**2
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=height, v=v_init)
t_end = 30 * s
return System(params, area=area, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
Here's the slope function, including acceleration due to gravity and drag.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object
returns: derivatives of y and v
"""
y, v = state
unpack(system)
f_drag = rho * v**2 * C_d * area / 2
a_drag = f_drag / mass
dydt = v
dvdt = -g + a_drag
return dydt, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial conditions.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
We can use the same event function as in the previous chapter.
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results.
###Code
results
###Output
_____no_output_____
###Markdown
The final height is close to 0, as expected. Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors. We can get the flight time from `results`.
###Code
t_sidewalk = get_last_label(results)
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig02.pdf')
###Output
_____no_output_____
###Markdown
And velocity as a function of time:
###Code
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
###Output
_____no_output_____
###Markdown
From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant. **Exercise:** Run the simulation with an initial velocity, downward, that exceeds the penny's terminal velocity. Hint: You can create a new `Params` object based on an existing one, like this:`params = Params(params, v_init = -30 * m / s)`What do you expect to happen? Plot velocity and position as a function of time, and see if they are consistent with your prediction.
###Code
# Solution goes here
plot_position(results)
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose we drop a quarter from the Empire State Building and find that its flight time is 19.1 seconds. Use this measurement to estimate the terminal velocity.1. You can get the relevant dimensions of a quarter from https://en.wikipedia.org/wiki/Quarter_(United_States_coin).2. Create a `Params` object with the system parameters. We don't know `v_term`, so we'll start with the initial guess `v_term = 18 * m / s`.3. Use `make_system` to create a `System` object.4. Call `run_ode_solver` to simulate the system. How does the flight time of the simulation compare to the measurement?5. Try a few different values of `v_term` and see if you can get the simulated flight time close to 19.1 seconds.6. Optionally, write an error function and use `fsolve` to improve your estimate.7. Use your best estimate of `v_term` to compute `C_d`.Note: I fabricated the observed flight time, so don't take the results of this exercise too seriously.
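One possible shape for the optional error function in step 6 (a sketch under these assumptions, not the book's solution; `params` would hold the quarter's parameters):
```python
# sketch only: difference between simulated flight time and the observed 19.1 s
def flight_time_error(v_term, params):
    system = make_system(Params(params, v_term=v_term))
    results, details = run_ode_solver(system, slope_func, events=event_func)
    return get_last_label(results) - 19.1  # discrepancy in seconds

# v_term_est = fsolve(flight_time_error, 18 * m / s, params)
```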
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Bungee jumping Suppose you want to set the world record for the highest "bungee dunk", [as shown in this video](https://www.youtube.com/watch?v=UBf7WC19lpw). Since the record is 70 m, let's design a jump for 80 m.We'll make the following modeling assumptions:1. Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.2. Until the cord is fully extended, it applies no force to the jumper. It turns out this might not be a good assumption; we will revisit it.3. After the cord is fully extended, it obeys [Hooke's Law](https://en.wikipedia.org/wiki/Hooke%27s_law); that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.4. The jumper is subject to drag force proportional to the square of their velocity, opposite to their direction of motion.Our objective is to choose the length of the cord, `L`, and its spring constant, `k`, so that the jumper falls all the way to the tea cup, but no farther! First I'll create a `Params` object to contain the quantities we'll need:1. Let's assume that the jumper's mass is 75 kg.2. With a terminal velocity of 60 m/s.3. The length of the bungee cord is `L = 25 m`.4. The spring constant of the cord is `k = 40 N / m` when the cord is stretched, and 0 when it's compressed.
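Under assumption 3, with $d = y_{attach} - y$ the distance fallen, the force of the cord is $$F_{spring} = \begin{cases} 0 & d \le L \\ k\,(d - L) & d > L \end{cases}$$ which is what the `spring_force` function below computes.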
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(y_attach = 80 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
L = 25 * m,
k = 40 * N / m)
###Output
_____no_output_____
###Markdown
Now here's a version of `make_system` that takes a `Params` object as a parameter.`make_system` uses the given value of `v_term` to compute the drag coefficient `C_d`.
###Code
def make_system(params):
"""Makes a System object for the given params.
params: Params object
returns: System object
"""
unpack(params)
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=y_attach, v=v_init)
t_end = 20 * s
return System(params, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
`spring_force` computes the force of the cord on the jumper:
###Code
def spring_force(y, system):
"""Computes the force of the bungee cord on the jumper:
y: height of the jumper
    Uses these variables from system:
y_attach: height of the attachment point
L: resting length of the cord
k: spring constant of the cord
returns: force in N
"""
unpack(system)
distance_fallen = y_attach - y
if distance_fallen <= L:
return 0 * N
extension = distance_fallen - L
f_spring = k * extension
return f_spring
###Output
_____no_output_____
###Markdown
The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
###Code
spring_force(80*m, system)
spring_force(55*m, system)
spring_force(54*m, system)
###Output
_____no_output_____
###Markdown
`drag_force` computes drag as a function of velocity:
###Code
def drag_force(v, system):
"""Computes drag force in the opposite direction of `v`.
v: velocity
system: System object
returns: drag force
"""
unpack(system)
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
###Output
_____no_output_____
###Markdown
Here's the drag force at 60 meters per second.
###Code
v = -60 * m/s
f_drag = drag_force(v, system)
###Output
_____no_output_____
###Markdown
Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
###Code
a_drag = f_drag / system.mass
###Output
_____no_output_____
###Markdown
Now here's the slope function:
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
"""
y, v = state
unpack(system)
a_drag = drag_force(v, system) / mass
a_spring = spring_force(y, system) / mass
dvdt = -g + a_drag + a_spring
return v, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial params.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.3*s)
details
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
plot_position(results)
###Output
_____no_output_____
###Markdown
After reaching the lowest point, the jumper springs back to almost 70 m, and oscillates several times. That looks like more oscillation than we expect from an actual jump, which suggests that there is some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate. But since we are primarily interested in the initial descent, the model might be good enough for now. We can use `min` to find the lowest point:
###Code
min(results.y)
###Output
_____no_output_____
###Markdown
At the lowest point, the jumper is still too high, so we'll need to increase `L` or decrease `k`. Here's velocity as a function of time:
###Code
plot_velocity(results)
subplot(1, 2, 1)
plot_position(results)
subplot(1, 2, 2)
plot_velocity(results)
savefig('figs/chap09-fig03.pdf')
###Output
_____no_output_____
###Markdown
Although we compute acceleration inside the slope function, we don't get acceleration as a result from `run_ode_solver`. We can approximate it by computing the numerical derivative of `results.v`:
###Code
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
###Output
_____no_output_____
###Markdown
And we can compute the maximum acceleration the jumper experiences:
###Code
max_acceleration = max(a) * m/s**2
###Output
_____no_output_____
###Markdown
Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
###Code
max_acceleration / g
###Output
_____no_output_____
###Markdown
Under the hoodThe gradient function in `modsim.py` adapts the NumPy function of the same name so it works with `Series` objects.
###Code
%psource gradient
###Output
_____no_output_____
###Markdown
Solving for lengthAssuming that `k` is fixed, let's find the length `L` that makes the minimum altitude of the jumper exactly 0. The metric we are interested in is the lowest point of the first oscillation. For both efficiency and accuracy, it is better to stop the simulation when we reach this point, rather than run past it and then compute the minimum. Here's an event function that stops the simulation when velocity is 0.
###Code
def event_func(state, t, system):
"""Return velocity.
"""
y, v = state
return v
###Output
_____no_output_____
###Markdown
As usual, we should test it with the initial conditions.
###Code
event_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And we see that we have a problem. Since the event function returns 0 under the initial conditions, the simulation would stop immediately. We can solve that problem by specifying the direction of the event function:
###Code
event_func.direction = +1
###Output
_____no_output_____
###Markdown
When direction is positive, it only stops the simulation if the velocity is 0 and increasing, which is what we want. Now we can test it and confirm that it stops at the bottom of the jump.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
plot_position(results)
min(results.y)
###Output
_____no_output_____
###Markdown
**Exercise:** Write an error function that takes `L` and `params` as arguments, simulates a bungee jump, and returns the lowest point. Test the error function with a guess of 25 m and confirm that the return value is about 5 meters. Use `fsolve` with your error function to find the value of `L` that yields a perfect bungee dunk. Run a simulation with the result from `fsolve` and confirm that it works.
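One possible shape for the error function (a sketch, not the official solution; it reuses `make_system`, `slope_func`, and the `event_func` defined above):
```python
# sketch only: lowest point of the jump as a function of cord length L
def height_error(L, params):
    system = make_system(Params(params, L=L))
    results, details = run_ode_solver(system, slope_func,
                                      events=event_func, max_step=0.3*s)
    return min(results.y)

# height_error(25 * m, params)                   # expect roughly 5 m
# best_L = fsolve(height_error, 25 * m, params)  # length for a perfect dunk
```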
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Optional exercise:** Search for the combination of length and spring constant that yields minimum height 0 while minimizing peak acceleration.
###Code
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 21Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
With air resistance Next, we'll add air resistance using the [drag equation](https://en.wikipedia.org/wiki/Drag_equation). I'll start by getting the units we'll need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
###Output
_____no_output_____
###Markdown
Now I'll create a `Params` object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
###Code
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 2.5e-3 * kg,
diameter = 19e-3 * m,
rho = 1.2 * kg/m**3,
v_term = 18 * m / s)
###Output
_____no_output_____
###Markdown
Now we can pass the `Params` object to `make_system`, which computes some additional parameters and defines `init`. `make_system` uses the given diameter to compute `area` and the given `v_term` to compute the drag coefficient `C_d`.
###Code
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
unpack(params)
area = np.pi * (diameter/2)**2
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=height, v=v_init)
t_end = 30 * s
return System(params, area=area, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
Here's the slope function, including acceleration due to gravity and drag.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object
returns: derivatives of y and v
"""
y, v = state
unpack(system)
f_drag = rho * v**2 * C_d * area / 2
a_drag = f_drag / mass
dydt = v
dvdt = -g + a_drag
return dydt, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial conditions.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
We can use the same event function as in the previous chapter.
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results.
###Code
results
###Output
_____no_output_____
###Markdown
The final height is close to 0, as expected.Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors.We can get the flight time from `results`.
###Code
t_sidewalk = get_last_label(results)
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig02.pdf')
###Output
_____no_output_____
###Markdown
And velocity as a function of time:
###Code
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
###Output
_____no_output_____
###Markdown
From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant. **Exercise:** Run the simulation with an initial velocity, downward, that exceeds the penny's terminal velocity. Hint: You can create a new `Params` object based on an existing one, like this:`params = Params(params, v_init = -30 * m / s)`What do you expect to happen? Plot velocity and position as a function of time, and see if they are consistent with your prediction.
###Code
# Solution goes here
plot_position(results)
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose we drop a quarter from the Empire State Building and find that its flight time is 19.1 seconds. Use this measurement to estimate the terminal velocity.1. You can get the relevant dimensions of a quarter from https://en.wikipedia.org/wiki/Quarter_(United_States_coin).2. Create a `Params` object with the system parameters. We don't know `v_term`, so we'll start with the initial guess `v_term = 18 * m / s`.3. Use `make_system` to create a `System` object.4. Call `run_ode_solver` to simulate the system. How does the flight time of the simulation compare to the measurement?5. Try a few different values of `v_term` and see if you can get the simulated flight time close to 19.1 seconds.6. Optionally, write an error function and use `fsolve` to improve your estimate.7. Use your best estimate of `v_term` to compute `C_d`.Note: I fabricated the observed flight time, so don't take the results of this exercise too seriously.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Bungee jumping Suppose you want to set the world record for the highest "bungee dunk", [as shown in this video](https://www.youtube.com/watch?v=UBf7WC19lpw). Since the record is 70 m, let's design a jump for 80 m.We'll make the following modeling assumptions:1. Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.2. Until the cord is fully extended, it applies no force to the jumper. It turns out this might not be a good assumption; we will revisit it.3. After the cord is fully extended, it obeys [Hooke's Law](https://en.wikipedia.org/wiki/Hooke%27s_law); that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.4. The jumper is subject to drag force proportional to the square of their velocity, in the direction opposite to their motion.Our objective is to choose the length of the cord, `L`, and its spring constant, `k`, so that the jumper falls all the way to the tea cup, but no farther! First I'll create a `Params` object to contain the quantities we'll need:1. Let's assume that the jumper's mass is 75 kg.2. The jumper's terminal velocity is 60 m/s.3. The length of the bungee cord is `L = 25 m`.4. The spring constant of the cord is `k = 40 N / m` when the cord is stretched, and 0 when it's compressed.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(y_attach = 80 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
L = 25 * m,
k = 40 * N / m)
###Output
_____no_output_____
###Markdown
Now here's a version of `make_system` that takes a `Params` object as a parameter.`make_system` uses the given value of `v_term` to compute the drag coefficient `C_d`.
###Code
def make_system(params):
"""Makes a System object for the given params.
params: Params object
returns: System object
"""
unpack(params)
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=y_attach, v=v_init)
t_end = 20 * s
return System(params, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
`spring_force` computes the force of the cord on the jumper:
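In symbols, with $d = y_{attach} - y$ the distance fallen, the force is $k\,(d - L)$ when $d > L$, and 0 otherwise: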
###Code
def spring_force(y, system):
"""Computes the force of the bungee cord on the jumper:
y: height of the jumper
    Uses these variables from system:
y_attach: height of the attachment point
L: resting length of the cord
k: spring constant of the cord
returns: force in N
"""
unpack(system)
distance_fallen = y_attach - y
if distance_fallen <= L:
return 0 * N
extension = distance_fallen - L
f_spring = k * extension
return f_spring
###Output
_____no_output_____
###Markdown
The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
###Code
spring_force(80*m, system)
spring_force(55*m, system)
spring_force(54*m, system)
###Output
_____no_output_____
###Markdown
`drag_force` computes drag as a function of velocity:
###Code
def drag_force(v, system):
"""Computes drag force in the opposite direction of `v`.
v: velocity
system: System object
returns: drag force
"""
unpack(system)
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
###Output
_____no_output_____
###Markdown
Here's the drag force at 60 meters per second.
###Code
v = -60 * m/s
f_drag = drag_force(v, system)
###Output
_____no_output_____
###Markdown
Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
###Code
a_drag = f_drag / system.mass
###Output
_____no_output_____
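###Markdown
As a quick check, the ratio of this drag deceleration to `g` should be very close to 1 (a small sanity check using values already defined above):
###Code
# At terminal velocity, drag deceleration balances gravity, so this ratio should be ~1
a_drag / system.g
###Output
_____no_output_____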
###Markdown
Now here's the slope function:
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
"""
y, v = state
unpack(system)
a_drag = drag_force(v, system) / mass
a_spring = spring_force(y, system) / mass
dvdt = -g + a_drag + a_spring
return v, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial params.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.3*s)
details
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
plot_position(results)
###Output
_____no_output_____
###Markdown
After reaching the lowest point, the jumper springs back to almost 70 m, and oscillates several times. That looks like more oscillation than we expect from an actual jump, which suggests that there is some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate.But since we are primarily interested in the initial descent, the model might be good enough for now.We can use `min` to find the lowest point:
###Code
min(results.y)
###Output
_____no_output_____
###Markdown
At the lowest point, the jumper is still too high, so we'll need to increase `L` or decrease `k`. Here's velocity as a function of time:
###Code
plot_velocity(results)
subplot(1, 2, 1)
plot_position(results)
subplot(1, 2, 2)
plot_velocity(results)
savefig('figs/chap09-fig03.pdf')
###Output
_____no_output_____
###Markdown
Although we compute acceleration inside the slope function, we don't get acceleration as a result from `run_ode_solver`.We can approximate it by computing the numerical derivative of the velocities in `results.v`:
###Code
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
###Output
_____no_output_____
###Markdown
And we can compute the maximum acceleration the jumper experiences:
###Code
max_acceleration = max(a) * m/s**2
###Output
_____no_output_____
###Markdown
Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
###Code
max_acceleration / g
###Output
_____no_output_____
###Markdown
Under the hoodThe gradient function in `modsim.py` adapts the NumPy function of the same name so it works with `Series` objects.
###Code
%psource gradient
###Output
_____no_output_____
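###Markdown
Rather than reading the source, here is a minimal sketch of what such an adapter might look like (an assumption about the idea, not the actual `modsim.py` code): call `np.gradient` on the underlying values, using the index as the sample points, and wrap the result back into a `Series` with the same index.
###Code
# Hypothetical sketch (not the real modsim implementation)
import numpy as np
import pandas as pd

def gradient_sketch(series, **options):
    """Approximate the derivative of a Series, preserving its index."""
    a = np.gradient(series.values, series.index, **options)
    return pd.Series(a, series.index)
###Output
_____no_output_____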
###Markdown
Solving for lengthAssuming that `k` is fixed, let's find the length `L` that makes the minimum altitude of the jumper exactly 0. The metric we are interested in is the lowest point of the first oscillation. For both efficiency and accuracy, it is better to stop the simulation when we reach this point, rather than run past it and then compute the minimum.Here's an event function that stops the simulation when velocity is 0.
###Code
def event_func(state, t, system):
"""Return velocity.
"""
y, v = state
return v
###Output
_____no_output_____
###Markdown
As usual, we should test it with the initial conditions.
###Code
event_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And we see that we have a problem. Since the event function returns 0 under the initial conditions, the simulation would stop immediately. We can solve that problem by specifying the direction of the event function:
###Code
event_func.direction = +1
###Output
_____no_output_____
###Markdown
When direction is positive, it only stops the simulation if the velocity is 0 and increasing, which is what we want. Now we can test it and confirm that it stops at the bottom of the jump.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
plot_position(results)
min(results.y)
###Output
_____no_output_____
###Markdown
**Exercise:** Write an error function that takes `L` and `params` as arguments, simulates a bungee jump, and returns the lowest point.Test the error function with a guess of 25 m and confirm that the return value is about 5 meters.Use `fsolve` with your error function to find the value of `L` that yields a perfect bungee dunk.Run a simulation with the result from `fsolve` and confirm that it works.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Optional exercise:** Search for the combination of length and spring constant that yields minimum height 0 while minimizing peak acceleration.
###Code
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 21Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
With air resistance Next we'll add air resistance using the [drag equation](https://en.wikipedia.org/wiki/Drag_equation) I'll start by getting the units we'll need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
###Output
_____no_output_____
###Markdown
Now I'll create a `Params` object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
###Code
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 2.5e-3 * kg,
diameter = 19e-3 * m,
rho = 1.2 * kg/m**3,
v_term = 18 * m / s)
###Output
_____no_output_____
###Markdown
Now we can pass the `Params` object to `make_system`, which computes some additional parameters and defines `init`.`make_system` uses the given `diameter` to compute `area` and the given `v_term` to compute the drag coefficient `C_d`.
###Code
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
unpack(params)
area = np.pi * (diameter/2)**2
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=height, v=v_init)
t_end = 30 * s
return System(params, area=area, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
Here's the slope function, including acceleration due to gravity and drag.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object
returns: derivatives of y and v
"""
y, v = state
unpack(system)
f_drag = rho * v**2 * C_d * area / 2
a_drag = f_drag / mass
dydt = v
dvdt = -g + a_drag
return dydt, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial conditions.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
We can use the same event function as in the previous chapter.
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results.
###Code
results
###Output
_____no_output_____
###Markdown
The final height is close to 0, as expected.Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors.We can get the flight time from `results`.
###Code
t_sidewalk = get_last_label(results)
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig02.pdf')
###Output
Saving figure to file figs/chap09-fig02.pdf
###Markdown
And velocity as a function of time:
###Code
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
###Output
_____no_output_____
###Markdown
From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant. **Exercise:** Run the simulation with an initial velocity, downward, that exceeds the penny's terminal velocity. Hint: You can create a new `Params` object based on an existing one, like this:`params = Params(params, v_init = -30 * m / s)`What do you expect to happen? Plot velocity and position as a function of time, and see if they are consistent with your prediction.
###Code
# Solution goes here
plot_position(results)
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose we drop a quarter from the Empire State Building and find that its flight time is 19.1 seconds. Use this measurement to estimate the terminal velocity.1. You can get the relevant dimensions of a quarter from https://en.wikipedia.org/wiki/Quarter_(United_States_coin).2. Create a `Params` object with the system parameters. We don't know `v_term`, so we'll start with the initial guess `v_term = 18 * m / s`.3. Use `make_system` to create a `System` object.4. Call `run_ode_solver` to simulate the system. How does the flight time of the simulation compare to the measurement?5. Try a few different values of `v_term` and see if you can get the simulated flight time close to 19.1 seconds.6. Optionally, write an error function and use `fsolve` to improve your estimate.7. Use your best estimate of `v_term` to compute `C_d`.Note: I fabricated the observed flight time, so don't take the results of this exercise too seriously.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Bungee jumping Suppose you want to set the world record for the highest "bungee dunk", [as shown in this video](https://www.youtube.com/watch?v=UBf7WC19lpw). Since the record is 70 m, let's design a jump for 80 m.We'll make the following modeling assumptions:1. Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.2. Until the cord is fully extended, it applies no force to the jumper. It turns out this might not be a good assumption; we will revisit it.3. After the cord is fully extended, it obeys [Hooke's Law](https://en.wikipedia.org/wiki/Hooke%27s_law); that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.4. The jumper is subject to drag force proportional to the square of their velocity, in the direction opposite to their motion.Our objective is to choose the length of the cord, `L`, and its spring constant, `k`, so that the jumper falls all the way to the tea cup, but no farther! First I'll create a `Params` object to contain the quantities we'll need:1. Let's assume that the jumper's mass is 75 kg.2. The jumper's terminal velocity is 60 m/s.3. The length of the bungee cord is `L = 25 m`.4. The spring constant of the cord is `k = 40 N / m` when the cord is stretched, and 0 when it's compressed.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(y_attach = 80 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
L = 25 * m,
k = 40 * N / m)
###Output
_____no_output_____
###Markdown
Now here's a version of `make_system` that takes a `Params` object as a parameter.`make_system` uses the given value of `v_term` to compute the drag coefficient `C_d`.
###Code
def make_system(params):
"""Makes a System object for the given params.
params: Params object
returns: System object
"""
unpack(params)
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=y_attach, v=v_init)
t_end = 20 * s
return System(params, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
`spring_force` computes the force of the cord on the jumper:
###Code
def spring_force(y, system):
"""Computes the force of the bungee cord on the jumper:
y: height of the jumper
    Uses these variables from system:
y_attach: height of the attachment point
L: resting length of the cord
k: spring constant of the cord
returns: force in N
"""
unpack(system)
distance_fallen = y_attach - y
if distance_fallen <= L:
return 0 * N
extension = distance_fallen - L
f_spring = k * extension
return f_spring
###Output
_____no_output_____
###Markdown
The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
###Code
spring_force(80*m, system)
spring_force(55*m, system)
spring_force(54*m, system)
###Output
_____no_output_____
###Markdown
`drag_force` computes drag as a function of velocity:
###Code
def drag_force(v, system):
"""Computes drag force in the opposite direction of `v`.
v: velocity
system: System object
returns: drag force
"""
unpack(system)
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
###Output
_____no_output_____
###Markdown
Here's the drag force at 60 meters per second.
###Code
v = -60 * m/s
f_drag = drag_force(v, system)
###Output
_____no_output_____
###Markdown
Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
###Code
a_drag = f_drag / system.mass
###Output
_____no_output_____
###Markdown
Now here's the slope function:
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
"""
y, v = state
unpack(system)
a_drag = drag_force(v, system) / mass
a_spring = spring_force(y, system) / mass
dvdt = -g + a_drag + a_spring
return v, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial params.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.3*s)
details
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
plot_position(results)
###Output
_____no_output_____
###Markdown
After reaching the lowest point, the jumper springs back to almost 70 m, and oscillates several times. That looks like more oscillation than we expect from an actual jump, which suggests that there is some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate.But since we are primarily interested in the initial descent, the model might be good enough for now.We can use `min` to find the lowest point:
###Code
min(results.y)
###Output
_____no_output_____
###Markdown
At the lowest point, the jumper is still too high, so we'll need to increase `L` or decrease `k`. Here's velocity as a function of time:
###Code
plot_velocity(results)
subplot(1, 2, 1)
plot_position(results)
subplot(1, 2, 2)
plot_velocity(results)
savefig('figs/chap09-fig03.pdf')
###Output
_____no_output_____
###Markdown
Although we compute acceleration inside the slope function, we don't get acceleration as a result from `run_ode_solver`.We can approximate it by computing the numerical derivative of the velocities in `results.v`:
###Code
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
###Output
_____no_output_____
###Markdown
And we can compute the maximum acceleration the jumper experiences:
###Code
max_acceleration = max(a) * m/s**2
###Output
_____no_output_____
###Markdown
Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
###Code
max_acceleration / g
###Output
_____no_output_____
###Markdown
Under the hoodThe gradient function in `modsim.py` adapts the NumPy function of the same name so it works with `Series` objects.
###Code
%psource gradient
###Output
_____no_output_____
###Markdown
Solving for lengthAssuming that `k` is fixed, let's find the length `L` that makes the minimum altitude of the jumper exactly 0. The metric we are interested in is the lowest point of the first oscillation. For both efficiency and accuracy, it is better to stop the simulation when we reach this point, rather than run past it and then compute the minimum.Here's an event function that stops the simulation when velocity is 0.
###Code
def event_func(state, t, system):
"""Return velocity.
"""
y, v = state
return v
###Output
_____no_output_____
###Markdown
As usual, we should test it with the initial conditions.
###Code
event_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And we see that we have a problem. Since the event function returns 0 under the initial conditions, the simulation would stop immediately. We can solve that problem by specifying the direction of the event function:
###Code
event_func.direction = +1
###Output
_____no_output_____
###Markdown
When direction is positive, it only stops the simulation if the velocity is 0 and increasing, which is what we want. Now we can test it and confirm that it stops at the bottom of the jump.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
plot_position(results)
min(results.y)
###Output
_____no_output_____
###Markdown
**Exercise:** Write an error function that takes `L` and `params` as arguments, simulates a bungee jump, and returns the lowest point.Test the error function with a guess of 25 m and confirm that the return value is about 5 meters.Use `fsolve` with your error function to find the value of `L` that yields a perfect bungee dunk.Run a simulation with the result from `fsolve` and confirm that it works.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Optional exercise:** Search for the combination of length and spring constant that yields minimum height 0 while minimizing peak acceleration.
###Code
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 21Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
With air resistance Next we'll add air resistance using the [drag equation](https://en.wikipedia.org/wiki/Drag_equation) I'll start by getting the units we'll need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
###Output
_____no_output_____
###Markdown
Now I'll create a `Params` object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
###Code
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 2.5e-3 * kg,
diameter = 19e-3 * m,
rho = 1.2 * kg/m**3,
v_term = 18 * m / s)
###Output
_____no_output_____
###Markdown
Now we can pass the `Params` object to `make_system`, which computes some additional parameters and defines `init`.`make_system` uses the given `diameter` to compute `area` and the given `v_term` to compute the drag coefficient `C_d`.
###Code
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
unpack(params)
area = np.pi * (diameter/2)**2
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=height, v=v_init)
t_end = 30 * s
return System(params, area=area, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
Here's the slope function, including acceleration due to gravity and drag.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object
returns: derivatives of y and v
"""
y, v = state
unpack(system)
f_drag = rho * v**2 * C_d * area / 2
a_drag = f_drag / mass
dydt = v
dvdt = -g + a_drag
return dydt, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial conditions.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
We can use the same event function as in the previous chapter.
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results.
###Code
results
###Output
_____no_output_____
###Markdown
The final height is close to 0, as expected.Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors.We can get the flight time from `results`.
###Code
t_sidewalk = get_last_label(results)
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig02.pdf')
###Output
Saving figure to file figs/chap09-fig02.pdf
###Markdown
And velocity as a function of time:
###Code
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
###Output
_____no_output_____
###Markdown
From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant. **Exercise:** Run the simulation with an initial velocity, downward, that exceeds the penny's terminal velocity. Hint: You can create a new `Params` object based on an existing one, like this:`params = Params(params, v_init = -30 * m / s)`What do you expect to happen? Plot velocity and position as a function of time, and see if they are consistent with your prediction.
###Code
params = Params(params, v_init = -30 * m / s)
system = make_system(params)
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
I expect the velocity to approach terminal velocity and the penny to follow a similar linear descent
###Code
plot_position(results)
plot_velocity(results)
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose we drop a quarter from the Empire State Building and find that its flight time is 19.1 seconds. Use this measurement to estimate the terminal velocity.1. You can get the relevant dimensions of a quarter from https://en.wikipedia.org/wiki/Quarter_(United_States_coin).2. Create a `Params` object with the system parameters. We don't know `v_term`, so we'll start with the initial guess `v_term = 18 * m / s`.3. Use `make_system` to create a `System` object.4. Call `run_ode_solver` to simulate the system. How does the flight time of the simulation compare to the measurement?5. Try a few different values of `v_term` and see if you can get the simulated flight time close to 19.1 seconds.6. Optionally, write an error function and use `fsolve` to improve your estimate.7. Use your best estimate of `v_term` to compute `C_d`.Note: I fabricated the observed flight time, so don't take the results of this exercise too seriously.
###Code
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 6.25e-3 * kg,
diameter = 24.26e-3 * m,
rho = 1.2 * kg/m**3,
v_term = 21.65 * m / s);
system = make_system(params);
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
details.message
print(str(results.last_valid_index()), "is when the quarter hits the ground")
subplot(2,1,1)
plot_position(results)
subplot(2,1,2)
plot_velocity(results)
def error_func(v_term):
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 6.25e-3 * kg,
diameter = 24.26e-3 * m,
rho = 1.2 * kg/m**3,
v_term =v_term* m / s);
system = make_system(params);
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
return results.last_valid_index() - 19.1
v_term1 = fsolve(error_func, 21)[0]
unpack(params)
C_d = 2 * mass * g / (rho * v_term1**2 * np.pi * (diameter/2)**2)
# Solution goes here
###Output
_____no_output_____
###Markdown
Bungee jumping Suppose you want to set the world record for the highest "bungee dunk", [as shown in this video](https://www.youtube.com/watch?v=UBf7WC19lpw). Since the record is 70 m, let's design a jump for 80 m.We'll make the following modeling assumptions:1. Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.2. Until the cord is fully extended, it applies no force to the jumper. It turns out this might not be a good assumption; we will revisit it.3. After the cord is fully extended, it obeys [Hooke's Law](https://en.wikipedia.org/wiki/Hooke%27s_law); that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.4. The jumper is subject to drag force proportional to the square of their velocity, in the direction opposite to their motion.Our objective is to choose the length of the cord, `L`, and its spring constant, `k`, so that the jumper falls all the way to the tea cup, but no farther! First I'll create a `Params` object to contain the quantities we'll need:1. Let's assume that the jumper's mass is 75 kg.2. The jumper's terminal velocity is 60 m/s.3. The length of the bungee cord is `L = 25 m`.4. The spring constant of the cord is `k = 40 N / m` when the cord is stretched, and 0 when it's compressed.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(y_attach = 80 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
L = 25 * m,
k = 40 * N / m)
###Output
_____no_output_____
###Markdown
Now here's a version of `make_system` that takes a `Params` object as a parameter.`make_system` uses the given value of `v_term` to compute the drag coefficient `C_d`.
###Code
def make_system(params):
"""Makes a System object for the given params.
params: Params object
returns: System object
"""
unpack(params)
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=y_attach, v=v_init)
t_end = 20 * s
return System(params, C_d=C_d,
init=init, t_end=t_end)
###Output
_____no_output_____
###Markdown
Let's make a `System`
###Code
system = make_system(params)
###Output
_____no_output_____
###Markdown
`spring_force` computes the force of the cord on the jumper:
###Code
def spring_force(y, system):
"""Computes the force of the bungee cord on the jumper:
y: height of the jumper
    Uses these variables from system:
y_attach: height of the attachment point
L: resting length of the cord
k: spring constant of the cord
returns: force in N
"""
unpack(system)
distance_fallen = y_attach - y
if distance_fallen <= L:
return 0 * N
extension = distance_fallen - L
f_spring = k * extension
return f_spring
###Output
_____no_output_____
###Markdown
The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
###Code
spring_force(80*m, system)
spring_force(55*m, system)
spring_force(54*m, system)
###Output
_____no_output_____
###Markdown
`drag_force` computes drag as a function of velocity:
###Code
def drag_force(v, system):
"""Computes drag force in the opposite direction of `v`.
v: velocity
system: System object
returns: drag force
"""
unpack(system)
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
###Output
_____no_output_____
###Markdown
Here's the drag force at 60 meters per second.
###Code
v = -60 * m/s
f_drag = drag_force(v, system)
###Output
_____no_output_____
###Markdown
Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
###Code
a_drag = f_drag / system.mass
###Output
_____no_output_____
###Markdown
Now here's the slope function:
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
"""
y, v = state
unpack(system)
a_drag = drag_force(v, system) / mass
a_spring = spring_force(y, system) / mass
dvdt = -g + a_drag + a_spring
return v, dvdt
###Output
_____no_output_____
###Markdown
As always, let's test the slope function with the initial params.
###Code
slope_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And then run the simulation.
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.3*s)
details
###Output
_____no_output_____
###Markdown
Here's the plot of position as a function of time.
###Code
plot_position(results)
###Output
_____no_output_____
###Markdown
After reaching the lowest point, the jumper springs back to almost 70 m, and oscillates several times. That looks like more oscillation than we expect from an actual jump, which suggests that there is some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate.But since we are primarily interested in the initial descent, the model might be good enough for now.We can use `min` to find the lowest point:
###Code
min(results.y)
###Output
_____no_output_____
###Markdown
At the lowest point, the jumper is still too high, so we'll need to increase `L` or decrease `k`. Here's velocity as a function of time:
###Code
plot_velocity(results)
subplot(1, 2, 1)
plot_position(results)
subplot(1, 2, 2)
plot_velocity(results)
savefig('figs/chap09-fig03.pdf')
###Output
Saving figure to file figs/chap09-fig03.pdf
###Markdown
Although we compute acceleration inside the slope function, we don't get acceleration as a result from `run_ode_solver`.We can approximate it by computing the numerical derivative of the velocities in `results.v`:
###Code
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
###Output
_____no_output_____
###Markdown
And we can compute the maximum acceleration the jumper experiences:
###Code
max_acceleration = max(a) * m/s**2
###Output
_____no_output_____
###Markdown
Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
###Code
max_acceleration / g
###Output
_____no_output_____
###Markdown
Under the hoodThe gradient function in `modsim.py` adapts the NumPy function of the same name so it works with `Series` objects.
###Code
%psource gradient
###Output
_____no_output_____
###Markdown
Solving for lengthAssuming that `k` is fixed, let's find the length `L` that makes the minimum altitude of the jumper exactly 0. The metric we are interested in is the lowest point of the first oscillation. For both efficiency and accuracy, it is better to stop the simulation when we reach this point, rather than run past it and then compute the minimum.Here's an event function that stops the simulation when velocity is 0.
###Code
def event_func(state, t, system):
"""Return velocity.
"""
y, v = state
return v
###Output
_____no_output_____
###Markdown
As usual, we should test it with the initial conditions.
###Code
event_func(system.init, 0, system)
###Output
_____no_output_____
###Markdown
And we see that we have a problem. Since the event function returns 0 under the initial conditions, the simulation would stop immediately. We can solve that problem by specifying the direction of the event function:
###Code
event_func.direction = +1
###Output
_____no_output_____
###Markdown
When direction is positive, it only stops the simulation if the velocity is 0 and increasing, which is what we want. Now we can test it and confirm that it stops at the bottom of the jump.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
plot_position(results)
min(results.y)
###Output
_____no_output_____
###Markdown
**Exercise:** Write an error function that takes `L` and `params` as arguments, simulates a bungee jump, and returns the lowest point.Test the error function with a guess of 25 m and confirm that the return value is about 5 meters.Use `fsolve` with your error function to find the value of `L` that yields a perfect bungee dunk.Run a simulation with the result from `fsolve` and confirm that it works.
###Code
params = Params(y_attach = 80 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
L = 25 * m,
k = 40 * N / m)
def error_func2(L, params):
params1 = Params(params, L = L)
system = make_system(params1)
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
return min(results.y)
error_func2(25,params)
L_best = fsolve(error_func2, 25, params)[0]
params1 = Params(params, L = L_best)
system = make_system(params1)
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
subplot(2, 1, 1)
plot_position(results)
subplot(2, 1, 2)
plot_velocity(results)
###Output
_____no_output_____
###Markdown
**Optional exercise:** Search for the combination of length and spring constant that yields minimum height 0 while minimizing peak acceleration.
###Code
def error_func3(k,params):
params1 = Params(params, k = k*N/m)
print(k)
L_best = fsolve(error_func2, 30, params1)[0]
print("yo", L_best)
params2 = Params(params1, L =L_best)
system = make_system(params2)
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
a = gradient(results.v)
print(2, max(a))
return max(a)
fsolve(error_func3, 40, params)
# Solution goes here
###Output
_____no_output_____ |
modules/module-05/mod5_nb1_pandas_foundations.ipynb | ###Markdown
Runtime Dependencies: Must Run First!
###Code
import numpy as np
import pandas as pd
# ### Bonus: Multiple Outputs Per Cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
Module 5: Pandas **Importing Pandas:**```pythonimport pandas as pd```Pandas is an essential data science package that brings DataFrame functionality into Python. DataFrames are built upon NumPy arrays, and allow us to set custom indices, add column and row labels, and make SQL-like and Excel-like operations possible. Module 5.1: Series & DataFrames SeriesLet's start with a Series. A Series is essentially a NumPy array underneath, with some added features. Let's go through some of them!
###Code
arr = np.array(range(50,71,2))
arr
ser = pd.Series(arr)
ser
###Output
_____no_output_____
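###Markdown
To see the NumPy array sitting underneath, we can ask the Series for its values and its automatically generated index (a quick check on the `ser` we just created):
###Code
# The underlying NumPy array and the default integer index
ser.values
ser.index
###Output
_____no_output_____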
###Markdown
As you can see, this looks a bit different from the array output. We now have data formatted vertically in a table, with the index on the left-hand side.By default, Pandas will assign a zero-indexed integer index if none is supplied. However, we can start changing some of these properties quickly.Essential keyword arguments upon creation:1) data2) index3) nameWe've already used the data keyword to tell Pandas what data needs to be imported, and now we're going to add the name property. This is essentially a label for the data. These will come back in DataFrames as column names.
###Code
lst = ["Apples","Bananas","Grapes","Mangos","Avocados"]
ser = pd.Series(lst, name="Produce")
ser
###Output
_____no_output_____
###Markdown
And we can even set a custom index if we don't want to use the one automatically generated! Let's say we have a small list of employees with their ID number and name. It would make sense to have the ID be the index!
###Code
loc = "https://raw.githubusercontent.com/mhall-simon/python/main/data/misc/arr-ID-and-names.csv"
arr = np.genfromtxt(loc, dtype=str, delimiter=',', skip_header=1)
arr
###Output
_____no_output_____
###Markdown
Now, let's create the series with the custom index and employee names:
###Code
ser = pd.Series(arr[:,1], index=arr[:,0], name="Employees by ID")
ser
###Output
_____no_output_____
###Markdown
DataFrame Right now, we can create an indexed series (with one column of data and an index), but sometimes we have complex datasets that can't fit in a single column. This is where the DataFrame comes in handy. We can now store and process more complex data! Let's start with a really simple inventory example:
###Code
dct = {'partNum': [104,105,106], 'Quantity': [415,346,98], 'Price': [1.24, 2.15, 5.98], 'Cost': [0.78,1.56,3.12]}
df = pd.DataFrame(dct)
df
###Output
_____no_output_____
###Markdown
Now, when we visualize the data structure you can see the column labels! Since no index was created, Pandas will automatically assign one! In section 3, I'm going to cover how to generate DataFrames from various sources. Some DataFrame Attributes *Pulled From Documentation or Summarized Desc*

| Attribute | Description |
| :---: | --- |
| `.index` | The Index (row labels) of the DataFrame |
| `.columns` | The column labels of the DataFrame |
| `.shape` | Dimensions of the DataFrame |
###Code
df.index
df.columns
df.shape
###Output
_____no_output_____
###Markdown
Some DataFrame Methods *Pulled From Documentation or Summarized Desc*

| Method | Description |
| :---: | --- |
| `.info()` | Prints out info for the DataFrame |
| `.describe()` | Prints out summary statistics |
| `.head()` | Prints out the top n rows of the DataFrame, default 5 |
| `.tail()` | Prints out the last n rows of the DataFrame, default 5 |

Just keep in mind that sometimes a summary statistic for a column doesn't make sense!
###Code
df.info()
df.describe()
###Output
_____no_output_____
###Markdown
Module 5.2: Indexing DataFrames In Pandas, there are 3 main methods for indexing a DataFrame. The first one allows us to easily subset a column (or more) from our DataFrame. To start, I'm going to import a data set about COVID-19 vaccines. This CSV has vaccination info from the CDC, with the report being pulled on February 8th, 2021. **Keep In Mind That You Can Always Reference This. You Don't Need to Memorize, But You Should Be Familiar Enough to Know Which One to Choose**
###Code
loc = "https://raw.githubusercontent.com/mhall-simon/python/main/data/covid/covid19_vaccinations_in_the_united_states.csv"
df = pd.read_csv(loc, header=2, index_col="State/Territory/Federal Entity")
df
###Output
_____no_output_____
###Markdown
Easily Accessing Columns This is our first indexing method, to pull columns of data, and there are a few syntax styles: Pulling Single Column
###Code
df['Total Delivered']
###Output
_____no_output_____
###Markdown
Pulling Multiple Columns
###Code
df[['Total Delivered','Total Administered']]
###Output
_____no_output_____
###Markdown
Note About Single Column: If our column names have no spaces, we can quickly access a column like this too:

```python
df.col_name
```

Indexing with `.loc[]`: We can also use the method `.loc[]` to index our DataFrames. This method uses the labels of rows/columns to index. Just like 2D NumPy arrays, `.loc` takes the arguments as `[row:row, col:col]`. Pulling a Row with `.loc[]`
###Code
df.loc['Wisconsin']
###Output
_____no_output_____
###Markdown
Pulling Range of Rows - All Columns
###Code
df.loc['Oklahoma':'Wisconsin',:]
###Output
_____no_output_____
###Markdown
Pulling Range of Rows, Select Columns
###Code
df.loc['Oregon':'Texas','Total Delivered':'Total Administered']
###Output
_____no_output_____
###Markdown
Indexing with `.iloc[]` Iloc uses purely integer-based indexing to find values for us! This means that we need to make sure our dataset is in an expected order. Rows: Scalar
###Code
df.iloc[5]
###Output
_____no_output_____
###Markdown
Rows: List
###Code
df.iloc[[2,4,8]]
###Output
_____no_output_____
###Markdown
Rows: Slicing
###Code
df.iloc[2:5]
###Output
_____no_output_____
###Markdown
Rows & Cols: Scalars
###Code
df.iloc[0,0]
###Output
_____no_output_____
###Markdown
Rows & Cols: List
###Code
df.iloc[[12,18], [0,2]]
###Output
_____no_output_____
###Markdown
Rows & Cols: Slicing
###Code
df.iloc[12:22, 0:4]
###Output
_____no_output_____
###Markdown
Module 5.3: Importing Data Sets Overview You've already seen some import statements above, but now I'm going to cover them on a high level! Dig through the documentation to learn about all the little tweaks and tricks you can use upon import. 2 Essential Methods:

- Read CSV
- Read Excel

The most important thing to note about importing data is to look through the file in Excel! You cannot properly import something if you don't know what it is! I'm only going to cover the CSV method, as the two are very similar and have only slightly different tweaks. Using Read CSV In GitHub, I have a CSV file downloaded from Yahoo Finance about Tesla's monthly stock performance over the last 5 years. We can examine it here: https://github.com/mhall-simon/python/blob/main/data/misc/TSLA.csv And the GitHub raw link is: https://raw.githubusercontent.com/mhall-simon/python/main/data/misc/TSLA.csv Let's import it by just targeting a link:
###Code
loc = "https://raw.githubusercontent.com/mhall-simon/python/main/data/misc/TSLA.csv"
df = pd.read_csv(loc)
df.head()
###Output
_____no_output_____
###Markdown
Pretty neat! However, a useful feature would be to index by Date, as that is a unique key for the data set. Since the Date column is first, we're going to use `index_col=0` as a keyword argument. Also, since it's Date/Time data, we should parse it into the proper format. We do this by providing the argument `parse_dates=True`.
###Code
loc = "https://raw.githubusercontent.com/mhall-simon/python/main/data/misc/TSLA.csv"
df = pd.read_csv(loc, index_col=0, parse_dates=True)
df.head()
###Output
_____no_output_____
###Markdown
Let's inspect the DataFrame now using `.info()`
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 61 entries, 2016-03-01 to 2021-02-12
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Open 61 non-null float64
1 High 61 non-null float64
2 Low 61 non-null float64
3 Close 61 non-null float64
4 Adj Close 61 non-null float64
5 Volume 61 non-null int64
dtypes: float64(5), int64(1)
memory usage: 3.3 KB
###Markdown
There's a ton of other arguments, some being more useful than others. As we keep progressing through training, try to see what keyword arguments I use to import data! Module 5.4: Date & Time Overview Pandas also brings with it a new data type: DateTime. This is very useful for working with time series data, as we don't need to map information out into multiple columns. If the DateTime data is the index, it makes processing time series data in Python very efficient! Let's start with the same data set as above for Tesla stock:
###Code
loc = "https://raw.githubusercontent.com/mhall-simon/python/main/data/misc/TSLA.csv"
df = pd.read_csv(loc, index_col=0, parse_dates=True)
df.head()
###Output
_____no_output_____
###Markdown
Extracting D-M-Y Information
###Code
df.index.day
df.index.month
df.index.year
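# (not in the original notebook) two handy follow-ups with a DatetimeIndex:
# partial-string indexing and boolean masks built from the attributes above
df.loc['2020']                 # all rows from calendar year 2020
df[df.index.month == 1]        # only the January rows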
###Output
_____no_output_____
###Markdown
These properties will become very useful when we want to index, slice, or even run SQL functionality on our time series data! Module 5.5: Multi-Indexed Data Sets Above, you've only seen data with a single index. However, you can also have a multi-index! These are excellent for when you have multiple indices, and don't want to store information in columns. Importing Regularly:
###Code
loc = "https://github.com/mhall-simon/python/blob/main/data/misc/inventory-multi-index.xlsx?raw=true"
df = pd.read_excel(loc, parse_dates=True)
df
###Output
_____no_output_____
###Markdown
Importing Multi-Index:
###Code
loc = "https://github.com/mhall-simon/python/blob/main/data/misc/inventory-multi-index.xlsx?raw=true"
df = pd.read_excel(loc, parse_dates=True, index_col=[0,1])
df
###Output
_____no_output_____
###Markdown
Now, our data is formatted better! It's hierarchical between the date and part number! I'm not going to go too in depth with Multi-Indexed Data Sets, but you should know that they exist! Accessing Row in Multi-Indexed Dataset (Outer Most Group)
###Code
df.loc['2020-03-01']
###Output
_____no_output_____
###Markdown
Accessing Row in Multi-Indexed Dataset (Inner Most Group)
###Code
df.loc[('2020-03-01','A-01')]
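# (not in the original notebook) .xs() selects by the *inner* index level across all dates
df.xs('A-01', level=1)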
###Output
_____no_output_____
###Markdown
Tip: Multi-Indexed Datasets behave very similarly to groupby objects! Module 5.6: Broadcasting Scalars Just like with NumPy, we can broadcast scalars across our dataframe! Since we're already familiar with how it works, we just need the general formula:

1. Index Subset (Optional)
2. Broadcast Scalar

It's pretty easy! And it allows us to update information stored in DataFrames without creating new rows/cols! Above, we imported a dataset of inventory prices and amount sold, so let's do some basic operations on them!
###Code
loc = "https://github.com/mhall-simon/python/blob/main/data/misc/inventory-multi-index.xlsx?raw=true"
df = pd.read_excel(loc, parse_dates=True, index_col=[0,1])
df
###Output
_____no_output_____
###Markdown
Broadcast Across Slice - Column Oh no! Our costs were too low because we forgot tax. Let's add 8% to the cost column of the dataset:
###Code
df.Cost = df.Cost*1.08
df
###Output
_____no_output_____
###Markdown
Broadcast Across Slice - Row Oh no! We forgot we have a 5% discount on the first day of the month. Let's easily correct this.
###Code
df.loc['2020-03-01','Price'] = df.loc['2020-03-01','Price']*0.95
df
###Output
_____no_output_____
###Markdown
These operations are easy; it's really just a question of knowing how to slice your data! Module 5.7: Row & Column-wise Operations Just like with NumPy, we can also do element-wise operations with our DataFrames. Again, it's really just a matter of knowing how to properly slice your data! Creating New Column Based Upon Other Columns Right now, we have a data frame with prices, quantity, and cost. Let's start working towards profitability for each part per day. Here's how we would build a column named Gross, which is Price * Number Sold.
###Code
loc = "https://github.com/mhall-simon/python/blob/main/data/misc/inventory-multi-index.xlsx?raw=true"
df = pd.read_excel(loc, parse_dates=True, index_col=[0,1])
df
df['Gross'] = df.Price * df.Sold
df
###Output
_____no_output_____
###Markdown
Now, let's calculate Net, which is going to be Gross minus our total cost for the day.
###Code
df['Net'] = df.Gross - (df.Cost * df.Sold)
df
###Output
_____no_output_____
###Markdown
Now, we can also figure out our margin. We'll calculate our margin as Net / Gross, and leave it in decimal terms (don't multiply by 100)
###Code
df['Margin'] = df.Net / df.Gross
df
###Output
_____no_output_____
###Markdown
If you're thinking that right now we're really only calculating new columns, you'd be right! We usually have our data formatted as new entries going vertically, within the columns. Most of our row-wise operations come in SQL functionality, when we calculate statistics for our columns or run groups, filters, and more! Applying Custom Functions Usually, we want our operations to run as broadcasting or element-wise operations with subsets and slices. However, sometimes we need to apply a function! These are going to be slower, but when they're needed they're necessary! We can write them with lambda functions. Let's mark our products with the Note "High" to denote a high-margin product. Management defines any high-margin product as one with at least a 65% profit margin.
###Code
df['Note'] = df.apply(lambda row: 'High' if row.Margin >= 0.65 else 'Low', axis=1)
df
###Output
_____no_output_____
###Markdown
Bonus Box: Nested Lambdas for Multiple Logic Checks We can "nest" lambda functions to do multiple logic checks! Essentially, the else block is going to run another lambda! Let's redo the above example and create 3 margin categories: one for high (>= 65%), one for medium (>= 40%), and one for low.
###Code
df['Note'] = df.apply(lambda row: 'High' if row.Margin >= 0.65 else ('Medium' if row.Margin >= 0.4 else 'Low'), axis=1)
df
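# (not in the original notebook) an alternative: for several conditions, numpy's
# np.select is usually cleaner and faster than nested lambdas with .apply
conditions = [df.Margin >= 0.65, df.Margin >= 0.4]
choices = ['High', 'Medium']
df['Note'] = np.select(conditions, choices, default='Low')
df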
###Output
_____no_output_____ |
data-science/scikit-learn/08/01-Implement-Confusion-Mafrix.ipynb | ###Markdown
Implement the confusion matrix, precision, and recall
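As a quick reference for the hand-rolled functions below (TP, FP, TN, FN are the true/false positive/negative counts):

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}$$

and the confusion matrix is laid out as $\begin{bmatrix} TN & FP \\ FN & TP \end{bmatrix}$, matching the code.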
###Code
import numpy as np
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target.copy()
y[digits.target == 9] = 1
y[digits.target != 9] = 0
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
log_reg.score(X_test, y_test)
y_log_predict = log_reg.predict(X_test)
# True Negative
def TN(y_true, y_predict):
assert len(y_true) == len(y_predict)
return np.sum((y_true == 0) * (y_predict == 0))
TN(y_test, y_log_predict)
# False Positive
def FP(y_true, y_predict):
assert len(y_true) == len(y_predict)
return np.sum((y_true == 0) * (y_predict == 1))
FP(y_test, y_log_predict)
# False Negative
def FN(y_true, y_predict):
assert len(y_true) == len(y_predict)
return np.sum((y_true == 1) * (y_predict == 0))
FN(y_test, y_log_predict)
# True Positive
def TP(y_true, y_predict):
assert len(y_true) == len(y_predict)
return np.sum((y_true == 1) * (y_predict == 1))
TP(y_test, y_log_predict)
# Confusion matrix
def confusion_matrix(y_true, y_predict):
return np.array([
[TN(y_true, y_predict), FP(y_true, y_predict)],
[FN(y_true, y_predict), TP(y_true, y_predict)]
])
confusion_matrix(y_test, y_log_predict)
# Precision
def precision_score(y_true, y_predict):
tp = TP(y_true, y_predict)
fp = FP(y_true, y_predict)
try:
return tp / (tp + fp)
except:
return 0.0
precision_score(y_test, y_log_predict)
# Recall
def recall_score(y_true, y_predict):
tp = TP(y_true, y_predict)
fn = FN(y_true, y_predict)
try:
return tp / (tp + fn)
except:
return 0.0
recall_score(y_test, y_log_predict)
###Output
_____no_output_____
###Markdown
Confusion matrix, precision, and recall in scikit-learn
###Code
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_log_predict)
from sklearn.metrics import precision_score
precision_score(y_test, y_log_predict)
from sklearn.metrics import recall_score
recall_score(y_test, y_log_predict)
###Output
_____no_output_____ |
6 PySpark KMeans.ipynb | ###Markdown
Welcome to exercise two of week three of “Apache Spark for Scalable Machine Learning on BigData”. In this exercise we’ll work on clustering. Let’s create our DataFrame again:
###Code
%sh
# delete files from previous runs
#rm -f hmp.parquet*
# download the file containing the data in PARQUET format
#wget https://github.com/IBM/coursera/raw/master/hmp.parquet
#ls -ltr /databricks/driver
#mkdir /dbfs/tmp/HMPPARQ/
#cp /databricks/driver/hmp.parquet /dbfs/tmp/HMPPARQ/
ls -ltr /dbfs/tmp/HMPPARQ/
# create a dataframe out of it
df = spark.read.parquet('/tmp/HMPPARQ/hmp.parquet')
# register a corresponding query table
df.createOrReplaceTempView('df')
df.select("class").distinct().show()
###Output
_____no_output_____
###Markdown
Let’s reuse our feature engineering pipeline.
###Code
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler, Normalizer
from pyspark.ml.linalg import Vectors
from pyspark.ml import Pipeline
indexer = StringIndexer(inputCol="class", outputCol="classIndex")
encoder = OneHotEncoder(inputCol="classIndex", outputCol="categoryVec")
vectorAssembler = VectorAssembler(inputCols=["x","y","z"],
outputCol="features")
normalizer = Normalizer(inputCol="features", outputCol="features_norm", p=1.0)
pipeline = Pipeline(stages=[indexer, encoder, vectorAssembler, normalizer])
model = pipeline.fit(df)
prediction = model.transform(df)
prediction.show()
###Output
_____no_output_____
###Markdown
Now let’s create a new pipeline for kmeans.
###Code
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler, Normalizer
from pyspark.ml.evaluation import ClusteringEvaluator
from pyspark.ml import Pipeline
kmeans = KMeans(featuresCol="features_norm").setK(14).setSeed(1)
model = kmeans.fit(prediction)
predictions = model.transform(prediction)
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
print(center)
###Output
_____no_output_____
###Markdown
We have 14 different movement patterns in the dataset, so setting K of KMeans to 14 is a good idea. But please experiment with different values for K. Do you find a sweet spot? The closer the Silhouette gets to 1, the better. https://en.wikipedia.org/wiki/Silhouette_(clustering)
###Code
# please change the pipeline to check performance for different K, feel free to use a loop
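# a minimal sketch (assuming the `prediction` dataframe produced by the pipeline above):
# refit KMeans for several values of K and compare the silhouette scores
for k in [2, 6, 10, 14, 18]:
    kmeans_k = KMeans(featuresCol="features_norm").setK(k).setSeed(1)
    predictions_k = kmeans_k.fit(prediction).transform(prediction)
    print(k, ClusteringEvaluator().evaluate(predictions_k))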
###Output
_____no_output_____ |
flowws/spheres/Spheres.ipynb | ###Markdown
In this notebook we run a simple system of purely repulsive spheres using a [Weeks-Chandler-Andersen potential](https://hoomd-blue.readthedocs.io/en/stable/module-md-pair.htmlhoomd.md.pair.lj). We visualize the spheres using the povray backend and color them by distance to their nearest neighbor.
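For reference (the standard definition, not taken from the notebook itself), the WCA potential is the Lennard-Jones potential truncated at its minimum, $r_{cut} = 2^{1/6}\sigma$, which is why the script below sets `r_cut` to $2^{1/6}$:

$$U_{WCA}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right] + \epsilon \quad \text{for } r < 2^{1/6}\sigma, \qquad 0 \text{ otherwise}$$

(The $+\epsilon$ shift only makes the energy continuous at the cutoff; the force, and hence the dynamics, is unchanged.)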
###Code
import flowws
import gtar
from hoomd_flowws.Init import Init
from hoomd_flowws.Interaction import Interaction
from hoomd_flowws.Run import Run
import plato, plato.draw.povray as draw
import freud
import numpy as np
import IPython
import ipywidgets
storage = flowws.DirectoryStorage()
stages = [
Init(number=128),
Interaction(
type='lj', global_params=[('r_cut', 2**(1./6))],
pair_params=[('A', 'A', 'epsilon', 1), ('A', 'A', 'sigma', 1)]),
Run(steps=1e3, integrator='langevin'),
Run(steps=1e4, integrator='langevin', compress_to=.57, dump_period=1e3),
]
flowws.Workflow(stages, storage).run();
num_frames = 0
def get_frame(frame=-1):
global num_frames
with gtar.GTAR('dump.sqlite', 'r') as traj:
(posRec, boxRec), frames = traj.framesWithRecordsNamed(['position', 'box'])
num_frames = len(frames)
positions = traj.getRecord(posRec, frames[frame])
box = traj.getRecord(boxRec, frames[frame])
return positions, box
def update(scene, frame=-1):
(positions, box) = get_frame(frame)
# get nearest-neighbor distance, rescaled to go from 0-1, as cval
fbox = freud.box.Box.from_box(box)
nn = freud.locality.AABBQuery(fbox, positions)
nlist = nn.query(positions, dict(exclude_ii=True, num_neighbors=1)).toNeighborList(True)
cval = nlist.distances.copy()
cval -= np.min(cval)
cval /= np.max(cval)
colors = plato.cmap.cubehelix(.25 + .5*cval)
for prim in scene:
prim.colors = colors
prim.positions = positions
prim.diameters = np.ones(len(positions))
prim = draw.Spheres()
features = dict(ambient_light=.4)
scene = draw.Scene(prim, features=features, zoom=4.8)
update(scene)
target = '../../gallery/flowws_spheres_povray.png'
scene.save(target)
IPython.display.Image(filename=target)
import plato.draw.vispy as interactive
live_scene = scene.convert(interactive)
live_scene.show()
@ipywidgets.interact(frame=(0, num_frames - 1))
def plot(frame=0):
update(live_scene, frame)
live_scene.render()
###Output
_____no_output_____ |
Jupyter-notebook/URL-categorization-Jupyter-Notebook.ipynb | ###Markdown
Import libraries These libraries will be used for our URL_classification project.
###Code
import datetime
import csv
import nltk
import numpy as np
import pandas as pd
import ast
from urllib.request import urlopen
from bs4 import BeautifulSoup
import os.path
print(datetime.datetime.now().time())
###Output
14:23:30.843081
###Markdown
Use this command if you have any errors when importing the nltk library. It will open an nltk menu with download and update options. If some libraries are still missing, install them manually by writing nltk.download('library name'), where 'library name' is the name of the missing library reported in the error message.
###Code
nltk.download('stopwords')
nltk.download('words')
nltk.download('punkt')
def conctruct_dataset():
file = 'URL-categorization-DFE.csv'
df = pd.read_csv(file)[['main_category', 'main_category:confidence', 'url']]
df = df[(df['main_category'] != 'Not_working') & (df['main_category:confidence'] > 0.5)]
char_blacklist = list(chr(i) for i in range(32, 127) if i <= 64 or i >= 91 and i <= 96 or i >= 123)
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(char_blacklist)
language_whitelist = ['en']
english_vocab = set(w.lower() for w in nltk.corpus.words.words())
blacklist_domain = ['.it', '.ru', '.cn', '.jp', '.tw', '.de', '.pl', '.fr', '.hu', '.bg', '.nl']
df = df[~df['url'].str.endswith(tuple(blacklist_domain))]
df['tokenized_words'] = ''
counter = 0
for i, row in df.iterrows():
counter += 1
if counter >= 50:
break
print("{}, {}/{}".format(row['url'], counter, len(df)))
try:
html = urlopen('http://' + row['url'], timeout=1).read()
except:
continue
soup = BeautifulSoup(html, "html.parser")
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
text = '\n'.join(chunk.lower() for chunk in chunks if chunk)
filter_text = " ".join(w for w in nltk.word_tokenize(text) \
if w.lower() in english_vocab)
tokens = nltk.word_tokenize(filter_text)
allWordExceptStopDist = nltk.FreqDist(
w.lower() for w in tokens if w not in stopwords and len(w) >= 3 and w[0] not in char_blacklist)
        all_words = [w for w in allWordExceptStopDist]  # use a new name so the row index `i` is not clobbered
        if len(all_words) == 0:  # skip pages where nothing useful was extracted
            continue
        df.at[i, 'tokenized_words'] = all_words
df = df[df['tokenized_words'] != '']
def train_machine():
char_blacklist = list(chr(i) for i in range(32, 127) if i <= 64 or i >= 91 and i <= 96 or i >= 123)
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(char_blacklist)
df = pd.read_csv('cleaned_data.csv')
top = 50
words_frequency = {}
for category in set(df['main_category'].values):
all_words = []
for row in df[df['main_category'] == category]['tokenized_words'].tolist():
for word in ast.literal_eval(row):
all_words.append(word)
allWordExceptStopDist = nltk.FreqDist(
w.lower() for w in all_words if w not in stopwords and len(w) >= 3 and w[0] not in char_blacklist)
most_common = allWordExceptStopDist.most_common(top)
words_frequency[category] = most_common
for category in set(df['main_category'].values):
words_frequency[category] = [word for word, number in words_frequency[category]]
from collections import Counter
features = np.zeros(df.shape[0] * top).reshape(df.shape[0], top)
labels = np.zeros(df.shape[0])
counter = 0
for i, row in df.iterrows():
c = [word for word, word_count in Counter(ast.literal_eval(row['tokenized_words'])).most_common(top)]
labels[counter] = list(set(df['main_category'].values)).index(row['main_category'])
for word in c:
if word in words_frequency[row['main_category']]:
features[counter][words_frequency[row['main_category']].index(word)] = 1
counter += 1
return labels, features
def no_filter_data():
file = 'URL-categorization-DFE.csv'
df = pd.read_csv(file)[['main_category', 'main_category:confidence', 'url']]
df = df[(df['main_category'] != 'Not_working') & (df['main_category:confidence'] > 0.5)]
df['tokenized_words'] = ''
counter = 0
for i, row in df.iterrows():
counter += 1
print("{}, {}/{}".format(row['url'], counter, len(df)))
try:
html = urlopen('http://' + row['url'], timeout=1).read()
except:
continue
soup = BeautifulSoup(html, "html.parser")
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
text = '\n'.join(chunk.lower() for chunk in chunks if chunk)
tokens = nltk.word_tokenize(text)
df.at[i, 'tokenized_words'] = tokens if len(tokens) > 0 else ''
df = df[df['tokenized_words'] != '']
return df
# if os.path.isfile("cleaned_data.csv"):
# labels, features = train_machine()
# else:
# conctruct_dataset()
# labels, features = train_machine()
if not os.path.isfile("Datasets/full_data.csv"):
df = no_filter_data()
df.shape
from sklearn.metrics import accuracy_score
from scipy.sparse import coo_matrix
X_sparse = coo_matrix(features)
from sklearn.utils import shuffle
X, X_sparse, y = shuffle(features, X_sparse, labels, random_state=0)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train, y_train)
predictions = lr.predict(X_test)
score = lr.score(X_test, y_test)
print(predictions)
print(score)
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier()
dtc.fit(X_train, y_train)
predictions = dtc.predict(X_test)
score = dtc.score(X_test, y_test)
print(predictions)
print(score)
from sklearn.svm import SVC
svm = SVC()
svm.fit(X_train, y_train)
predictions = svm.predict(X_test)
score = svm.score(X_test, y_test)
print(predictions)
print(score)
###Output
[12. 12. 15. ... 1. 1. 18.]
0.3989290495314592
|
Hilbert Transform and Analytic Representation.ipynb | ###Markdown
The Hilbert Transform and Analytic Representation of Signals Hilbert Transform

The Hilbert transform of a function $u(t)$ is the convolution of the function with $\frac{1}{{\pi}t}$ (the Cauchy kernel), or $u(t)*\frac{1}{{\pi}t}$. Alternatively, an easier concept to grasp is that we are simply applying a $90^{\circ}$ phase shift to **all sinusoids in the Fourier series** that comprise the signal.

1. Take the FFT of a signal
2. Rotate the phase of the Fourier coefficients by $90^\circ$, or $\frac{\pi}{2}$ rad
    + Positive frequencies are shifted by $\frac{\pi}{2}$ (rotate in the counterclockwise direction)
    + Negative frequencies are shifted by $-\frac{\pi}{2}$ (rotate in the clockwise direction)

    >*NOTE: a $90^\circ$ phase shift of a sinusoid is easily accomplished by multiplying the Fourier coefficient by $j$. To see this, apply a $\frac{\pi}{2}$ phase shift to a complex exponential sinusoid:*
    >
    >$$e^{j(\omega{t} + \frac{\pi}{2})} = e^{j\omega{t}}{\cdot}e^{j\frac{\pi}{2}}$$
    >
    >*...and since $e^{j\frac{\pi}{2}} = j$, the phase-shifted sinusoid will be $je^{j\omega{t}}$ (or $-je^{-j\omega{t}}$ for negative $\omega$). Voila!*

3. Take the iFFT of the rotated Fourier coefficients
###Code
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
sns.set()
def hilbert_t(signal):
# take FFT to compute complex fourier coefficients:
fft = np.fft.fft(signal)
# rotate coefficients by +/-90 degrees:
if len(fft)%2 == 0:
neg = -1j*fft[:int(len(fft)/2)] # coefficients of negative harmonics
pos = 1j*fft[int(len(fft)/2):] # coefficients of positive harmonics
shifted_fft = np.concatenate((neg, pos))
else:
neg = -1j*fft[:int(np.floor(len(fft)/2))]
pos = 1j*fft[int(np.floor(len(fft)/2))+1:]
shifted_fft = np.concatenate((neg, np.array([fft[int(np.floor(len(fft)/2))]]), pos))
assert len(shifted_fft) == len(fft), str(len(shifted_fft)) + " does not equal " + str(len(fft))
# apply iFFT:
ifft = np.fft.ifft(shifted_fft)
# add to original (real) signal values:
return np.real(signal) + 1j*np.real(ifft)
num_samples = 500
t = np.linspace(0,5*2*np.pi,num_samples)
test_signal = np.sin(t) + np.sin(2*t)
#test_signal = np.random.random(num_samples)
test_signal_ht = hilbert_t(test_signal)
fig, axs = plt.subplots(nrows=2, ncols=1, figsize=(15,5))
axs[0].plot(t, np.real(test_signal))
axs[0].set_title("Original Signal x(t) (Real-valued)")
axs[1].plot(t, np.imag(test_signal_ht))
axs[1].set_title("Hilbert Transform, (Imaginary Values Plotted)")
fig.tight_layout(pad=1)
###Output
_____no_output_____
###Markdown
Analytic Representation of a Signal

The **Analytic Representation** of a real-valued signal is the original signal added to its Hilbert transform. This results in a complex-valued signal where:

1. The real values are the same as the real values of the original signal
2. The imaginary values are the values provided by the Hilbert transform (i.e., its Fourier series's sinusoids shifted 90 deg)

The AR is used because it discards the "negative" frequency components, representing them instead as the complex part of the Fourier coefficients. This is possible because the Hilbert transform is **conjugate symmetric** (i.e. has the property of **Hermitian symmetry**).

+ A signal $x(t)$ is **conjugate symmetric** if $x(t) = x^\ast(-t)$, that is, if its real part is **even** and its imaginary part is **odd** about the origin
    + see https://www.youtube.com/watch?v=O1HRtZmkI4E
+ Practically, this is accomplished by adding the "negative" frequency components to the "positive", resulting in a complex Fourier coefficient:

$$e^{j(\omega{t}+({-\omega})t)} = e^{j\omega{t}-j\omega{t}} = e^{j\omega{t}}e^{-j\omega{t}} \rightarrow$$

$$(real - j\cdot{imaginary})e^{j\omega{t}}$$

and since $-j = e^{-j\frac{\pi}{2}}$:

$$real{\cdot}e^{j\omega{t}} + e^{-j\frac{\pi}{2}}imaginary{\cdot}e^{j\omega{t}}$$

$$real{\cdot}e^{j\omega{t}} + imaginary{\cdot}e^{j(\omega{t} - \frac{\pi}{2})}$$
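A quick numerical check of that claim (a minimal sketch using the `np` and `signal` imports above): the spectrum of the analytic signal should have essentially no energy in the negative-frequency bins.

```python
z = signal.hilbert(np.sin(np.linspace(0, 10*np.pi, 256)))  # analytic representation of a real sine
Z = np.fft.fft(z)
print(np.abs(Z[129:]).max())   # negative-frequency half of the spectrum: ~0 (numerical noise)
print(np.abs(Z[:128]).max())   # positive-frequency half: carries all of the energy
```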
###Code
gridsize = (3, 2) # overall grid layout for the figure
fig = plt.figure(figsize=(12, 8))
ax1 = plt.subplot2grid(gridsize, (0,0)) # top left in the 3x2 grid
ax2 = plt.subplot2grid(gridsize, (0,1)) # top right in the 3x2 grid
ax3 = plt.subplot2grid(gridsize, (1, 0), colspan=2, rowspan=2, projection='3d') # this one takes up 2 rows and 2 columns
# manipulate the little graphs:
ax1.plot(t, np.real(test_signal))
ax1.set_title("Original Signal x(t) (Real-valued)")
ax2.plot(t, np.imag(test_signal_ht))
ax2.set_title("Hilbert Transform, (Imaginary Values Plotted)")
# manipulate the big graph:
#ax3.plot(projection='3d')
ax3.set_title('3-Dimensional View of Analytic Representation')
#ax3.plot(xs=t, ys=np.imag(test_signal_ht), zs=test_signal, zdir='z', label='Analytic Representation')
ax3.scatter(xs=t, ys=np.imag(test_signal_ht), zs=test_signal, zdir='z', label='Analytic Representation')
fig.tight_layout(pad=1) # do this to prevent xlabels on upper plots overlapping title on lower plot
# The scipy hilbert transform function produces the analytic representation of the input:
# 1. real values are the original, real-valued input signal
# 2. imaginary values are the hilbert-transformed signal (phase-shifted 90 degrees)
test_signal_scipy_ht = signal.hilbert(np.real(test_signal))
fig, ax = plt.subplots()
ax.plot(np.imag(test_signal_scipy_ht), label="scipy hilbert transform")
ax.plot(np.imag(test_signal_ht), label="my hilbert transform")
ax.legend(loc="lower right")
%timeit temp = hilbert_t(test_signal)
%timeit temp = signal.hilbert(test_signal)
###Output
30 µs ± 101 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
Normalized Analytic Representation: scale the AR to "normalize" it, so that the sum of all the squared magnitudes is 1. Probability Amplitude Function and Probability Density Function. Notes from: Funkhouser Scott, Suski William and Winn Andrew, "Waveform information from quantum mechanical entropy", *Proceedings of the Royal Society*, Vol 472, Issue 2190, June 2016.
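In symbols (just restating the normalization described above): given the analytic signal $z(t_n)$, the normalized amplitude is

$$\psi(t_n) = \frac{z(t_n)}{\sqrt{\sum_k |z(t_k)|^2}}, \qquad \text{so that} \qquad \sum_n |\psi(t_n)|^2 = 1$$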
###Code
num_samples = 200
t = np.linspace(-np.pi,np.pi,num_samples)
# from Funkhouser, Winn, Suski paper:
d = 1
theta = 10
c = (np.power((8/np.pi),0.25)*(1/np.sqrt(d)))
test_signal = c*np.exp(-np.power(t,2)/np.power(d,2))*np.cos(theta*t)
test_signal_ht = signal.hilbert(test_signal)
gridsize = (3, 2) # overall grid layout for the figure
fig = plt.figure(figsize=(12, 8))
ax1 = plt.subplot2grid(gridsize, (0,0)) # top left in the 3x2 grid
ax2 = plt.subplot2grid(gridsize, (0,1)) # top right in the 3x2 grid
ax3 = plt.subplot2grid(gridsize, (1, 0), colspan=2, rowspan=2, projection='3d') # this one takes up 2 rows and 2 columns
# manipulate the little graphs:
ax1.plot(t, np.real(test_signal))
ax1.set_title("Original Signal x(t) (Real-valued)")
ax2.plot(t, np.imag(test_signal_ht))
ax2.set_title("Hilbert Transform, (Imaginary Values Plotted)")
# manipulate the big graph:
#ax3.plot(projection='3d')
ax3.set_title('3-Dimensional View of Analytic Representation')
#ax3.plot(xs=t, ys=np.imag(test_signal_ht), zs=test_signal, zdir='z', label='Analytic Representation')
ax3.scatter(xs=t, ys=np.imag(test_signal_ht), zs=test_signal, zdir='z', label='Analytic Representation')
fig.tight_layout(pad=1) # do this to prevent xlabels on upper plots overlapping title on lower plot
fig, ax = plt.subplots()
ax.plot(t, np.real(test_signal_ht), label='real')
ax.plot(t, np.imag(test_signal_ht), label='imaginary', color='green')
ax.plot(t, np.abs(test_signal_ht), label='envelope', color='red')
ax.legend(loc='upper right')
norm = np.sqrt(np.sum(np.square(np.abs(test_signal_ht))))  # normalize so the squared magnitudes sum to 1
normalized_test_signal_ht = test_signal_ht/norm
np.sum(np.square(np.abs(normalized_test_signal_ht)))
fig, ax = plt.subplots()
ax.plot(t, np.real(normalized_test_signal_ht), label='real')
ax.plot(t, np.imag(normalized_test_signal_ht), label='imaginary', color='green')
ax.plot(t, np.abs(normalized_test_signal_ht), label='envelope', color='red')
ax.legend(loc='upper right')
time_domain_squared_amp = np.square(np.abs(test_signal_ht))
freq_domain_squared_amp = np.square(np.abs(np.fft.fft(test_signal_ht)))
fig, ax = plt.subplots(ncols=2, figsize=(14, 4))
ax[0].plot(t, time_domain_squared_amp)
ax[0].set_title('Time Domain Squared Amplitude')
ax[1].plot(freq_domain_squared_amp)
ax[1].set_title('Frequency Domain Squared Amplitude')
###Output
_____no_output_____ |
Practice6/3_char_rnn_train.ipynb | ###Markdown
chars[0] converts an index to a char; vocab['a'] converts a char to an index
###Code
# Now convert all text to index using vocab!
corpus = np.array(list(map(vocab.get, data)))
print ("Type of 'corpus' is %s, shape is %s, and length is %d"
% (type(corpus), corpus.shape, len(corpus)))
check_len = 10
print ("\n'corpus' looks like %s" % (corpus[0:check_len]))
for i in range(check_len):
_wordidx = corpus[i]
print ("[%d/%d] chars[%02d] corresponds to '%s'"
% (i, check_len, _wordidx, chars[_wordidx]))
# Generate batch data
batch_size = 50
seq_length = 200
num_batches = int(corpus.size / (batch_size * seq_length))
# First, reduce the length of corpus to fit batch_size
corpus_reduced = corpus[:(num_batches*batch_size*seq_length)]
xdata = corpus_reduced
ydata = np.copy(xdata)
ydata[:-1] = xdata[1:]
ydata[-1] = xdata[0]
print ('xdata is ... %s and length is %d' % (xdata, xdata.size))
print ('ydata is ... %s and length is %d' % (ydata, xdata.size))
print ("")
# Second, make batch
xbatches = np.split(xdata.reshape(batch_size, -1), num_batches, 1)
ybatches = np.split(ydata.reshape(batch_size, -1), num_batches, 1)
print ("Type of 'xbatches' is %s and length is %d"
% (type(xbatches), len(xbatches)))
print ("Type of 'ybatches' is %s and length is %d"
% (type(ybatches), len(ybatches)))
print ("")
# How can we access to xbatches??
nbatch = 5
temp = xbatches[0:nbatch]
print ("Type of 'temp' is %s and length is %d"
% (type(temp), len(temp)))
for i in range(nbatch):
temp2 = temp[i]
print ("Type of 'temp[%d]' is %s and shape is %s" % (i, type(temp2), temp2.shape,))
###Output
xdata is ... [36 22 7 ..., 11 25 3] and length is 1700000
ydata is ... [22 7 0 ..., 25 3 36] and length is 1700000
Type of 'xbatches' is <type 'list'> and length is 170
Type of 'ybatches' is <type 'list'> and length is 170
Type of 'temp' is <type 'list'> and length is 5
Type of 'temp[0]' is <type 'numpy.ndarray'> and shape is (50, 200)
Type of 'temp[1]' is <type 'numpy.ndarray'> and shape is (50, 200)
Type of 'temp[2]' is <type 'numpy.ndarray'> and shape is (50, 200)
Type of 'temp[3]' is <type 'numpy.ndarray'> and shape is (50, 200)
Type of 'temp[4]' is <type 'numpy.ndarray'> and shape is (50, 200)
###Markdown
Now, we are ready to make our RNN model with seq2seq
###Code
# Important RNN parameters
vocab_size = len(vocab)
rnn_size = 128
num_layers = 2
grad_clip = 5.
def unit_cell():
return tf.contrib.rnn.BasicLSTMCell(rnn_size,state_is_tuple=True,reuse=tf.get_variable_scope().reuse)
cell = tf.contrib.rnn.MultiRNNCell([unit_cell() for _ in range(num_layers)])
input_data = tf.placeholder(tf.int32, [batch_size, seq_length])
targets = tf.placeholder(tf.int32, [batch_size, seq_length])
istate = cell.zero_state(batch_size, tf.float32)
# Weigths
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.split(tf.nn.embedding_lookup(embedding, input_data), seq_length, 1)
inputs = [tf.squeeze(_input, [1]) for _input in inputs]
# Output
def loop(prev, _):
prev = tf.nn.xw_plus_b(prev, softmax_w, softmax_b)
prev_symbol = tf.stop_gradient(tf.argmax(prev, 1))
return tf.nn.embedding_lookup(embedding, prev_symbol)
"""
loop_function: If not None, this function will be applied to the i-th output
in order to generate the i+1-st input, and decoder_inputs will be ignored,
except for the first element ("GO" symbol).
"""
outputs, last_state = tf.contrib.rnn.static_rnn(cell, inputs, istate
, scope='rnnlm')
output = tf.reshape(tf.concat(outputs, 1), [-1, rnn_size])
logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
probs = tf.nn.softmax(logits)
# Loss
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example([logits], # Input
[tf.reshape(targets, [-1])], # Target
[tf.ones([batch_size * seq_length])], # Weight
vocab_size)
# Optimizer
cost = tf.reduce_sum(loss) / batch_size / seq_length
final_state = last_state
lr = tf.Variable(0.0, trainable=False)
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
_optm = tf.train.AdamOptimizer(lr)
optm = _optm.apply_gradients(zip(grads, tvars))
print ("Network Ready")
# Train the model!
num_epochs = 50
save_every = 500
learning_rate = 0.002
decay_rate = 0.97
sess = tf.Session()
sess.run(tf.initialize_all_variables())
summary_writer = tf.summary.FileWriter(save_dir, graph=sess.graph)
saver = tf.train.Saver(tf.all_variables())
init_time = time.time()
for epoch in range(num_epochs):
# Learning rate scheduling
sess.run(tf.assign(lr, learning_rate * (decay_rate ** epoch)))
state = sess.run(istate)
batchidx = 0
for iteration in range(num_batches):
start_time = time.time()
randbatchidx = np.random.randint(num_batches)
xbatch = xbatches[batchidx]
ybatch = ybatches[batchidx]
batchidx = batchidx + 1
# Note that, num_batches = len(xbatches)
# Train!
train_loss, state, _ = sess.run([cost, final_state, optm]
, feed_dict={input_data: xbatch, targets: ybatch, istate: state})
total_iter = epoch*num_batches + iteration
end_time = time.time();
duration = end_time - start_time
if total_iter % 100 == 0:
print ("[%d/%d] cost: %.4f / Each batch learning took %.4f sec"
% (total_iter, num_epochs*num_batches, train_loss, duration))
if total_iter % save_every == 0:
ckpt_path = os.path.join(save_dir, 'model.ckpt')
saver.save(sess, ckpt_path, global_step = total_iter)
# Save network!
print("model saved to '%s'" % (ckpt_path))
###Output
WARNING:tensorflow:From <ipython-input-9-c08af8068626>:8: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
WARNING:tensorflow:From <ipython-input-9-c08af8068626>:10: all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Please use tf.global_variables instead.
[0/8500] cost: 4.6006 / Each batch learning took 2.2222 sec
model saved to 'data/linux_kernel/model.ckpt'
[100/8500] cost: 3.1259 / Each batch learning took 0.3366 sec
[200/8500] cost: 2.5992 / Each batch learning took 0.3258 sec
[300/8500] cost: 2.4603 / Each batch learning took 0.3260 sec
[400/8500] cost: 2.2591 / Each batch learning took 0.3136 sec
[500/8500] cost: 2.0035 / Each batch learning took 0.3140 sec
model saved to 'data/linux_kernel/model.ckpt'
[600/8500] cost: 1.9589 / Each batch learning took 0.3695 sec
[700/8500] cost: 1.8066 / Each batch learning took 0.3130 sec
[800/8500] cost: 1.7801 / Each batch learning took 0.3119 sec
[900/8500] cost: 1.7433 / Each batch learning took 0.4185 sec
[1000/8500] cost: 1.6289 / Each batch learning took 0.3153 sec
model saved to 'data/linux_kernel/model.ckpt'
[1100/8500] cost: 1.6194 / Each batch learning took 0.3388 sec
[1200/8500] cost: 1.4603 / Each batch learning took 0.3129 sec
[1300/8500] cost: 1.5877 / Each batch learning took 0.3141 sec
[1400/8500] cost: 1.5235 / Each batch learning took 0.3087 sec
[1500/8500] cost: 1.5317 / Each batch learning took 0.3440 sec
model saved to 'data/linux_kernel/model.ckpt'
[1600/8500] cost: 1.5362 / Each batch learning took 0.4632 sec
[1700/8500] cost: 1.4946 / Each batch learning took 0.3351 sec
[1800/8500] cost: 1.4392 / Each batch learning took 0.3374 sec
[1900/8500] cost: 1.4224 / Each batch learning took 0.3323 sec
[2000/8500] cost: 1.4797 / Each batch learning took 0.3115 sec
model saved to 'data/linux_kernel/model.ckpt'
[2100/8500] cost: 1.4381 / Each batch learning took 0.3863 sec
[2200/8500] cost: 1.3570 / Each batch learning took 0.3080 sec
[2300/8500] cost: 1.3689 / Each batch learning took 0.3120 sec
[2400/8500] cost: 1.3241 / Each batch learning took 0.3174 sec
[2500/8500] cost: 1.3431 / Each batch learning took 0.3326 sec
model saved to 'data/linux_kernel/model.ckpt'
[2600/8500] cost: 1.3311 / Each batch learning took 0.4586 sec
[2700/8500] cost: 1.2888 / Each batch learning took 0.3147 sec
[2800/8500] cost: 1.3359 / Each batch learning took 0.3262 sec
[2900/8500] cost: 1.1899 / Each batch learning took 0.3310 sec
[3000/8500] cost: 1.3265 / Each batch learning took 0.3324 sec
model saved to 'data/linux_kernel/model.ckpt'
[3100/8500] cost: 1.2806 / Each batch learning took 0.5395 sec
[3200/8500] cost: 1.3113 / Each batch learning took 0.3448 sec
[3300/8500] cost: 1.3262 / Each batch learning took 0.3422 sec
[3400/8500] cost: 1.3011 / Each batch learning took 0.3195 sec
[3500/8500] cost: 1.2781 / Each batch learning took 0.3138 sec
model saved to 'data/linux_kernel/model.ckpt'
[3600/8500] cost: 1.2607 / Each batch learning took 0.3156 sec
[3700/8500] cost: 1.2897 / Each batch learning took 0.4064 sec
[3800/8500] cost: 1.2809 / Each batch learning took 0.3063 sec
[3900/8500] cost: 1.2301 / Each batch learning took 0.3330 sec
[4000/8500] cost: 1.2372 / Each batch learning took 0.3157 sec
model saved to 'data/linux_kernel/model.ckpt'
[4100/8500] cost: 1.2088 / Each batch learning took 0.3536 sec
[4200/8500] cost: 1.2277 / Each batch learning took 0.3146 sec
[4300/8500] cost: 1.2095 / Each batch learning took 0.3148 sec
[4400/8500] cost: 1.1840 / Each batch learning took 0.3425 sec
[4500/8500] cost: 1.2459 / Each batch learning took 0.3368 sec
model saved to 'data/linux_kernel/model.ckpt'
[4600/8500] cost: 1.0941 / Each batch learning took 0.4124 sec
[4700/8500] cost: 1.2265 / Each batch learning took 0.3164 sec
[4800/8500] cost: 1.1862 / Each batch learning took 0.3307 sec
[4900/8500] cost: 1.2198 / Each batch learning took 0.3371 sec
[5000/8500] cost: 1.2345 / Each batch learning took 0.3298 sec
model saved to 'data/linux_kernel/model.ckpt'
[5100/8500] cost: 1.2081 / Each batch learning took 0.3418 sec
[5200/8500] cost: 1.2043 / Each batch learning took 0.3105 sec
[5300/8500] cost: 1.1929 / Each batch learning took 0.3377 sec
[5400/8500] cost: 1.2155 / Each batch learning took 0.3373 sec
[5500/8500] cost: 1.2052 / Each batch learning took 0.3908 sec
model saved to 'data/linux_kernel/model.ckpt'
[5600/8500] cost: 1.1683 / Each batch learning took 0.3207 sec
[5700/8500] cost: 1.1695 / Each batch learning took 0.3358 sec
[5800/8500] cost: 1.1485 / Each batch learning took 0.3392 sec
[5900/8500] cost: 1.1671 / Each batch learning took 0.3451 sec
[6000/8500] cost: 1.1481 / Each batch learning took 0.3391 sec
model saved to 'data/linux_kernel/model.ckpt'
[6100/8500] cost: 1.1262 / Each batch learning took 0.3186 sec
[6200/8500] cost: 1.1943 / Each batch learning took 0.4622 sec
[6300/8500] cost: 1.0425 / Each batch learning took 0.3805 sec
[6400/8500] cost: 1.1697 / Each batch learning took 0.3373 sec
[6500/8500] cost: 1.1365 / Each batch learning took 0.3838 sec
model saved to 'data/linux_kernel/model.ckpt'
[6600/8500] cost: 1.1704 / Each batch learning took 0.3196 sec
[6700/8500] cost: 1.1841 / Each batch learning took 0.3364 sec
[6800/8500] cost: 1.1521 / Each batch learning took 0.3404 sec
[6900/8500] cost: 1.1598 / Each batch learning took 0.3631 sec
[7000/8500] cost: 1.1523 / Each batch learning took 0.3372 sec
model saved to 'data/linux_kernel/model.ckpt'
[7100/8500] cost: 1.1689 / Each batch learning took 0.3289 sec
[7200/8500] cost: 1.1579 / Each batch learning took 0.3935 sec
[7300/8500] cost: 1.1316 / Each batch learning took 0.3154 sec
[7400/8500] cost: 1.1284 / Each batch learning took 0.3672 sec
[7500/8500] cost: 1.1087 / Each batch learning took 0.3855 sec
model saved to 'data/linux_kernel/model.ckpt'
[7600/8500] cost: 1.1276 / Each batch learning took 0.5031 sec
[7700/8500] cost: 1.1090 / Each batch learning took 0.3774 sec
[7800/8500] cost: 1.0901 / Each batch learning took 0.4558 sec
[7900/8500] cost: 1.1609 / Each batch learning took 0.4658 sec
[8000/8500] cost: 1.0116 / Each batch learning took 0.3612 sec
model saved to 'data/linux_kernel/model.ckpt'
[8100/8500] cost: 1.1309 / Each batch learning took 0.3854 sec
[8200/8500] cost: 1.1066 / Each batch learning took 0.3160 sec
[8300/8500] cost: 1.1417 / Each batch learning took 0.3196 sec
[8400/8500] cost: 1.1568 / Each batch learning took 0.3270 sec
###Markdown
Run the command line `tensorboard --logdir=/tmp/tf_logs/char_rnn_tutorial` and open http://localhost:6006/ in your web browser
###Code
print ("Done!! It took %.4f second. " %(time.time() - init_time))
###Output
Done!! It took 5238.4040 second.
|
stats-newtextbook-python/samples/6-3-ロジスティック回帰.ipynb | ###Markdown
Part 6: Generalized Linear Models | An Introduction to Statistics with Python. Chapter 3: Logistic Regression. Preparing for the analysis
###Code
# Libraries for numerical computation
import numpy as np
import pandas as pd
import scipy as sp
from scipy import stats
# Libraries for drawing graphs
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
# Libraries for estimating statistical models (may emit warnings)
import statsmodels.formula.api as smf
import statsmodels.api as sm
# Set the number of displayed digits
%precision 3
# Display plots inline in the Jupyter Notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Implementation: loading and plotting the data
###Code
# Load the data
test_result = pd.read_csv("6-3-1-logistic-regression.csv")
print(test_result.head(3))
# Plot the data
sns.barplot(x = "hours",y = "result",
data = test_result, palette='gray_r')
# Pass rate for each number of study hours
print(test_result.groupby("hours").mean())
###Output
result
hours
0 0.0
1 0.0
2 0.1
3 0.1
4 0.4
5 0.4
6 0.9
7 0.8
8 0.9
9 1.0
###Markdown
Implementation: logistic regression
###Code
# Fit the model
mod_glm = smf.glm(formula = "result ~ hours",
data = test_result,
family=sm.families.Binomial()).fit()
# Reference: explicitly specifying the link function
logistic_reg = smf.glm(formula = "result ~ hours",
data = test_result,
family=sm.families.Binomial(link=sm.families.links.logit)).fit()
###Output
_____no_output_____
###Markdown
Implementation: printing the logistic regression results
###Code
# Print the results
mod_glm.summary()
###Output
_____no_output_____
###Markdown
Implementation: model selection with AIC
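For reference, $\mathrm{AIC} = -2\ln L + 2k$, where $L$ is the maximized likelihood and $k$ is the number of estimated parameters; the model with the smaller AIC is preferred.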
###Code
# Null model
mod_glm_null = smf.glm(
"result ~ 1", data = test_result,
family=sm.families.Binomial()).fit()
# Compare AIC
print("Nullモデル :", mod_glm_null.aic.round(3))
print("変数入りモデル:", mod_glm.aic.round(3))
###Output
Null model      : 139.989
Model with hours: 72.028
###Markdown
Implementation: plotting the logistic regression curve
###Code
# Plot the logistic regression curve with lmplot
sns.lmplot(x = "hours", y = "result",
data = test_result,
logistic = True,
scatter_kws = {"color": "black"},
line_kws = {"color": "black"},
x_jitter = 0.1, y_jitter = 0.02)
###Output
_____no_output_____
###Markdown
Implementation: predicting the success probability
###Code
# Sequence from 0 to 9 in steps of 1
exp_val = pd.DataFrame({
"hours": np.arange(0, 10, 1)
})
# Predicted success probabilities
pred = mod_glm.predict(exp_val)
pred
###Output
_____no_output_____
###Markdown
Relationship between the logistic regression coefficients and the odds ratio
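A brief derivation of what the cells below check numerically: the model gives $\log(\text{odds}) = \beta_0 + \beta_1 \cdot \text{hours}$, so for one extra hour of study

$$\log\frac{\text{odds}(h+1)}{\text{odds}(h)} = \beta_1, \qquad \frac{\text{odds}(h+1)}{\text{odds}(h)} = e^{\beta_1}$$

i.e. the difference in log odds equals the `hours` coefficient, and its exponential is the odds ratio.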
###Code
# Pass probability with 1 hour of study
exp_val_1 = pd.DataFrame({"hours": [1]})
pred_1 = mod_glm.predict(exp_val_1)
# Pass probability with 2 hours of study
exp_val_2 = pd.DataFrame({"hours": [2]})
pred_2 = mod_glm.predict(exp_val_2)
# Odds
odds_1 = pred_1 / (1 - pred_1)
odds_2 = pred_2 / (1 - pred_2)
# Log odds ratio
sp.log(odds_2 / odds_1)
# Coefficient
mod_glm.params["hours"]
# Note: convert back to the odds ratio
sp.exp(mod_glm.params["hours"])
###Output
_____no_output_____ |
notebooks/08_extended_refactored.ipynb | ###Markdown
Deep Reinforcement Learning in Action - Chapter 8 - Intrinsic Curiosity Module - Refactored and Extended

- save model, load model
- save plots of losses and episode lengths
- bug fixes, including fixing the large memory footprint from saving tensors with gradients to a list
- exposes additional hparams through the param dictionary
###Code
import gym
from nes_py.wrappers import JoypadSpace # A
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT, COMPLEX_MOVEMENT # B
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from collections import deque
from tqdm.notebook import trange
from random import shuffle
import matplotlib.pyplot as plt
from skimage.transform import resize # A
import numpy as np
###Output
_____no_output_____
###Markdown
Random Agent Mario
###Code
env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = JoypadSpace(env, COMPLEX_MOVEMENT) # C
done = True
for step in range(2500): # D
if done:
state = env.reset()
state, reward, done, info = env.step(env.action_space.sample())
env.render()
env.close()
###Output
_____no_output_____
###Markdown
Downscaling
###Code
# code intentionally duplicated in training code block
def downscale_obs(obs, new_size=(42, 42), to_gray=True):
if to_gray:
return resize(obs, new_size, anti_aliasing=True).max(axis=2) # B
else:
return resize(obs, new_size, anti_aliasing=True)
env = gym_super_mario_bros.make('SuperMarioBros-v0')
plt.imshow(env.render("rgb_array"))
plt.imshow(downscale_obs(env.render("rgb_array")))
env.close()
###Output
_____no_output_____
###Markdown
Train Agent
###Code
# Curiosity-driven Exploration by Self-supervised Prediction
# https://pathak22.github.io/noreward-rl/resources/icml17.pdf
# https://github.com/pathak22/noreward-rl/blob/master/src/constants.py
# TODO: distributional, n-step
import logging
import sys
from einops import rearrange
LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
LOGGER.addHandler(handler)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
LOGGER.info('Pytorch using device: %s', device)
# Listing 8.9
default_params_book = {
'batch_size': 150,
'beta': 0.2,
'lambda': 0.1,
'eta': 1.0,
'gamma': 0.2,
'max_episode_len': 100, # this is really the max episode length without exceeding min_progress
'min_progress': 15,
'action_repeats': 6, # something is hard-coded where it will throw an error if < 3
'frames_per_state': 3, # book uses 3 and hardcoded it into the models
'epochs': 5000,
'eps': 0.15,
'test_eps': 0.15,
'switch_to_eps_greedy': 1000,
'use_extrinsic': False,
'experience_replay_length': 1000,
'learning_rate': 0.001,
'include_LSTM': False,
}
default_params_paper = {
'batch_size': 1000,
'beta': 0.2,
'lambda': 1.0,
'eta': 0.01,
'gamma': 0.99,
'max_episode_len': 100, # this is really the max episode length without exceeding min_progress
'min_progress': 15,
# something about the creation of state2 on first pass causes it to throw an error if < frames_per_state
# now fixed
'action_repeats': 6,
'frames_per_state': 6, # book uses 3 and hardcoded it into the models
'epochs': 6000,
'eps': 0.15,
'test_eps': 0.15,
'switch_to_eps_greedy': 1000,
'use_extrinsic': True,
'experience_replay_length': 1000,
'learning_rate': 0.001,
'include_LSTM': True,
}
params = {
'batch_size': 1000,
'beta': 0.2,
'lambda': 1.0,
'eta': 0.01,
'gamma': 0.99,
'max_episode_len': 100, # this is really the max episode length without exceeding min_progress
'min_progress': 15,
# something about the creation of state2 on first pass causes it to throw an error if < frames_per_state
# now fixed
'action_repeats': 6,
'frames_per_state': 6, # book uses 3 and hardcoded it into the models
'epochs': 6000,
'eps': 0.15,
'test_eps': 0.15,
'switch_to_eps_greedy': 1000,
'use_extrinsic': True,
'experience_replay_length': 1000,
'learning_rate': 0.001,
'include_LSTM': True,
}
# Listing 8.2
def downscale_obs(obs, new_size=(42, 42), to_gray=True):
if to_gray:
return resize(obs, new_size, anti_aliasing=True).max(axis=2) # B
else:
return resize(obs, new_size, anti_aliasing=True)
# Listing 8.4
def prepare_state(state): # A
return torch.from_numpy(downscale_obs(state, to_gray=True)).float().unsqueeze(dim=0)
def prepare_multi_state(state1, state2):  # B
    state1 = state1.clone()
    tmp = torch.from_numpy(downscale_obs(state2, to_gray=True)).float()
    # shift the frame buffer by one and append the newest frame,
    # so this works for any params['frames_per_state'], not just 3
    for j in range(state1.shape[1] - 1):
        state1[0][j] = state1[0][j + 1]
    state1[0][-1] = tmp
    return state1
def prepare_initial_state(state, N=params['frames_per_state']): # C
state_ = torch.from_numpy(downscale_obs(state, to_gray=True)).float()
tmp = state_.repeat((N, 1, 1))
return tmp.unsqueeze(dim=0)
# Listing 8.5
def policy(qvalues, eps=None): # A
if eps is not None:
if torch.rand(1) < eps:
return torch.randint(low=0, high=7, size=(1,))
else:
return torch.argmax(qvalues)
else:
# if epsilon (eps) is not provided, use a softmax policy by sampling from the softmax
# using the torch.multinomial function
LOGGER.debug("q values shape: %s", qvalues.shape)
return torch.multinomial(F.softmax(F.normalize(qvalues), dim=1), num_samples=1)
# Listing 8.6
class ExperienceReplay:
def __init__(self, N=500, batch_size=100):
self.N = N # A
self.batch_size = batch_size # B
self.memory = []
self.counter = 0
def add_memory(self, state1, action, reward, state2):
self.counter += 1
if self.counter % 500 == 0: # C
self.shuffle_memory()
if len(self.memory) < self.N: # D
self.memory.append((state1, action, reward, state2))
else:
rand_index = np.random.randint(0, self.N-1)
self.memory[rand_index] = (state1, action, reward, state2)
def shuffle_memory(self): # E
shuffle(self.memory)
def get_batch(self): # F
if len(self.memory) < self.batch_size:
batch_size = len(self.memory)
else:
batch_size = self.batch_size
if len(self.memory) < 1:
LOGGER.error("Error: No data in memory.")
return None
# G
ind = np.random.choice(np.arange(len(self.memory)),
batch_size, replace=False)
batch = [self.memory[i] for i in ind] # batch is a list of tuples
state1_batch = torch.stack([x[0].squeeze(dim=0) for x in batch], dim=0)
action_batch = torch.Tensor([x[1] for x in batch]).long()
reward_batch = torch.Tensor([x[2] for x in batch])
state2_batch = torch.stack([x[3].squeeze(dim=0) for x in batch], dim=0)
LOGGER.debug(state2_batch.shape)
return state1_batch, action_batch, reward_batch, state2_batch
# Listing 8.7
class Phi(nn.Module): # A
def __init__(self):
super(Phi, self).__init__()
self.conv1 = nn.Conv2d(
params['frames_per_state'], 32, kernel_size=(3, 3), stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, kernel_size=(3, 3), stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, kernel_size=(3, 3), stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, kernel_size=(3, 3), stride=2, padding=1)
def forward(self, x):
x = F.normalize(x)
y = F.elu(self.conv1(x))
y = F.elu(self.conv2(y))
y = F.elu(self.conv3(y))
y = F.elu(self.conv4(y)) # size [1, 32, 3, 3] batch, channels, 3 x 3
y = y.flatten(start_dim=1) # size N, 288
return y
class Gnet(nn.Module): # B
def __init__(self):
super(Gnet, self).__init__()
self.linear1 = nn.Linear(576, 256)
self.linear2 = nn.Linear(256, 12)
def forward(self, state1, state2):
x = torch.cat((state1, state2), dim=1)
y = F.relu(self.linear1(x))
y = self.linear2(y)
y = F.softmax(y, dim=1)
return y
class Fnet(nn.Module): # C
def __init__(self):
super(Fnet, self).__init__()
self.linear1 = nn.Linear(300, 256)
self.linear2 = nn.Linear(256, 288)
def forward(self, state, action):
action_ = torch.zeros(action.shape[0], 12) # D
indices = torch.stack(
(torch.arange(action.shape[0]), action.squeeze()), dim=0)
indices = indices.tolist()
action_[indices] = 1.
x = torch.cat((state, action_), dim=1)
y = F.relu(self.linear1(x))
y = self.linear2(y)
return y
# Listing 8.8
class Qnetwork(nn.Module):
def __init__(self):
super(Qnetwork, self).__init__()
# in_channels, out_channels, kernel_size, stride=1, padding=0
self.conv1 = nn.Conv2d(in_channels=params['frames_per_state'], out_channels=32, kernel_size=(
3, 3), stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, kernel_size=(3, 3), stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, kernel_size=(3, 3), stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, kernel_size=(3, 3), stride=2, padding=1)
self.linear1 = nn.Linear(288, 100)
self.linear2 = nn.Linear(100, 12)
self.batchnorm1 = nn.BatchNorm2d(32)
self.batchnorm2 = nn.BatchNorm2d(32)
self.batchnorm3 = nn.BatchNorm2d(32)
self.batchnorm4 = nn.BatchNorm2d(32)
self.lstm = nn.LSTM(input_size=9, hidden_size=288, batch_first=True)
def forward(self, x):
x = F.normalize(x)
LOGGER.debug("DQN input shape: %s", x.shape)
y = F.elu(self.conv1(x))
LOGGER.debug("DQN conv1 output shape: %s", y.shape)
y = self.batchnorm1(y)
y = F.elu(self.conv2(y))
y = self.batchnorm2(y)
y = F.elu(self.conv3(y))
y = self.batchnorm3(y)
y = F.elu(self.conv4(y))
y = self.batchnorm4(y)
LOGGER.debug("shape before lstm: %s", y.shape)
if params['include_LSTM'] == True:
y = rearrange(y, 'batch channels x y -> batch channels (x y)')
_, (_, y) = self.lstm(y)
LOGGER.debug("shape after lstm: %s", y.shape)
y = rearrange(
y, 'd batch hidden -> (d batch) hidden', d=1, hidden=288)
else:
y = y.flatten(start_dim=2)
y = y.view(y.shape[0], -1, 32)
y = y.flatten(start_dim=1)
y = F.elu(self.linear1(y))
y = self.linear2(y) # size N, 12
return y
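# Added note (not in the original listing): rough shape walk-through for the Q-network,
# assuming 42x42 downscaled grayscale frames as in the original book code (an assumption;
# the actual size is set by downscale_obs earlier in this notebook).
# input (N, frames_per_state, 42, 42) -> four stride-2 convs -> (N, 32, 3, 3).
# With include_LSTM: the map is reshaped to (N, 32, 9) and read as a 32-step sequence of
# 9 features; the final LSTM state tensor unpacked above (shape (1, N, 288)) is rearranged
# to (N, 288) before the linear head. Without the LSTM, the map is flattened to (N, 288)
# directly. Either way the head outputs 12 action values.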
# Listing 8.9
replay = ExperienceReplay(
N=params['experience_replay_length'], batch_size=params['batch_size'])
Qmodel = Qnetwork()
encoder = Phi()
forward_model = Fnet()
inverse_model = Gnet()
forward_loss = nn.MSELoss(reduction='none')
inverse_loss = nn.CrossEntropyLoss(reduction='none')
qloss = nn.MSELoss()
all_model_params = list(Qmodel.parameters()) + list(encoder.parameters()) # A
all_model_params += list(forward_model.parameters()) + \
list(inverse_model.parameters())
opt = optim.Adam(lr=params['learning_rate'], params=all_model_params)
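# Added note (not in the original listing): a single Adam optimizer holds the parameters of
# the DQN, the encoder Phi, and both ICM heads, so each opt.step() in the training loop
# updates all four modules against the combined loss.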
# Listing 8.10
def loss_fn(q_loss, inverse_loss, forward_loss):
"""
book: minimize[λ × Qloss + (1 – β)Floss + β × Gloss]
paper: minimize[λ × Qloss + (1 – β)I_loss + (β)F_loss]
forward model is F
inverse model is G
"""
loss_ = (1 - params['beta']) * inverse_loss
loss_ += params['beta'] * forward_loss
loss_ = loss_.sum() / loss_.flatten().shape[0]
loss = loss_ + params['lambda'] * q_loss
return loss
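# Added note (not in the original listing): worked example of the weighting above, using
# illustrative values beta = 0.2 and lambda = 0.1 (assumptions; the real values live in params):
#   weighted ICM loss = mean((1 - 0.2) * inverse_loss + 0.2 * forward_loss)
#   total loss        = weighted ICM loss + 0.1 * q_loss
# so beta trades off the inverse and forward losses and lambda scales the Q-learning term.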
def reset_env():
"""
Reset the environment and return a new initial state
"""
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
return state1
# Listing 8.11
def ICM(state1, action, state2, forward_scale=1., inverse_scale=1e4):
state1_hat = encoder(state1) # A
state2_hat = encoder(state2)
state2_hat_pred = forward_model(state1_hat.detach(), action.detach()) # B
forward_pred_err = forward_scale * forward_loss(state2_hat_pred,
state2_hat.detach()).sum(dim=1).unsqueeze(dim=1)
pred_action = inverse_model(state1_hat, state2_hat) # C
inverse_pred_err = inverse_scale * inverse_loss(pred_action,
action.detach().flatten()).unsqueeze(dim=1)
return forward_pred_err, inverse_pred_err
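# Added note (not in the original listing): the per-sample forward prediction error returned
# here is the curiosity signal. minibatch_train() scales it by 1/eta and uses it as the
# intrinsic reward, so transitions the forward model predicts poorly yield larger rewards.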
# Listing 8.12
def minibatch_train(use_extrinsic=True):
state1_batch, action_batch, reward_batch, state2_batch = replay.get_batch()
action_batch = action_batch.view(action_batch.shape[0], 1) # A
reward_batch = reward_batch.view(reward_batch.shape[0], 1)
forward_pred_err, inverse_pred_err = ICM(
state1_batch, action_batch, state2_batch) # B
i_reward = (1. / params['eta']) * forward_pred_err # C
reward = i_reward.detach() # D
if use_extrinsic: # E
reward += reward_batch
qvals = Qmodel(state2_batch) # F
reward += params['gamma'] * torch.max(qvals)
reward_pred = Qmodel(state1_batch)
reward_target = reward_pred.clone()
indices = torch.stack((torch.arange(action_batch.shape[0]),
action_batch.squeeze()), dim=0)
indices = indices.tolist()
reward_target[indices] = reward.squeeze()
q_loss = 1e5 * qloss(F.normalize(reward_pred),
F.normalize(reward_target.detach()))
return forward_pred_err, inverse_pred_err, q_loss
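# Added note (not in the original listing): torch.max(qvals) above takes the maximum over the
# entire batch of next-state Q-values, so every sample shares one bootstrap value. A
# per-sample target would use torch.max(qvals, dim=1)[0].unsqueeze(1) instead; the code is
# left as written here.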
# Listing 8.13
env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = JoypadSpace(env, COMPLEX_MOVEMENT)  # restrict the controller to the 12 COMPLEX_MOVEMENT button combinations
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
losses = []
episode_length = 0
# this prevents storing more action repeats than frames_per_state
state_deque = deque(maxlen=params['frames_per_state'])
e_reward = 0.
# keep track of the last x position in order to reset it if there's no forward progress
last_x_pos = env.env.env._x_position
ep_lengths = []
for i in trange(params['epochs']):
opt.zero_grad()
episode_length += 1
# run DQN forward to get action-value predictions
q_val_pred = Qmodel(state1)
# after x epochs, switch to the epsilon-greedy policy
if i > params['switch_to_eps_greedy']:
action = int(policy(q_val_pred, params['eps']))
else:
action = int(policy(q_val_pred))
# repeat action selected by policy x times to speed up learning
for j in range(params['action_repeats']):
state2, e_reward_, done, info = env.step(action)
last_x_pos = info['x_pos']
if done:
state1 = reset_env()
break
e_reward += e_reward_
state_deque.append(prepare_state(state2))
while len(state_deque) < params['frames_per_state']:
print("adding to queue")
# this should prevent the error caused when action_repeats < frames_per_state
state_deque.append(prepare_state(state2))
# convert deque object into a tensor
state2 = torch.stack(list(state_deque), dim=1)
LOGGER.debug("state2.shape: %s", state2.shape)
# add single experience to replay buffer
replay.add_memory(state1, action, e_reward, state2)
e_reward = 0
# if Mario is not making enough forward progress, restart game
if episode_length > params['max_episode_len']:
if (info['x_pos'] - last_x_pos) < params['min_progress']:
done = True
else:
last_x_pos = info['x_pos']
if done:
ep_lengths.append(info['x_pos'])
state1 = reset_env()
last_x_pos = env.env.env._x_position
episode_length = 0
else:
# this causes an error when action_repeats < frames_per_state
LOGGER.debug("state2.shape: %s", state2.shape)
state1 = state2
if params['experience_replay_length'] < params['batch_size']:
raise ValueError(
"params['experience_replay_length'] < params['batch_size'] will cause model not to train")
if len(replay.memory) < params['batch_size']:
continue
forward_pred_err, inverse_pred_err, q_loss = minibatch_train(
use_extrinsic=params['use_extrinsic']) # get errors for one mini-batch of data from replay buffer
# compute overall loss
loss = loss_fn(q_loss=q_loss,
inverse_loss=inverse_pred_err,
forward_loss=forward_pred_err)
# it's important to not save the losses to a list directly because then you're saving the gradients as well
loss_list = (q_loss.mean().detach().numpy(), forward_pred_err.flatten().mean().detach().numpy(),
inverse_pred_err.flatten().mean().detach().numpy())
LOGGER.debug("loss list: %s", loss_list)
losses.append(loss_list)
loss.backward()
opt.step()
env.close()
###Output
2021-11-01 09:51:21,027 - __main__ - INFO - Pytorch using device: cuda
###Markdown
Name model for saving files
###Code
model_version_name = 'mario_curiosity_model_with_extrinsic_reward_long_frames_per_state_v5'
###Output
_____no_output_____
###Markdown
Plot losses and episode length
###Code
## now handling this inside the training function
# losses_ = np.array([tuple([loss.detach().numpy()
# for loss in loss_tensor_tuple]) for loss_tensor_tuple in losses])
losses_ = np.array(losses)
plt.figure(figsize=(8, 6))
plt.plot(np.log(losses_[:, 0]), label='Q loss')
plt.plot(np.log(losses_[:, 1]), label='Forward loss')
plt.plot(np.log(losses_[:, 2]), label='Inverse loss')
plt.legend()
plt.savefig(f'{model_version_name}_losses.png')
plt.show()
# https://stackoverflow.com/questions/55466298/pytorch-cant-call-numpy-on-variable-that-requires-grad-use-var-detach-num
# https://stackoverflow.com/questions/16940293/why-is-there-no-tuple-comprehension-in-python
plt.figure()
plt.plot(np.array(ep_lengths), label='Episode length')
# plt.xlabel('Training time')
# plt.ylabel('Episode length')
plt.legend()
plt.savefig(f'{model_version_name}_episode_lengths.png')
plt.show()
###Output
_____no_output_____
###Markdown
save model
###Code
import json
torch.save(Qmodel, f'{model_version_name}.pt')
with open(f'{model_version_name}_config.json', 'w') as outfile:
json.dump(params, outfile)
###Output
_____no_output_____
###Markdown
load model
###Code
Qmodel = torch.load(f'{model_version_name}.pt')
Qmodel.eval() # needed for dropout and/or batchnorm layers to function correctly for inference
###Output
_____no_output_____
###Markdown
Test Trained Agent
###Code
save_video = True
render_also = True
if save_video:
from gym.wrappers.monitoring.video_recorder import VideoRecorder
env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = JoypadSpace(env, COMPLEX_MOVEMENT)  # restrict the controller to the 12 COMPLEX_MOVEMENT button combinations
if save_video:
video_recorder = None
video_recorder = VideoRecorder(
env, f'{model_version_name}_gameplay.mp4', enabled=True)
done = True
state_deque = deque(maxlen=params['frames_per_state'])
for step in range(10000):
if done:
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
q_val_pred = Qmodel(state1)
action = int(policy(q_val_pred, params['test_eps']))
state2, reward, done, info = env.step(action)
state2 = prepare_multi_state(state1, state2)
state1 = state2
if save_video:
video_recorder.capture_frame()
if render_also:
env.render()
else:
env.render()
if save_video:
video_recorder.close()
video_recorder.enabled = False
env.close()
# # if you're done, run env.close()
# env.close()
###Output
_____no_output_____ |
Real_world_examples/Mining_rehabilitation.ipynb | ###Markdown
Tracking rehabilitation of mines * **Compatability:** Notebook currently compatible with both the `NCI` and `DEA Sandbox` environments* **Products used:** [ls8_fc_albers](https://explorer.sandbox.dea.ga.gov.au/ls8_fc_albers), [wofs_albers](https://explorer.sandbox.dea.ga.gov.au/wofs_albers) BackgroundLand rehabilitation is an important aspect of responsible mining.For example, The Department of Mines, Industry Regulation and Safety (DMIRS) maintain a Mining Rehabilitation Fund (MRF) that Western Australian mining operators contribute to.The fund is used to rehabilitate abandoned and legacy mines, and is underpinned by the Mining Rehabilitation Fund Act 2012.As part of the fund, tenement holders report ground disturbance, which can help DMIRS monitor how a mine's rehabilitation is going, as well as major disurbance events related to mining activity. At the moment, most mining organisations only review disturbance annually, often contracting out the service to third party surveying and ecological consulting agencies.While these providers generally provide excellent information, there are two main issues:* Annual visits give a very coarse view of how the mine is changing in time.* There is no way to validate or sanity check consultants reports without a site visit. Digital Earth Australia use caseRehabilitation and land disturbance can be monitored through satellite data by tracking the amount of vegetation and bare ground on the site compared with surrounding areas.A decrease in bare ground and increase in vegetation can be linked to positive rehabilitation.A slow increase or sharp spike in the amount of bare ground over a mining site may indicate increased disturbance, which is against the trend expected during rehabilitation efforts.This tracking can be achieved using the Fractional Cover data product from the Joint Remote Sensing Research Program, which is provided through DEA.Fractional Cover is derived from Landsat data, which has a revisit time of around two weeks for Australia, providing regular insight to a given mine's rehabilitation.This would allow companies to identify any disturbance events early in the year and take corrective action before the yearly reporting.It would also allow DMIRS to keep detailed records of how the mines they monitor are changing in time.Fractional Cover can also be used to validate the field reporting from surveying and ecological consultants before submitting reports.While reports from field surveys will provide more detail than most Earth Observation data products, such products provide the ability to provide context and validation of reports.For example, if the survey detects a disturbance, it may be hard to detect a reason.Fractional Cover can be used to identify the point in time, and possibly the cause of each disturbance event. DescriptionIn this example, the Landsat 8 Fractional Cover product is used to assess how land cover (specifically bare soil, green vegetation and non-green vegetation) is changing over time.The worked example below takes users through the code required to* Create a time series data cube over a mine site.* Create graphs to identify rehabilitation trends and disturbance events.* Interpret the results.*** Getting started**To run this analysis**, run all the cells in the notebook, starting with the "Load packages and apps" cell. 
Load packages and appsThis notebook works via two functions, which are referred to as apps: `load_miningrehab_data` and `run_miningrehab_app`.The apps allow the majority of the analysis code to be stored in another file, making the notebook easy to use and run.To view the code behind the apps, open the [notebookapp_miningrehab.py](../Scripts/notebookapp_miningrehab.py) file.
###Code
%matplotlib inline
import sys
import datacube
sys.path.append("../Scripts")
from notebookapp_miningrehab import load_miningrehab_data
from notebookapp_miningrehab import run_miningrehab_app
###Output
_____no_output_____
###Markdown
Load the dataThe `load_miningrehab_data()` command performs several key steps:* Load Fractional Cover and Water Observations from Space (WOfS) data for the study area.* Match the datasets to only retain data with the same time stamps.* Mask areas that are classified as water using WOfS.* Resample the masked Fractional Cover to get monthly average values.* Return the masked data for analysis.The masked data is stored in the `dataset_fc` object.As the command runs, feedback will be provided below the cell.**Please be patient**.The load is complete when the cell status goes from `[*]` to `[number]`.
###Code
dataset_fc = load_miningrehab_data()
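# Added sketch (not part of this notebook; the real logic lives in notebookapp_miningrehab.py).
# Conceptually, the load step described above does something like the following with the
# Open Data Cube API. The band names and query contents are assumptions based on the text,
# not the app's actual code, so this is illustrative only:
# import datacube
# from datacube.utils import masking
# dc = datacube.Datacube(app="mining_rehabilitation")
# query = dict(x=..., y=..., time=...)  # spatial/temporal extent of the mine site
# fc = dc.load(product="ls8_fc_albers", measurements=["BS", "PV", "NPV"], **query)
# wofs = dc.load(product="wofs_albers", like=fc)
# dry = masking.make_mask(wofs.water, dry=True)              # keep only dry, clear observations
# dataset_fc = fc.where(dry).resample(time="1M").mean()      # monthly average fractional cover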
###Output
Loading Fractional Cover for Landsat 8
Loading WoFS for Landsat 8
###Markdown
Run the mining appThe `run_mining_app()` command launches an interactive map.Drawing polygons within the boundary (which represents the area covered by the loaded data) will result in plots of the average bare, green and non-green cover in that area.Draw polygons by clicking the &11039; symbol in the app.The app works by taking the loaded data `dataset_fc` as an argument. > **Note:** When drawing polygons, draw one over the mine and one over the forest nearby, then the fractional cover values can be compared on the produced plot.
###Code
run_miningrehab_app(dataset_fc)
###Output
_____no_output_____
###Markdown
Drawing conclusionsHere are some questions to think about:* Rehabilitation can be indicated by either a decrease in bare cover, or an increase in either green or non-green cover. Can you find any evidence that rehabilitation is occurring?* What differences are there between polygons drawn over the mine site and those drawn over the forest? What similarities are there? *** Additional information**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).**Last modified:** January 2020**Compatible datacube version:**
###Code
print(datacube.__version__)
###Output
1.7
###Markdown
TagsBrowse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
###Code
**Tags**: :index:`sandbox compatible`, :index:`NCI compatible`, :index:`landsat 8`, :index:`fractional cover`, :index:`WOfS`, :index:`real world`, :index:`mining`, :index:`time series`, :index:`interactive`, :index:`widgets`
###Output
_____no_output_____
###Markdown
Tracking rehabilitation of mines * [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser* **Compatibility:** Notebook currently compatible with both the `NCI` and `DEA Sandbox` environments* **Products used:** [ls8_fc_albers](https://explorer.sandbox.dea.ga.gov.au/ls8_fc_albers), [wofs_albers](https://explorer.sandbox.dea.ga.gov.au/wofs_albers) BackgroundLand rehabilitation is an important aspect of responsible mining.For example, The Department of Mines, Industry Regulation and Safety (DMIRS) maintain a Mining Rehabilitation Fund (MRF) that Western Australian mining operators contribute to.The fund is used to rehabilitate abandoned and legacy mines, and is underpinned by the Mining Rehabilitation Fund Act 2012.As part of the fund, tenement holders report ground disturbance, which can help DMIRS monitor how a mine's rehabilitation is going, as well as major disurbance events related to mining activity. At the moment, most mining organisations only review disturbance annually, often contracting out the service to third party surveying and ecological consulting agencies.While these providers generally provide excellent information, there are two main issues:* Annual visits give a very coarse view of how the mine is changing in time.* There is no way to validate or sanity check consultants reports without a site visit. Digital Earth Australia use caseRehabilitation and land disturbance can be monitored through satellite data by tracking the amount of vegetation and bare ground on the site compared with surrounding areas.A decrease in bare ground and increase in vegetation can be linked to positive rehabilitation.A slow increase or sharp spike in the amount of bare ground over a mining site may indicate increased disturbance, which is against the trend expected during rehabilitation efforts.This tracking can be achieved using the Fractional Cover data product from the Joint Remote Sensing Research Program, which is provided through DEA.Fractional Cover is derived from Landsat data, which has a revisit time of around two weeks for Australia, providing regular insight to a given mine's rehabilitation.This would allow companies to identify any disturbance events early in the year and take corrective action before the yearly reporting.It would also allow DMIRS to keep detailed records of how the mines they monitor are changing in time.Fractional Cover can also be used to validate the field reporting from surveying and ecological consultants before submitting reports.While reports from field surveys will provide more detail than most Earth Observation data products, such products provide the ability to provide context and validation of reports.For example, if the survey detects a disturbance, it may be hard to detect a reason.Fractional Cover can be used to identify the point in time, and possibly the cause of each disturbance event. DescriptionIn this example, the Landsat 8 Fractional Cover product is used to assess how land cover (specifically bare soil, green vegetation and non-green vegetation) is changing over time.The worked example below takes users through the code required to* Create a time series data cube over a mine site.* Create graphs to identify rehabilitation trends and disturbance events.* Interpret the results.*** Getting started**To run this analysis**, run all the cells in the notebook, starting with the "Load packages and apps" cell. 
Load packages and appsThis notebook works via two functions, which are referred to as apps: `load_miningrehab_data` and `run_miningrehab_app`.The apps allow the majority of the analysis code to be stored in another file, making the notebook easy to use and run.To view the code behind the apps, open the [notebookapp_miningrehab.py](../Scripts/notebookapp_miningrehab.py) file.
###Code
%matplotlib inline
import sys
import datacube
sys.path.append("../Scripts")
from notebookapp_miningrehab import load_miningrehab_data
from notebookapp_miningrehab import run_miningrehab_app
###Output
/env/lib/python3.6/site-packages/datacube/storage/masking.py:4: DeprecationWarning: datacube.storage.masking has moved to datacube.utils.masking
category=DeprecationWarning)
###Markdown
Load the dataThe `load_miningrehab_data()` command performs several key steps:* Load Fractional Cover and Water Observations from Space (WOfS) data for the study area.* Match the datasets to only retain data with the same time stamps.* Mask areas that are classified as water using WOfS.* Resample the masked Fractional Cover to get monthly average values.* Return the masked data for analysis.The masked data is stored in the `dataset_fc` object.As the command runs, feedback will be provided below the cell.**Please be patient**.The load is complete when the cell status goes from `[*]` to `[number]`.
###Code
dataset_fc = load_miningrehab_data()
###Output
Loading Fractional Cover for Landsat 8
Loading WoFS for Landsat 8
###Markdown
Run the mining appThe `run_mining_app()` command launches an interactive map.Drawing polygons within the boundary (which represents the area covered by the loaded data) will result in plots of the average bare, green and non-green cover in that area.Draw polygons by clicking the &11039; symbol in the app.The app works by taking the loaded data `dataset_fc` as an argument. > **Note:** When drawing polygons, draw one over the mine and one over the forest nearby, then the fractional cover values can be compared on the produced plot.
###Code
run_miningrehab_app(dataset_fc)
###Output
_____no_output_____
###Markdown
Drawing conclusionsHere are some questions to think about:* Rehabilitation can be indicated by either a decrease in bare cover, or an increase in either green or non-green cover. Can you find any evidence that rehabilitation is occurring?* What differences are there between polygons drawn over the mine site and those drawn over the forest? What similarities are there? *** Additional information**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).**Last modified:** January 2020**Compatible datacube version:**
###Code
print(datacube.__version__)
###Output
1.8.0b7.dev35+g5023dada
###Markdown
TagsBrowse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
###Code
**Tags**: :index:`sandbox compatible`, :index:`NCI compatible`, :index:`landsat 8`, :index:`fractional cover`, :index:`WOfS`, :index:`real world`, :index:`mining`, :index:`time series`, :index:`interactive`, :index:`widgets`, :index:`no_testing`
###Output
_____no_output_____
###Markdown
Tracking rehabilitation of mines * [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser* **Compatibility:** Notebook currently compatible with both the `NCI` and `DEA Sandbox` environments* **Products used:** [ga_ls_fc_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls_fc_3), [ga_ls_wo_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls_wo_3) BackgroundLand rehabilitation is an important aspect of responsible mining.For example, The Department of Mines, Industry Regulation and Safety (DMIRS) maintain a Mining Rehabilitation Fund (MRF) that Western Australian mining operators contribute to.The fund is used to rehabilitate abandoned and legacy mines, and is underpinned by the Mining Rehabilitation Fund Act 2012.As part of the fund, tenement holders report ground disturbance, which can help DMIRS monitor how a mine's rehabilitation is going, as well as major disurbance events related to mining activity. At the moment, most mining organisations only review disturbance annually, often contracting out the service to third party surveying and ecological consulting agencies.While these providers generally provide excellent information, there are two main issues:* Annual visits give a very coarse view of how the mine is changing in time.* There is no way to validate or sanity check consultants reports without a site visit. Digital Earth Australia use caseRehabilitation and land disturbance can be monitored through satellite data by tracking the amount of vegetation and bare ground on the site compared with surrounding areas.A decrease in bare ground and increase in vegetation can be linked to positive rehabilitation.A slow increase or sharp spike in the amount of bare ground over a mining site may indicate increased disturbance, which is against the trend expected during rehabilitation efforts.This tracking can be achieved using the Fractional Cover data product from the Joint Remote Sensing Research Program, which is provided through DEA.Fractional Cover is derived from Landsat data, which has a revisit time of around two weeks for Australia, providing regular insight to a given mine's rehabilitation.This would allow companies to identify any disturbance events early in the year and take corrective action before the yearly reporting.It would also allow DMIRS to keep detailed records of how the mines they monitor are changing in time.Fractional Cover can also be used to validate the field reporting from surveying and ecological consultants before submitting reports.While reports from field surveys will provide more detail than most Earth Observation data products, such products provide the ability to provide context and validation of reports.For example, if the survey detects a disturbance, it may be hard to detect a reason.Fractional Cover can be used to identify the point in time, and possibly the cause of each disturbance event. DescriptionIn this example, the [DEA Fractional Cover](../DEA_datasets/DEA_Fractional_Cover.ipynb) product is used to assess how land cover (specifically bare soil, green vegetation and non-green vegetation) is changing over time.The worked example below takes users through the code required to* Create a time series data cube over a mine site.* Create graphs to identify rehabilitation trends and disturbance events.* Interpret the results.*** Getting started**To run this analysis**, run all the cells in the notebook, starting with the "Load packages and apps" cell. 
Load packages and appsThis notebook works via two functions, which are referred to as apps: `load_miningrehab_data` and `run_miningrehab_app`.The apps allow the majority of the analysis code to be stored in another file, making the notebook easy to use and run.To view the code behind the apps, open the [notebookapp_miningrehab.py](../Scripts/notebookapp_miningrehab.py) file.
###Code
%matplotlib inline
import sys
import datacube
sys.path.append("../Scripts")
from notebookapp_miningrehab import load_miningrehab_data
from notebookapp_miningrehab import run_miningrehab_app
###Output
/env/lib/python3.6/site-packages/geopandas/_compat.py:110: UserWarning: The Shapely GEOS version (3.7.2-CAPI-1.11.0 ) is incompatible with the GEOS version PyGEOS was compiled with (3.9.1-CAPI-1.14.2). Conversions between both will be slow.
shapely_geos_version, geos_capi_version_string
###Markdown
Load the dataThe `load_miningrehab_data()` command performs several key steps:* Load [DEA Fractional Cover (FC)](../DEA_datasets/DEA_Fractional_Cover.ipynb) and [DEA Water Observations (WO)](../DEA_datasets/DEA_Water_Observations.ipynb) data for the study area.* Match the datasets to only retain data with the same time stamps.* Mask areas that are classified as water using WOs.* Resample the masked FC to get monthly average values.* Return the masked data for analysis.The masked data is stored in the `dataset_fc` object.As the command runs, feedback will be provided below the cell.**Please be patient**.The load is complete when the cell status goes from `[*]` to `[number]`.
###Code
dataset_fc = load_miningrehab_data()
###Output
Loading DEA Fractional Cover
Loading DEA Water Observations
###Markdown
Run the mining appThe `run_mining_app()` command launches an interactive map.Drawing polygons within the boundary (which represents the area covered by the loaded data) will result in plots of the average bare, green and non-green cover in that area. Draw polygons by clicking the &11039; symbol in the app.The app works by taking the loaded data `dataset_fc` as an argument. > **Note:** When drawing polygons, draw one over the mine and one over the forest nearby, then the fractional cover values can be compared on the produced plot.
###Code
run_miningrehab_app(dataset_fc)
###Output
_____no_output_____
###Markdown
Drawing conclusionsHere are some questions to think about:* Rehabilitation can be indicated by either a decrease in bare cover, or an increase in either green or non-green cover. Can you find any evidence that rehabilitation is occurring?* What differences are there between polygons drawn over the mine site and those drawn over the forest? What similarities are there? *** Additional information**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).**Last modified:** July 2021**Compatible datacube version:**
###Code
print(datacube.__version__)
###Output
1.8.4.dev81+g80d466a2
###Markdown
TagsBrowse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
###Code
**Tags**: :index:`sandbox compatible`, :index:`NCI compatible`, :index:``, :index:`fractional cover`, :index:`water observations`, :index:`real world`, :index:`mining`, :index:`time series`, :index:`interactive`, :index:`widgets`, :index:`no_testing`
###Output
_____no_output_____
###Markdown
Tracking rehabilitation of mines * [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser* **Compatibility:** Notebook currently compatible with both the `NCI` and `DEA Sandbox` environments* **Products used:** [ga_ls_fc_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls_fc_3), [ga_ls_wo_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls_wo_3) BackgroundLand rehabilitation is an important aspect of responsible mining.For example, The Department of Mines, Industry Regulation and Safety (DMIRS) maintain a Mining Rehabilitation Fund (MRF) that Western Australian mining operators contribute to.The fund is used to rehabilitate abandoned and legacy mines, and is underpinned by the Mining Rehabilitation Fund Act 2012.As part of the fund, tenement holders report ground disturbance, which can help DMIRS monitor how a mine's rehabilitation is going, as well as major disurbance events related to mining activity. At the moment, most mining organisations only review disturbance annually, often contracting out the service to third party surveying and ecological consulting agencies.While these providers generally provide excellent information, there are two main issues:* Annual visits give a very coarse view of how the mine is changing in time.* There is no way to validate or sanity check consultants reports without a site visit. Digital Earth Australia use caseRehabilitation and land disturbance can be monitored through satellite data by tracking the amount of vegetation and bare ground on the site compared with surrounding areas.A decrease in bare ground and increase in vegetation can be linked to positive rehabilitation.A slow increase or sharp spike in the amount of bare ground over a mining site may indicate increased disturbance, which is against the trend expected during rehabilitation efforts.This tracking can be achieved using the Fractional Cover data product from the Joint Remote Sensing Research Program, which is provided through DEA.Fractional Cover is derived from Landsat data, which has a revisit time of around two weeks for Australia, providing regular insight to a given mine's rehabilitation.This would allow companies to identify any disturbance events early in the year and take corrective action before the yearly reporting.It would also allow DMIRS to keep detailed records of how the mines they monitor are changing in time.Fractional Cover can also be used to validate the field reporting from surveying and ecological consultants before submitting reports.While reports from field surveys will provide more detail than most Earth Observation data products, such products provide the ability to provide context and validation of reports.For example, if the survey detects a disturbance, it may be hard to detect a reason.Fractional Cover can be used to identify the point in time, and possibly the cause of each disturbance event. DescriptionIn this example, the [DEA Fractional Cover](../DEA_datasets/DEA_Fractional_Cover.ipynb) product is used to assess how land cover (specifically bare soil, green vegetation and non-green vegetation) is changing over time.The worked example below takes users through the code required to* Create a time series data cube over a mine site.* Create graphs to identify rehabilitation trends and disturbance events.* Interpret the results.*** Getting started**To run this analysis**, run all the cells in the notebook, starting with the "Load packages and apps" cell. 
Load packages and appsThis notebook works via two functions, which are referred to as apps: `load_miningrehab_data` and `run_miningrehab_app`.The apps allow the majority of the analysis code to be stored in another file, making the notebook easy to use and run.To view the code behind the apps, open the [notebookapp_miningrehab.py](../Scripts/notebookapp_miningrehab.py) file.
###Code
%matplotlib inline
import datacube
import sys
sys.path.insert(1, '../Tools/')
from dea_tools.app import miningrehab
###Output
_____no_output_____
###Markdown
Load the dataThe `load_miningrehab_data()` command performs several key steps:* Load [DEA Fractional Cover (FC)](../DEA_datasets/DEA_Fractional_Cover.ipynb) and [DEA Water Observations (WO)](../DEA_datasets/DEA_Water_Observations.ipynb) data for the study area.* Match the datasets to only retain data with the same time stamps.* Mask areas that are classified as water using WOs.* Resample the masked FC to get monthly average values.* Return the masked data for analysis.The masked data is stored in the `dataset_fc` object.As the command runs, feedback will be provided below the cell.**Please be patient**.The load is complete when the cell status goes from `[*]` to `[number]`.
###Code
dataset_fc = miningrehab.load_miningrehab_data()
###Output
Loading DEA Fractional Cover
Loading DEA Water Observations
###Markdown
Run the mining appThe `run_mining_app()` command launches an interactive map.Drawing polygons within the boundary (which represents the area covered by the loaded data) will result in plots of the average bare, green and non-green cover in that area. Draw polygons by clicking the &11039; symbol in the app.The app works by taking the loaded data `dataset_fc` as an argument. > **Note:** When drawing polygons, draw one over the mine and one over the forest nearby, then the fractional cover values can be compared on the produced plot.
###Code
miningrehab.run_miningrehab_app(dataset_fc)
###Output
_____no_output_____
###Markdown
Drawing conclusionsHere are some questions to think about:* Rehabilitation can be indicated by either a decrease in bare cover, or an increase in either green or non-green cover. Can you find any evidence that rehabilitation is occurring?* What differences are there between polygons drawn over the mine site and those drawn over the forest? What similarities are there? *** Additional information**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).**Last modified:** January 2022**Compatible datacube version:**
###Code
print(datacube.__version__)
###Output
1.8.6
###Markdown
TagsBrowse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
###Code
**Tags**: :index:`sandbox compatible`, :index:`NCI compatible`, :index:``, :index:`fractional cover`, :index:`water observations`, :index:`real world`, :index:`mining`, :index:`time series`, :index:`interactive`, :index:`widgets`, :index:`no_testing`
###Output
_____no_output_____ |
Paper/Mel Explain.ipynb | ###Markdown
Analyzing Mel
###Code
for i in mel_fb:
print(np.nonzero(i))
mel_frequencies(n_mels+2, fmax=fmax, htk=htk)
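# Added sketch (not part of the original notebook): mel_fb and mel_frequencies are undefined
# in this cell and look like librosa objects. A self-contained reconstruction under that
# assumption; the parameter values below are guesses for illustration only.
import librosa
sr, n_fft, n_mels, fmax, htk = 22050, 2048, 128, 8000, False
mel_fb_demo = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels, fmax=fmax, htk=htk)
print(mel_fb_demo.shape)  # (n_mels, 1 + n_fft // 2): one triangular filter per row
# each row is non-zero only over its triangular band, which is what the np.nonzero loop shows
print(librosa.mel_frequencies(n_mels + 2, fmax=fmax, htk=htk)[:5])  # band edge frequencies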
###Output
_____no_output_____ |
MLP_final_test/MLP_from_data.ipynb | ###Markdown
This code shows an example of feeding data imported from a modified .mat file into an artificial neural network and training it
###Code
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from sklearn.metrics import r2_score # in order to test the results
from sklearn.grid_search import GridSearchCV # looking for parameters
import pickle #saving to file
###Output
_____no_output_____
###Markdown
Importing and preprocessing data
###Code
#this function reads the file
def read_data(archive, rows, columns):
data = open(archive, 'r')
mylist = data.read().split()
data.close()
myarray = np.array(mylist).reshape(( rows, columns)).astype(float)
return myarray
data = read_data('../get_data_example/set.txt',72, 12)
X = data[:, [0, 2, 4, 6, 7, 8, 9, 10, 11]]
#print pre_X.shape, data.shape
y = data[:,1]
#print y.shape
#getting the time vector for plotting purposes
time_stamp = np.zeros(data.shape[0])
for i in xrange(data.shape[0]):
time_stamp[i] = i*(1.0/60.0)
#print X.shape, time_stamp.shape
X = np.hstack((X, time_stamp.reshape((X.shape[0], 1))))
print X.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
t_test = X_test[:,-1]
t_train = X_train[:, -1]
X_train_std = preprocessing.scale(X_train[:,0:-1])
X_test_std = preprocessing.scale(X_test[:, 0:-1])
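# Added note (not part of the original notebook): preprocessing.scale() above standardises the
# train and test sets independently, so each set is scaled by its own statistics. A common
# alternative is to fit the scaler on the training data only and reuse it for the test data:
# scaler = preprocessing.StandardScaler().fit(X_train[:, 0:-1])
# X_train_std = scaler.transform(X_train[:, 0:-1])
# X_test_std = scaler.transform(X_test[:, 0:-1])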
###Output
(72, 10)
###Markdown
Sorting out data (for plotting purposes)
###Code
#Here we sort the data according to one of its columns (the time stamp in column 0), for plotting
test_sorted = np.hstack(
(t_test.reshape(X_test_std.shape[0], 1), X_test_std, y_test.reshape(X_test_std.shape[0], 1)))
test_sorted = test_sorted[np.argsort(test_sorted[:,0])] #modified
train_sorted = np.hstack((t_train.reshape(t_train.shape[0], 1), y_train.reshape(y_train.shape[0], 1) ))
train_sorted = train_sorted[np.argsort(train_sorted[:,0])]
###Output
_____no_output_____
###Markdown
Artificial Neural Network (Gridsearch, DO NOT RUN)
###Code
#Grid search, random state =0: same beginning for all
alpha1 = np.linspace(0.001,0.9, 9).tolist()
momentum1 = np.linspace(0.3,0.9, 9).tolist()
params_dist = {"hidden_layer_sizes":[(20, 40), (15, 40), (10,15), (15, 15, 10), (15, 10), (15, 5)],
"activation":['tanh','logistic'],"algorithm":['sgd', 'l-bfgs'], "alpha":alpha1,
"learning_rate":['constant'],"max_iter":[500], "random_state":[0],
"verbose": [False], "warm_start":[False], "momentum":momentum1}
grid = GridSearchCV(MLPRegressor(), param_grid=params_dist)
grid.fit(X_train_std, y_train)
print "Best score:", grid.best_score_
print "Best parameter's set found:\n"
print grid.best_params_
reg = MLPRegressor(warm_start = grid.best_params_['warm_start'], verbose= grid.best_params_['verbose'],
algorithm= grid.best_params_['algorithm'],hidden_layer_sizes=grid.best_params_['hidden_layer_sizes'],
activation= grid.best_params_['activation'], max_iter= grid.best_params_['max_iter'],
random_state= None,alpha= grid.best_params_['alpha'], learning_rate= grid.best_params_['learning_rate'],
momentum= grid.best_params_['momentum'])
reg.fit(X_train_std, y_train)
###Output
_____no_output_____
###Markdown
Plotting
###Code
%matplotlib inline
results = reg.predict(test_sorted[:, 1:-1])
plt.plot(test_sorted[:, 0], results, c='r') # ( sorted time, results)
plt.plot(train_sorted[:, 0], train_sorted[:,1], c='b' ) #expected
plt.scatter(time_stamp, y, c='k')
plt.xlabel("Time(s)")
plt.ylabel("Angular velocities(rad/s)")
red_patch = mpatches.Patch(color='red', label='Predicted')
blue_patch = mpatches.Patch(color='blue', label ='Expected')
black_patch = mpatches.Patch(color='black', label ='Original')
plt.legend(handles=[red_patch, blue_patch, black_patch])
plt.title("MLP results vs Expected values")
plt.show()
print "Accuracy:", reg.score(X_test_std, y_test)
#print "Accuracy test 2", r2_score(test_sorted[:,-1], results)
###Output
_____no_output_____
###Markdown
Saving ANN to file through pickle (and using it later)
###Code
#This prevents the user from losing a previous important result
def save_it(ans):
if ans == "yes":
f = open('data.ann', 'w')
mem = pickle.dumps(grid)
f.write(mem)
f.close()
else:
print "Nothing to save"
save_it("no")
#Loading a successful ANN
f = open('data.ann', 'r')
nw = f.read()
saved_ann = pickle.loads(nw)
print "Just the accuracy:", saved_ann.score(X_test_std, y_test), "\n"
print "Parameters:"
print saved_ann.get_params(), "\n"
print "Loss:", saved_ann.loss_
print "Total of layers:", saved_ann.n_layers_
print "Total of iterations:", saved_ann.n_iter_
#plot predictions from the previously saved ANN
%matplotlib inline
results = saved_ann.predict(test_sorted[:, 1:-1])
plt.plot(test_sorted[:, 0], results, c='r') # ( sorted time, results)
plt.plot(train_sorted[:, 0], train_sorted[:,1], c='b' ) #expected
plt.scatter(time_stamp, y, c='k')
plt.xlabel("Time(s)")
plt.ylabel("Angular velocities(rad/s)")
red_patch = mpatches.Patch(color='red', label='Predicted')
blue_patch = mpatches.Patch(color='blue', label ='Expected')
black_patch = mpatches.Patch(color='black', label ='Original')
plt.legend(handles=[red_patch, blue_patch, black_patch])
plt.title("MLP results vs Expected values (Loaded from file)")
plt.show()
print " Accuracy:", saved_ann.score(X_test_std, y_test)
plt.plot(time_stamp, y,'--.', c='r')
plt.xlabel("Time(s)")
plt.ylabel("Angular velocities(rad/s)")
plt.title("Resuts from patient:\n"
" Angular velocities for the right knee")
plt.show()
#print "Accuracy test 2", r2_score(test_sorted[:,-1], results)
print max(y), saved_ann.predict(X_train_std[y_train.tolist().index(max(y_train)),:].reshape((1,9)))
###Output
3.67175193015 [ 3.68474801]
|